From 50fc20858ddda17d185cf697339ec3401a348ded Mon Sep 17 00:00:00 2001
From: 0xSage
Date: Fri, 24 Nov 2023 16:35:25 +0800
Subject: [PATCH] fix: syntax highlighting

---
 docs/docs/specs/data-structures.md |  60 +++-------
 docs/docs/specs/files.md           |  10 +-
 docs/docs/specs/messages.md        | 185 ++---------------------------
 docs/docs/specs/models.md          |  19 ++-
 docs/docs/specs/threads.md         | 174 +--------------------------
 5 files changed, 45 insertions(+), 403 deletions(-)

diff --git a/docs/docs/specs/data-structures.md b/docs/docs/specs/data-structures.md
index 3852a3823..db854a8a6 100644
--- a/docs/docs/specs/data-structures.md
+++ b/docs/docs/specs/data-structures.md
@@ -8,66 +8,36 @@ This page is still under construction, and should be read as a scratchpad
 :::
 
-
-```sh
-janroot/
-  assistants/
-    assistant-a/
-      assistant.json
-      src/
-        index.ts
-  threads/
-    thread-a/
-    thread-b
-  models/
-    model-a/
-      model.json
-```
-
 Jan use the local filesystem for data persistence, similar to VSCode. This allows for composability and tinkerability.
 
-```sh=
-/janroot          # Jan's root folder (e.g. ~/jan)
-  /models         # For raw AI models
-  /threads        # For conversation history
-  /assistants     # For AI assistants' configs, knowledge, etc.
+```yaml
+janroot/          # Jan's root folder (e.g. ~/jan)
+  models/         # For raw AI models
+  threads/        # For conversation history
+  assistants/     # For AI assistants' configs, knowledge, etc.
 ```
 
-```sh=
+```yaml
 /models
   /modelA
     model.json          # Default model settings
     llama-7b-q4.gguf    # Model binaries
-    llama-7b-q5.gguf    # Include different quantizations
 /threads
-  /jan-unixstamp-salt
-    model.json          # Overrides assistant/model-level model settings
+  /jan-unixstamp
     thread.json         # thread metadata (e.g. subject)
-    messages.json       # messages
-    content.json        # What is this?
-    files/              # Future for RAG
+    messages.jsonl      # messages
+    files/              # RAG
 /assistants
-  /jan
+  /jan                  # A default assistant that can use all models
    assistant.json      # Assistant configs (see below)
-
-    # For any custom code
-    package.json        # Import npm modules
-                        # e.g. Langchain, Llamaindex
-    /src                # Supporting files (needs better name)
+    package.json        # Import npm modules, e.g. Langchain, Llamaindex
+    /src                # For custom code
       index.js          # Entrypoint
-      process.js        # For electron IPC processes (needs better name)
-
-    # `/threads` at root level
-    # `/models` at root level
-  /shakespeare
+      # `/threads` at root level
+      # `/models` at root level
+  /shakespeare          # Example of a custom assistant
     assistant.json
-    model.json          # Creator chooses model and settings
     package.json
-    /src
-      index.js
-      process.js
-    /threads            # Assistants remember conversations in the future
     /models             # Users can upload custom models
-      /finetuned-model
 ```
diff --git a/docs/docs/specs/files.md b/docs/docs/specs/files.md
index 0775f779d..1aea59cfa 100644
--- a/docs/docs/specs/files.md
+++ b/docs/docs/specs/files.md
@@ -4,7 +4,7 @@ title: "Files"
 
 :::warning
 
-Draft Specification: functionality has not been implemented yet. 
+Draft Specification: functionality has not been implemented yet.
 
 :::
 
@@ -18,7 +18,7 @@ Files can be used by `threads`, `assistants` and `fine-tuning`
 - Note: OAI's struct doesn't seem very well designed
 - `files.json`
 
-```json
+```js
 {
   // Public properties (OpenAI Compatible: https://platform.openai.com/docs/api-reference/files/object)
   "id": "file-BK7bzQj3FfZFXr7DbL6xJwfo",
@@ -31,19 +31,25 @@ Files can be used by `threads`, `assistants` and `fine-tuning`
 ```
 
 ## File API
+
 ### List Files
+
 > OpenAI Equivalent: https://platform.openai.com/docs/api-reference/files/list
 
 ### Upload file
+
 > OpenAI Equivalent: https://platform.openai.com/docs/api-reference/files/create
 
 ### Delete file
+
 > OpenAI Equivalent: https://platform.openai.com/docs/api-reference/files/delete
 
 ### Retrieve file
+
 > OpenAI Equivalent: https://platform.openai.com/docs/api-reference/files/retrieve
 
 ### Retrieve file content
+
 > OpenAI Equivalent: https://platform.openai.com/docs/api-reference/files/retrieve-contents
 
 ## Files Filesystem
diff --git a/docs/docs/specs/messages.md b/docs/docs/specs/messages.md
index 049e190c6..07d4f08a8 100644
--- a/docs/docs/specs/messages.md
+++ b/docs/docs/specs/messages.md
@@ -19,22 +19,19 @@ This is currently under development.
 
 Messages are saved in the `/threads/{thread_id}` folder in `messages.jsonl` files
 
-```sh
+```yaml
 jan/
   threads/
     assistant_name_unix_timestamp/
-      ...
-      messages.jsonl
-    jan_2341243134/
-      ...
-      messages.jsonl
+      thread.json       # Thread metadata
+      messages.jsonl    # Messages are stored in jsonl format
 ```
 
 ## `message.jsonl`
 
 Individual messages are saved in `jsonl` format for indexing purposes.
 
-```json
+```js
 {...message_2}
 {...message_1}
 {...message_0}
@@ -44,13 +41,13 @@ Individual messages are saved in `jsonl` format for indexing purposes.
 
 Here's a standard example `message` sent from a user.
 
-```json
+```js
 "id": "0",                    // Sequential or UUID
 "object": "thread.message",   // Defaults to "thread.message"
 "created_at": 1698983503,
 "thread_id": "thread_asdf",   // Defaults to parent thread
 "assistant_id": "jan",        // Defaults to parent thread
-"role": "user",               // From either "user" or "assistant" 
+"role": "user",               // From either "user" or "assistant"
 "content": [
   {
     "type": "text",
@@ -61,13 +58,11 @@ Here's a standard example `message` sent from a user.
   }
 ],
 "metadata": {},               // Defaults to {}
-// "run_id": "...",           // Rather than `run` id abstraction
-// "file_ids": [],
 ```
 
 Here's an example `message` response from an assistant.
 
-```json
+```js
 "id": "0",                    // Sequential or UUID
 "object": "thread.message",   // Defaults to "thread.message"
 "created_at": 1698983503,
@@ -84,9 +79,7 @@ Here's an example `message` response from an assistant.
   }
 ],
 "metadata": {},               // Defaults to {}
-// "run_id": "...",           // KIV
-// "file_ids": [],            // KIV
-// "usage": {}                // KIV: saving chat completion properties https://platform.openai.com/docs/api-reference/chat/object
+"usage": {}                   // Save chat completion properties https://platform.openai.com/docs/api-reference/chat/object
 ```
 
 ## API Reference
@@ -94,165 +87,3 @@
 Jan's `messages` API is compatible with [OpenAI's Messages API](https://platform.openai.com/docs/api-reference/messages), with additional methods for managing messages locally.
 See [Jan Messages API](https://jan.ai/api-reference#tag/Messages)
-
-
-
diff --git a/docs/docs/specs/models.md b/docs/docs/specs/models.md
index ef2ca7f95..ceb6b7cf0 100644
--- a/docs/docs/specs/models.md
+++ b/docs/docs/specs/models.md
@@ -22,7 +22,7 @@ In Jan, models are primary entities with the following capabilities:
 - Models are organized by individual folders, each containing the binaries and configurations needed to run the model. This makes for easy packaging and sharing.
 - Model folder names are unique and used as `model_id` default values.
 
-```bash
+```yaml
 jan/                      # Jan root folder
   models/
     llama2-70b-q4_k_m/    # Example: standard GGUF model
@@ -52,9 +52,9 @@ Here's a standard example `model.json` for a GGUF model.
 
 - `source_url`: https://huggingface.co/TheBloke/zephyr-7B-beta-GGUF/.
 
-```json
+```js
 "id": "zephyr-7b"         // Defaults to foldername
-"object": "model",        // Defaults to "model" 
+"object": "model",        // Defaults to "model"
 "source_url": "https://huggingface.co/TheBloke/zephyr-7B-beta-GGUF/blob/main/zephyr-7b-beta.Q4_K_M.gguf",
 "name": "Zephyr 7B"       // Defaults to foldername
 "owned_by": "you"         // Defaults to you
@@ -63,24 +63,21 @@ Here's a standard example `model.json` for a GGUF model.
 "description": ""
 "state": enum[null, "downloading", "ready", "starting", "stopping", ...]
 "format": "ggufv3",       // Defaults to "ggufv3"
-"settings": {             // Models are initialized with these settings
+"settings": {             // Models are initialized with settings
   "ctx_len": "2048",
   "ngl": "100",
   "embedding": "true",
   "n_parallel": "4",
-  // KIV: "pre_prompt": "A chat between a curious user and an artificial intelligence",
-  // KIV:"user_prompt": "USER: ",
-  // KIV: "ai_prompt": "ASSISTANT: "
 }
-"parameters": {           // Models are called with these parameters
+"parameters": {           // Models are called parameters
   "temperature": "0.7",
   "token_limit": "2048",
   "top_k": "0",
   "top_p": "1",
   "stream": "true"
 },
-"metadata": {}            // Defaults to {}
-"assets": [               // Filepaths to model binaries; Defaults to current dir
+"metadata": {}            // Defaults to {}
+"assets": [               // Defaults to current dir
   "file://.../zephyr-7b-q4_k_m.bin",
 ]
 ```
@@ -103,4 +100,4 @@ You can import a model by dragging the model binary or gguf file into the `/mode
 
 - Jan automatically generates a corresponding `model.json` file based on the binary filename.
 - Jan automatically organizes it into its own `/models/model-id` folder.
-- Jan automatically populates the `model.json` properties, which you can subsequently modify.
\ No newline at end of file
+- Jan automatically populates the `model.json` properties, which you can subsequently modify.
diff --git a/docs/docs/specs/threads.md b/docs/docs/specs/threads.md
index 3c9b17286..44253b21c 100644
--- a/docs/docs/specs/threads.md
+++ b/docs/docs/specs/threads.md
@@ -23,13 +23,10 @@ This is currently under development.
 
 - Thread folders follow the naming: `assistant_id` + `thread_created_at`.
 - Thread folders also contain `messages.jsonl` files. See [messages](/specs/messages).
 
-```sh
-jan/
+```yaml
+janroot/
   threads/
-    assistant_name_unix_timestamp/
-      thread.json
-      messages.jsonl
-    jan_2341243134/
+    assistant_name_unix_timestamp/    # Thread `ID`
       thread.json
 ```
@@ -43,14 +40,15 @@ jan/
 
 Here's a standard example `thread.json` for a conversation between the user and the default Jan assistant.
 
-```json
+```js
 "id": "thread_....",              // Defaults to foldername
 "object": "thread",               // Defaults to "thread"
 "title": "funny physics joke",    // Defaults to ""
 "assistants": [
   {
     "assistant_id": "jan",        // Defaults to "jan"
-    "model": {                    // Defaults to 1 currently active model (can be changed before thread is begun)
+    "model": {                    // Defaults to the currently active model (can be changed before thread is begun)
+      "id": "...",
       "settings": {},             // Defaults to and overrides assistant.json's "settings" (and if none, then model.json "settings")
       "parameters": {},           // Defaults to and overrides assistant.json's "parameters" (and if none, then model.json "parameters")
     }
@@ -65,163 +63,3 @@ Here's a standard example `thread.json` for a conversation between the user and
 Jan's Threads API is compatible with [OpenAI's Threads API](https://platform.openai.com/docs/api-reference/threads), with additional methods for managing threads locally.
 
 See [Jan Threads API](https://jan.ai/api-reference#tag/Threads)
-
-
- Can achieve this goal by calling `Modify Thread` API
-
-#### `GET v1/threads/{thread_id}/assistants`
-
--> Can achieve this goal by calling `Get Thread` API
-
-#### `POST v1/threads/{thread_id}/assistants/{assistant_id}`
-
--> Can achieve this goal by calling `Modify Assistant` API with `thread.assistant[]`
-
-### List `Thread.Messages`
-
--> Can achieve this goal by calling `Get Thread` API
-->
\ No newline at end of file