diff --git a/docs/docs/guides/04-using-models/02-import-manually.mdx b/docs/docs/guides/04-using-models/02-import-manually.mdx
index 2943db63b..03fe12cc9 100644
--- a/docs/docs/guides/04-using-models/02-import-manually.mdx
+++ b/docs/docs/guides/04-using-models/02-import-manually.mdx
@@ -33,7 +33,6 @@ In this section, we will show you how to import a GGUF model from [HuggingFace](
 
 Starting from version 0.4.7, Jan supports importing models using an absolute filepath, so you can import models from any location on your computer. Please check the [import models using absolute filepath](../import-models-using-absolute-filepath) guide for more information.
 
-
 ## Manually Importing a Downloaded Model (nightly versions and v0.4.4+)
 
 ### 1. Create a Model Folder
diff --git a/docs/docs/guides/04-using-models/03-import-models-using-absolute-filepath.mdx b/docs/docs/guides/04-using-models/03-import-models-using-absolute-filepath.mdx
index 6884d9eab..a9e77c500 100644
--- a/docs/docs/guides/04-using-models/03-import-models-using-absolute-filepath.mdx
+++ b/docs/docs/guides/04-using-models/03-import-models-using-absolute-filepath.mdx
@@ -17,5 +17,61 @@ keywords:
   ]
 ---
 
-In this guide, we will walk you through the process of importing a model using an absolute filepath in Jan.
+In this guide, we will walk you through the process of importing a model using an absolute filepath in Jan, using TinyLlama as an example.
+
+### 1. Get the Absolute Filepath of the Model
+
+First, download the model file from Hugging Face. Then, copy the absolute filepath of the model file.
+
+### 2. Configure the Model JSON
+
+Navigate to the `~/jan/models` folder. Create a folder named `<modelname>`, for example, `tinyllama`, and create a `model.json` file inside it with the following configurations:
+
+- Ensure the filename is `model.json`.
+- Ensure the `id` property matches the folder name you created.
+- Ensure the `url` property is set to the absolute filepath of the model file instead of a direct download link ending in `.gguf`.
+- Ensure the `engine` property is set to `nitro`.
+
+```json
+{
+  "sources": [
+    {
+      "filename": "tinyllama.gguf",
+      // highlight-next-line
+      "url": "<absolute-filepath-of-the-model>"
+    }
+  ],
+  "id": "tinyllama-1.1b",
+  "object": "model",
+  "name": "(Absolute Path) TinyLlama Chat 1.1B Q4",
+  "version": "1.0",
+  "description": "TinyLlama is a tiny model with only 1.1B parameters. It's a good model for less powerful computers.",
+  "format": "gguf",
+  "settings": {
+    "ctx_len": 4096,
+    "prompt_template": "<|system|>\n{system_message}<|user|>\n{prompt}<|assistant|>",
+    "llama_model_path": "tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf"
+  },
+  "parameters": {
+    "temperature": 0.7,
+    "top_p": 0.95,
+    "stream": true,
+    "max_tokens": 2048,
+    "stop": [],
+    "frequency_penalty": 0,
+    "presence_penalty": 0
+  },
+  "metadata": {
+    "author": "TinyLlama",
+    "tags": ["Tiny", "Foundation Model"],
+    "size": 669000000
+  },
+  "engine": "nitro"
+}
+```
+
+### 3. Start the Model
+
+Restart Jan and navigate to the Hub. Locate your model and click the Use button.
+
+![Demo](assets/03-demo-absolute-filepath.gif)
\ No newline at end of file
diff --git a/docs/docs/guides/04-using-models/04-integrate-with-remote-server.mdx b/docs/docs/guides/04-using-models/04-integrate-with-remote-server.mdx
index f0db1bd55..3632a40b0 100644
--- a/docs/docs/guides/04-using-models/04-integrate-with-remote-server.mdx
+++ b/docs/docs/guides/04-using-models/04-integrate-with-remote-server.mdx
@@ -88,7 +88,7 @@ You can find your API keys in the [OpenAI Platform](https://platform.openai.com/
 
 Restart Jan and navigate to the Hub. Then, select your configured model and start the model.
 
-![image-01](assets/03-openai-platform-configuration.png)
+![image-01](assets/04-openai-platform-configuration.png)
 
 ## Engines with OAI Compatible Configuration
 
@@ -159,7 +159,7 @@ Navigate to the `~/jan/models` folder. Create a folder named `mistral-ins-7b-q4`
 
 Restart Jan and navigate to the Hub. Locate your model and click the Use button.
 
-![image-02](assets/03-oai-compatible-configuration.png)
+![image-02](assets/04-oai-compatible-configuration.png)
 
 ## Assistance and Support
 
diff --git a/docs/docs/guides/04-using-models/assets/03-demo-absolute-filepath.gif b/docs/docs/guides/04-using-models/assets/03-demo-absolute-filepath.gif
new file mode 100644
index 000000000..24dcc251a
Binary files /dev/null and b/docs/docs/guides/04-using-models/assets/03-demo-absolute-filepath.gif differ
diff --git a/docs/docs/guides/04-using-models/assets/03-oai-compatible-configuration.png b/docs/docs/guides/04-using-models/assets/04-oai-compatible-configuration.png
similarity index 100%
rename from docs/docs/guides/04-using-models/assets/03-oai-compatible-configuration.png
rename to docs/docs/guides/04-using-models/assets/04-oai-compatible-configuration.png
diff --git a/docs/docs/guides/04-using-models/assets/03-openai-platform-configuration.png b/docs/docs/guides/04-using-models/assets/04-openai-platform-configuration.png
similarity index 100%
rename from docs/docs/guides/04-using-models/assets/03-openai-platform-configuration.png
rename to docs/docs/guides/04-using-models/assets/04-openai-platform-configuration.png
diff --git a/docs/docs/guides/05-using-server/01-start-server.md b/docs/docs/guides/05-using-server/01-start-server.md
index c8e5cdba3..2433fd80a 100644
--- a/docs/docs/guides/05-using-server/01-start-server.md
+++ b/docs/docs/guides/05-using-server/01-start-server.md
@@ -1,6 +1,6 @@
 ---
 title: Start Local Server
-slug: /guides/using-server/server
+slug: /guides/using-server/start-server
 description: How to run Jan's built-in API server.
 keywords:
   [
diff --git a/docs/docs/guides/07-integrations/01-integrate-continue.mdx b/docs/docs/guides/07-integrations/01-integrate-continue.mdx
index b3722874f..1fa0397e2 100644
--- a/docs/docs/guides/07-integrations/01-integrate-continue.mdx
+++ b/docs/docs/guides/07-integrations/01-integrate-continue.mdx
@@ -35,7 +35,7 @@ To get started with Continue in VS Code, please follow this [guide to install Co
 
 ### 2. Enable Jan API Server
 
-To configure the Continue to use Jan's Local Server, you need to enable Jan API Server with your preferred model, please follow this [guide to enable Jan API Server](../05-using-server/01-server.md)
+To configure Continue to use Jan's Local Server, you need to enable the Jan API Server with your preferred model. Please follow this [guide to enable the Jan API Server](/guides/using-server/start-server).
 
 ### 3. Configure Continue to Use Jan's Local Server
diff --git a/docs/docs/guides/07-integrations/05-integrate-lmstudio.mdx b/docs/docs/guides/07-integrations/05-integrate-lmstudio.mdx
index 163473eec..040b72402 100644
--- a/docs/docs/guides/07-integrations/05-integrate-lmstudio.mdx
+++ b/docs/docs/guides/07-integrations/05-integrate-lmstudio.mdx
@@ -90,7 +90,7 @@ Restart Jan and navigate to the Hub. Locate your model and click the Use button.
 
 ![LM Studio Integration Demo](assets/05-lmstudio-integration-demo.gif)
 
-## Steps to Migrate Your Downloaded Model from LM Studio to Jan (Version 0.4.6 and older)
+## Steps to Migrate Your Downloaded Model from LM Studio to Jan (version 0.4.6 and older)
 
 ### 1. Migrate Your Downloaded Model
 
@@ -107,3 +107,67 @@ Ensure the folder name property is the same as the model name of `.gguf` filenam
 Restart Jan and navigate to the Hub. Jan will automatically detect the model and display it in the Hub. Locate your model and click the Use button for trying out the migrating model.
 
 ![Demo](assets/05-demo-migrating-model.gif)
+
+## Steps to Point Jan to a Model Downloaded by LM Studio (version 0.4.7+)
+
+Starting from version 0.4.7, Jan supports importing models using an absolute filepath, so you can use a model directly from the LM Studio folder.
+
+### 1. Reveal the Model's Absolute Path
+
+Navigate to `My Models` in the LM Studio application and reveal the model folder. Then, copy the absolute path of your model file.
+
+![Reveal-model-folder-lmstudio](assets/05-reveal-model-folder-lmstudio.gif)
+
+### 2. Configure the Model JSON
+
+Navigate to the `~/jan/models` folder. Create a folder named `<modelname>`, for example, `phi-2.Q4_K_S`, and create a `model.json` file inside it with the following configurations:
+
+- Ensure the filename is `model.json`.
+- Ensure the `id` property matches the folder name you created.
+- Ensure the `url` property is set to the absolute filepath of the model file instead of a direct download link ending in `.gguf`. In this example, the absolute filepath is `/Users/<username>/.cache/lm-studio/models/TheBloke/phi-2-GGUF/phi-2.Q4_K_S.gguf`.
+- Ensure the `engine` property is set to `nitro`.
+
+```json
+{
+  "object": "model",
+  "version": 1,
+  "format": "gguf",
+  "sources": [
+    {
+      "filename": "phi-2.Q4_K_S.gguf",
+      "url": "<absolute-filepath-of-the-model>"
+    }
+  ],
+  "id": "phi-2.Q4_K_S",
+  "name": "phi-2.Q4_K_S",
+  "created": 1708308111506,
+  "description": "phi-2.Q4_K_S - user self-imported model",
+  "settings": {
+    "ctx_len": 4096,
+    "embedding": false,
+    "prompt_template": "{system_message}\n### Instruction: {prompt}\n### Response:",
+    "llama_model_path": "phi-2.Q4_K_S.gguf"
+  },
+  "parameters": {
+    "temperature": 0.7,
+    "top_p": 0.95,
+    "stream": true,
+    "max_tokens": 2048,
+    "stop": [""],
+    "frequency_penalty": 0,
+    "presence_penalty": 0
+  },
+  "metadata": {
+    "size": 1615568736,
+    "author": "User",
+    "tags": []
+  },
+  "engine": "nitro"
+}
+```
+
+### 3. Start the Model
+
+Restart Jan and navigate to the Hub. Jan will automatically detect the model and display it in the Hub. Locate your model and click the Use button to try out the imported model.
+
+![Demo](assets/05-demo-pointing-model.gif)
diff --git a/docs/docs/guides/07-integrations/assets/05-demo-pointing-model.gif b/docs/docs/guides/07-integrations/assets/05-demo-pointing-model.gif
new file mode 100644
index 000000000..137fb955a
Binary files /dev/null and b/docs/docs/guides/07-integrations/assets/05-demo-pointing-model.gif differ
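
As a companion to the `model.json` recipes added above, here is a small Python sketch for producing the two values the guides ask you to look up by hand: the absolute filepath for the `url` field and the byte count for the `metadata.size` field. The filename below is hypothetical; point it at your own `.gguf` file.

```python
# Print the absolute filepath and byte size of a downloaded .gguf file,
# for the "url" and "metadata.size" fields of model.json.
# The path below is hypothetical; replace it with your own model file.
from pathlib import Path

model_file = Path("~/Downloads/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf").expanduser()
print(model_file.resolve())       # absolute filepath for the "url" field
print(model_file.stat().st_size)  # size in bytes for "metadata.size"
```

Once a model has been imported and started, a quick request against Jan's OpenAI-compatible local API server confirms the import end to end. This is a minimal sketch, assuming the API server is enabled on its default `localhost:1337` and that `model` matches the `id` in your `model.json` (here, the `tinyllama-1.1b` example from the first guide); adjust both for your setup.

```python
# Minimal smoke test against Jan's OpenAI-compatible chat completions endpoint.
# Assumes the local API server is running on its default port (1337).
import requests

response = requests.post(
    "http://localhost:1337/v1/chat/completions",
    json={
        "model": "tinyllama-1.1b",  # must match the "id" in model.json
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 64,
        "stream": False,
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

If the request returns a completion, the `model.json` configuration is wired up correctly.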