From c8c9ec3064a3d8a9987a61358ae77a6e2ec948eb Mon Sep 17 00:00:00 2001 From: hieu-jan <150573299+hieu-jan@users.noreply.github.com> Date: Wed, 21 Feb 2024 12:41:20 +0700 Subject: [PATCH] docs: resolve suggestions --- .../04-using-models/02-import-manually.mdx | 3 +- ...-import-models-using-absolute-filepath.mdx | 7 +++-- .../07-integrations/05-integrate-lmstudio.mdx | 28 +++++++++++-------- 3 files changed, 21 insertions(+), 17 deletions(-) diff --git a/docs/docs/guides/04-using-models/02-import-manually.mdx b/docs/docs/guides/04-using-models/02-import-manually.mdx index 03fe12cc9..7c446ea1c 100644 --- a/docs/docs/guides/04-using-models/02-import-manually.mdx +++ b/docs/docs/guides/04-using-models/02-import-manually.mdx @@ -31,7 +31,7 @@ In this section, we will show you how to import a GGUF model from [HuggingFace]( ## Import Models Using Absolute Filepath (version 0.4.7) -Starting from version 0.4.7, Jan supports importing models using an absolute filepath, so you can import models from any location on your computer. Please check the [import models using absolute filepath](../import-models-using-absolute-filepath) guide for more information. +Starting from version 0.4.7, Jan can import models using an absolute file path, allowing you to import models from any directory on your computer. Please check the [import models using absolute filepath](../import-models-using-absolute-filepath) guide for more information. ## Manually Importing a Downloaded Model (nightly versions and v0.4.4+) @@ -190,7 +190,6 @@ This means that you can easily reconfigure your models, export them, and share y Edit `model.json` and include the following configurations: -- Ensure the filename must be `model.json`. - Ensure the `id` property matches the folder name you created. - Ensure the GGUF filename should match the `id` property exactly. - Ensure the `source.url` property is the direct binary download link ending in `.gguf`. 
In HuggingFace, you can find the direct links in the `Files and versions` tab. diff --git a/docs/docs/guides/04-using-models/03-import-models-using-absolute-filepath.mdx b/docs/docs/guides/04-using-models/03-import-models-using-absolute-filepath.mdx index 3b377e76d..490f68cd6 100644 --- a/docs/docs/guides/04-using-models/03-import-models-using-absolute-filepath.mdx +++ b/docs/docs/guides/04-using-models/03-import-models-using-absolute-filepath.mdx @@ -25,9 +25,10 @@ After downloading .gguf model, you can get the absolute filepath of the model fi ### 2. Configure the Model JSON -Navigate to the `~/jan/models` folder. Create a folder named ``, for example, `tinyllama` and create a `model.json` file inside the folder including the following configurations: +1. Navigate to the `~/jan/models` folder. +2. Create a folder named ``, for example, `tinyllama`. +3. Create a `model.json` file inside the folder, including the following configurations: -- Ensure the filename must be `model.json`. - Ensure the `id` property matches the folder name you created. - Ensure the `url` property is the direct binary download link ending in `.gguf`. Now, you can use the absolute filepath of the model file. - Ensure the `engine` property is set to `nitro`. @@ -72,7 +73,7 @@ Navigate to the `~/jan/models` folder. Create a folder named ``, for :::warning -- Windows users may need to use double backslashes in the `url` property, for example: `C:\\Users\\username\\filename.gguf`. +- If you are using Windows, you need to use double backslashes in the `url` property, for example: `C:\\Users\\username\\filename.gguf`. 
::: diff --git a/docs/docs/guides/07-integrations/05-integrate-lmstudio.mdx b/docs/docs/guides/07-integrations/05-integrate-lmstudio.mdx index f1636ce59..1af0edc69 100644 --- a/docs/docs/guides/07-integrations/05-integrate-lmstudio.mdx +++ b/docs/docs/guides/07-integrations/05-integrate-lmstudio.mdx @@ -24,7 +24,9 @@ With [LM Studio](https://lmstudio.ai/), you can discover, download, and run loca ### 1. Start the LM Studio Server -Navigate to the `Local Inference Server` on the LM Studio application, and select the model you want to use. Then, start the server after configuring the server port and options. +1. Navigate to the `Local Inference Server` in the LM Studio application. +2. Select the model you want to use. +3. Start the server after configuring the server port and options. ![LM Studio Server](assets/05-setting-lmstudio-server.gif) @@ -48,10 +50,9 @@ Modify the `openai.json` file in the `~/jan/engines` folder to include the full Navigate to the `~/jan/models` folder. Create a folder named ``, for example, `lmstudio-phi-2` and create a `model.json` file inside the folder including the following configurations: -- Ensure the filename must be `model.json`. -- Ensure the `format` property is set to `api`. -- Ensure the `engine` property is set to `openai`. -- Ensure the `state` property is set to `ready`. +- Set the `format` property to `api`. +- Set the `engine` property to `openai`. +- Set the `state` property to `ready`. ```json title="~/jan/models/lmstudio-phi-2/model.json" { @@ -82,7 +83,8 @@ Navigate to the `~/jan/models` folder. Create a folder named ``, for example, `phi-2.Q4_K_S` and create a `model.json` file inside the folder including the following configurations: -- Ensure the filename must be `model.json`. - Ensure the `id` property matches the folder name you created. - Ensure the `url` property is the direct binary download link ending in `.gguf`. Now, you can use the absolute filepath of the model file. 
In this example, the absolute filepath is `/Users//.cache/lm-studio/models/TheBloke/phi-2-GGUF/phi-2.Q4_K_S.gguf`. - Ensure the `engine` property is set to `nitro`. @@ -175,6 +177,8 @@ Navigate to the `~/jan/models` folder. Create a folder named ``, for ### 3. Start the Model -Restart Jan and navigate to the Hub. Jan will automatically detect the model and display it in the Hub. Locate your model and click the Use button for trying out the migrating model. +1. Restart Jan and navigate to the **Hub**. +2. Jan will automatically detect the model and display it in the **Hub**. +3. Locate your model and click the **Use** button to try the migrated model. ![Demo](assets/05-demo-pointing-model.gif)
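
For readers applying this patch, the requirements the guide lists for the LM Studio migration step (an `id` matching the folder name, a `url` holding the absolute file path, and `engine` set to `nitro`) can be sketched as a minimal `model.json`. This is an illustrative fragment only: the folder name `phi-2.Q4_K_S` comes from the guide's own example, the file path is copied verbatim from the guide (which elides the username segment), and a real file will also carry the additional fields shown in the guide's full JSON examples.

```json
{
  "id": "phi-2.Q4_K_S",
  "url": "/Users//.cache/lm-studio/models/TheBloke/phi-2-GGUF/phi-2.Q4_K_S.gguf",
  "engine": "nitro"
}
```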