docs: resolve suggestions

Parent: ad42b393e2
Commit: c8c9ec3064
@@ -31,7 +31,7 @@ In this section, we will show you how to import a GGUF model from [HuggingFace](

## Import Models Using Absolute Filepath (version 0.4.7)

-Starting from version 0.4.7, Jan supports importing models using an absolute filepath, so you can import models from any location on your computer. Please check the [import models using absolute filepath](../import-models-using-absolute-filepath) guide for more information.
+Starting from version 0.4.7, Jan can import models using an absolute file path, which lets you import models from any directory on your computer. Please check the [import models using absolute filepath](../import-models-using-absolute-filepath) guide for more information.

## Manually Importing a Downloaded Model (nightly versions and v0.4.4+)
@@ -190,7 +190,6 @@ This means that you can easily reconfigure your models, export them, and share y

Edit `model.json` and include the following configurations:

- Ensure the filename is `model.json`.
- Ensure the `id` property matches the folder name you created.
- Ensure the GGUF filename matches the `id` property exactly.
- Ensure the `source.url` property is the direct binary download link ending in `.gguf`. On HuggingFace, you can find the direct links in the `Files and versions` tab.
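Taken together, the checklist above corresponds to a `model.json` along these lines. This is a minimal sketch: the `tinyllama` folder name and the repository path in the URL are placeholder assumptions, not values from this guide, and only the properties the checklist names are shown.

```json title="~/jan/models/tinyllama/model.json"
{
  "id": "tinyllama",
  "source": {
    "url": "https://huggingface.co/<username>/<repository>/resolve/main/tinyllama.gguf"
  }
}
```

Here the folder name, `id`, and `.gguf` filename are kept in sync per the checklist; follow an existing model folder under `~/jan/models` for the full set of properties your Jan version expects.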
@@ -25,9 +25,10 @@ After downloading .gguf model, you can get the absolute filepath of the model fi

### 2. Configure the Model JSON

-Navigate to the `~/jan/models` folder. Create a folder named `<modelname>`, for example, `tinyllama` and create a `model.json` file inside the folder including the following configurations:
+1. Navigate to the `~/jan/models` folder.
+2. Create a folder named `<modelname>`, for example, `tinyllama`.
+3. Create a `model.json` file inside the folder, including the following configurations:

- Ensure the filename is `model.json`.
- Ensure the `id` property matches the folder name you created.
- Ensure the `url` property is the direct binary download link ending in `.gguf`, or the absolute filepath of the model file.
- Ensure the `engine` property is set to `nitro`.
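For the absolute-filepath case, the configurations above might look like the following minimal `model.json`. The macOS-style download path is an assumed placeholder, and only the properties listed above are shown.

```json title="~/jan/models/tinyllama/model.json"
{
  "id": "tinyllama",
  "url": "/Users/<username>/Downloads/tinyllama.gguf",
  "engine": "nitro"
}
```

On Windows, use the double-backslash form in `url`, for example `C:\\Users\\<username>\\tinyllama.gguf`.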
@@ -72,7 +73,7 @@ Navigate to the `~/jan/models` folder. Create a folder named `<modelname>`, for

:::warning

-- Windows users may need to use double backslashes in the `url` property, for example: `C:\\Users\\username\\filename.gguf`.
+- If you are using Windows, you need to use double backslashes in the `url` property, for example: `C:\\Users\\username\\filename.gguf`.

:::
@@ -24,7 +24,9 @@ With [LM Studio](https://lmstudio.ai/), you can discover, download, and run loca

### 1. Start the LM Studio Server

-Navigate to the `Local Inference Server` on the LM Studio application, and select the model you want to use. Then, start the server after configuring the server port and options.
+1. Navigate to the `Local Inference Server` in the LM Studio application.
+2. Select the model you want to use.
+3. Start the server after configuring the server port and options.

@@ -48,10 +50,9 @@ Modify the `openai.json` file in the `~/jan/engines` folder to include the full

Navigate to the `~/jan/models` folder. Create a folder named `<lmstudio-modelname>`, for example, `lmstudio-phi-2`, and create a `model.json` file inside the folder with the following configurations:

- Ensure the filename is `model.json`.
-- Ensure the `format` property is set to `api`.
-- Ensure the `engine` property is set to `openai`.
-- Ensure the `state` property is set to `ready`.
+- Set the `format` property to `api`.
+- Set the `engine` property to `openai`.
+- Set the `state` property to `ready`.

```json title="~/jan/models/lmstudio-phi-2/model.json"
{
@@ -82,7 +83,8 @@ Navigate to the `~/jan/models` folder. Create a folder named `<lmstudio-modelnam

### 3. Start the Model

-Restart Jan and navigate to the Hub. Locate your model and click the Use button.
+1. Restart Jan and navigate to the **Hub**.
+2. Locate your model and click the **Use** button.

@@ -94,17 +96,18 @@ Restart Jan and navigate to the Hub. Locate your model and click the Use button.

### 1. Migrate Your Downloaded Model

-Navigate to `My Models` in the LM Studio application and reveal the model folder.
+1. Navigate to `My Models` in the LM Studio application and reveal the model folder.

-Copy the model folder that you want to migrate to `~/jan/models` folder.
+2. Copy the model folder that you want to migrate into the `~/jan/models` folder.

-Ensure the folder name property is the same as the model name of `.gguf` filename by changing the folder name if necessary. For example, in this case, we changed foldername from `TheBloke` to `phi-2.Q4_K_S`.
+3. Ensure the folder name matches the model name of the `.gguf` file, renaming the folder if necessary. For example, in this case, we renamed the folder from `TheBloke` to `phi-2.Q4_K_S`.

### 2. Start the Model

-Restart Jan and navigate to the Hub. Jan will automatically detect the model and display it in the Hub. Locate your model and click the Use button for trying out the migrating model.
+1. Restart Jan and navigate to the **Hub**. Jan will automatically detect the model and display it in the **Hub**.
+2. Locate your model and click the **Use** button to try the migrated model.

@@ -122,7 +125,6 @@ Navigate to `My Models` in the LM Studio application and reveal the model folder

Navigate to the `~/jan/models` folder. Create a folder named `<modelname>`, for example, `phi-2.Q4_K_S`, and create a `model.json` file inside the folder with the following configurations:

- Ensure the filename is `model.json`.
- Ensure the `id` property matches the folder name you created.
- Ensure the `url` property is the direct binary download link ending in `.gguf`, or the absolute filepath of the model file. In this example, the absolute filepath is `/Users/<username>/.cache/lm-studio/models/TheBloke/phi-2-GGUF/phi-2.Q4_K_S.gguf`.
- Ensure the `engine` property is set to `nitro`.
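Following the checklist, a minimal `model.json` for the migrated model could look like this. Only the properties listed above are shown, and the username in the path is a placeholder; the path itself is the LM Studio cache location used in this example.

```json title="~/jan/models/phi-2.Q4_K_S/model.json"
{
  "id": "phi-2.Q4_K_S",
  "url": "/Users/<username>/.cache/lm-studio/models/TheBloke/phi-2-GGUF/phi-2.Q4_K_S.gguf",
  "engine": "nitro"
}
```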
@@ -175,6 +177,8 @@ Navigate to the `~/jan/models` folder. Create a folder named `<modelname>`, for

### 3. Start the Model

-Restart Jan and navigate to the Hub. Jan will automatically detect the model and display it in the Hub. Locate your model and click the Use button for trying out the migrating model.
+1. Restart Jan and navigate to the **Hub**.
+2. Jan will automatically detect the model and display it in the **Hub**.
+3. Locate your model and click the **Use** button to try the migrated model.
