---
title: Import Models Manually
slug: /guides/using-models/import-manually
description: Guide to manually import a local model into Jan.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    import-models-manually,
    local model,
  ]
---

:::caution
This is currently under development.
:::

{/* Imports */}
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";

In this section, we will show you how to import a GGUF model from [HuggingFace](https://huggingface.co/), using our latest model, [Trinity](https://huggingface.co/janhq/trinity-v1-GGUF), as an example.

> We are fast shipping a UI to make this easier, but it's a bit manual for now. Apologies.

## Import Models Using Absolute Filepath (version 0.4.7)

Starting from version 0.4.7, Jan supports importing models using an absolute filepath, so you can import models from any location on your computer. Please check the [import models using absolute filepath](../import-models-using-absolute-filepath) guide for more information.

## Manually Importing a Downloaded Model (nightly versions and v0.4.4+)

### 1. Create a Model Folder

Navigate to the `~/jan/models` folder. You can find this folder by going to `App Settings` > `Advanced` > `Open App Directory`.

<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">

```sh
cd ~/jan/models
```

</TabItem>
<TabItem value="win" label="Windows">

```sh
cd C:/Users/<your_user_name>/jan/models
```

</TabItem>
<TabItem value="linux" label="Linux">

```sh
cd ~/jan/models
```

</TabItem>
</Tabs>

In the `models` folder, create a folder with the name of the model.

<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">

```sh
mkdir trinity-v1-7b
```

</TabItem>
<TabItem value="win" label="Windows">

```sh
mkdir trinity-v1-7b
```

</TabItem>
<TabItem value="linux" label="Linux">

```sh
mkdir trinity-v1-7b
```

</TabItem>
</Tabs>

### 2. Drag & Drop the Model

Drag and drop your model binary into this folder, making sure the `.gguf` filename matches the folder name, e.g. `models/trinity-v1-7b/trinity-v1-7b.gguf`.
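
On macOS or Linux, the expected layout can be sketched from the terminal. This sketch uses a temporary directory in place of `~/jan` so it is safe to run anywhere; with a real install you would work under `~/jan/models` directly.

```sh
# Illustrative sketch: a temporary directory stands in for ~/jan.
JAN_MODELS=$(mktemp -d)
mkdir -p "$JAN_MODELS/trinity-v1-7b"
# The .gguf binary carries the same name as its folder:
touch "$JAN_MODELS/trinity-v1-7b/trinity-v1-7b.gguf"
ls "$JAN_MODELS/trinity-v1-7b"
```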

### 3. Voila

If your model doesn't show up in the Model Selector in conversations, please restart the app.

If that doesn't work, please feel free to join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.

## Manually Importing a Downloaded Model (older versions)

### 1. Create a Model Folder

Navigate to the `~/jan/models` folder. You can find this folder by going to `App Settings` > `Advanced` > `Open App Directory`.

<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">

```sh
cd ~/jan/models
```

</TabItem>
<TabItem value="win" label="Windows">

```sh
cd C:/Users/<your_user_name>/jan/models
```

</TabItem>
<TabItem value="linux" label="Linux">

```sh
cd ~/jan/models
```

</TabItem>
</Tabs>

In the `models` folder, create a folder with the name of the model.

<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">

```sh
mkdir trinity-v1-7b
```

</TabItem>
<TabItem value="win" label="Windows">

```sh
mkdir trinity-v1-7b
```

</TabItem>
<TabItem value="linux" label="Linux">

```sh
mkdir trinity-v1-7b
```

</TabItem>
</Tabs>

### 2. Create a Model JSON

Jan follows a folder-based, [standard model template](/docs/engineering/models) called a `model.json` to persist the model configurations on your local filesystem.

This means that you can easily reconfigure your models, export them, and share your preferences transparently.

<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">

```sh
cd trinity-v1-7b
touch model.json
```

</TabItem>
<TabItem value="win" label="Windows">

```sh
cd trinity-v1-7b
echo {} > model.json
```

</TabItem>
<TabItem value="linux" label="Linux">

```sh
cd trinity-v1-7b
touch model.json
```

</TabItem>
</Tabs>

Edit `model.json` and include the following configurations:

- Ensure the filename is `model.json`.
- Ensure the `id` property matches the folder name you created.
- Ensure the GGUF filename matches the `id` property exactly.
- Ensure the `source.url` property is the direct binary download link ending in `.gguf`. On HuggingFace, you can find the direct links in the `Files and versions` tab.
- Ensure you are using the correct `prompt_template`. This is usually provided on the HuggingFace model's description page.
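
To see what the `prompt_template` placeholders do, here is a minimal sketch of how a prompt is assembled at inference time (the rendering code is a hypothetical illustration, not Jan's internals):

```python
# Hypothetical illustration of prompt_template rendering; Jan's engine
# performs the equivalent substitution internally.
template = "{system_message}\n### Instruction:\n{prompt}\n### Response:"

rendered = template.format(
    system_message="You are a helpful assistant.",
    prompt="What is a GGUF file?",
)
print(rendered)
```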
```json title="model.json"
{
  // highlight-start
  "sources": [
    {
      "filename": "trinity-v1.Q4_K_M.gguf",
      "url": "https://huggingface.co/janhq/trinity-v1-GGUF/resolve/main/trinity-v1.Q4_K_M.gguf"
    }
  ],
  "id": "trinity-v1-7b",
  // highlight-end
  "object": "model",
  "name": "Trinity-v1 7B Q4",
  "version": "1.0",
  "description": "Trinity is an experimental model merge of GreenNodeLM & LeoScorpius using the Slerp method. Recommended for daily assistance purposes.",
  "format": "gguf",
  "settings": {
    "ctx_len": 4096,
    // highlight-next-line
    "prompt_template": "{system_message}\n### Instruction:\n{prompt}\n### Response:",
    "llama_model_path": "trinity-v1.Q4_K_M.gguf"
  },
  "parameters": {
    "max_tokens": 4096
  },
  "metadata": {
    "author": "Jan",
    "tags": ["7B", "Merged"],
    "size": 4370000000
  },
  "engine": "nitro"
}
```
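
As a sanity check, the naming rules above can be sketched as a small script. This helper is hypothetical and not part of Jan; it only mirrors two items from the checklist (the `id`/folder match and the `.gguf` source link):

```python
import json
from pathlib import Path

def check_model_folder(folder: Path) -> list[str]:
    """Return a list of problems found; an empty list means the folder looks consistent."""
    problems = []
    cfg_path = folder / "model.json"
    if not cfg_path.is_file():
        return ["missing model.json"]
    cfg = json.loads(cfg_path.read_text())
    # The id property must match the folder name.
    if cfg.get("id") != folder.name:
        problems.append("id does not match the folder name")
    # Each source url must be a direct .gguf download link.
    for src in cfg.get("sources", []):
        if not src.get("url", "").endswith(".gguf"):
            problems.append("source url is not a direct .gguf link")
    return problems
```

Run against a model folder such as `~/jan/models/trinity-v1-7b`, it should return an empty list once the folder and `model.json` above are in place.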

### 3. Download the Model

Restart Jan and navigate to the Hub. Locate your model and click the `Download` button to download the model binary.



Your model is now ready to use in Jan.

## Assistance and Support

If you have questions or are looking for more preconfigured GGUF models, please feel free to join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.