[WIP] Updating basic usage pages

This commit is contained in:
Ashley 2024-12-31 09:23:43 +07:00
parent 60bb719a53
commit 724c9a8897
2 changed files with 80 additions and 140 deletions

View File

@ -1,6 +1,6 @@
{
"manage-models": {
"title": "Managing Models",
"title": "Model Management",
"href": "/docs/models/manage-models"
},
"model-parameters": {

View File

@ -19,125 +19,81 @@ keywords:
---
import { Callout, Steps } from 'nextra/components'
# Overview
This guide provides comprehensive instructions on adding, customizing, and deleting models within the Jan platform.
# Model Management
This guide provides comprehensive instructions on adding, customizing, and deleting models within Jan.
## Add Models
## Local Model
There are various ways to add models to Jan.
Currently, Jan natively supports the following model formats:
- GGUF (through a llama.cpp engine)
- TensorRT (through a TRT-LLM engine)
### Download from Jan Hub
Jan Hub provides three convenient methods to access machine learning models. Here's a clear step-by-step guide for each method:
#### 1. Download from the Recommended List
The Recommended List is a great starting point if you're looking for popular and pre-configured models that work well and quickly on most computers.
1. Open the Jan app and navigate to the Hub.
<br/>
![Jan Hub](../_assets/hub.png)
<br/>
2. Select a model, clicking the `v` dropdown for more information.
Jan offers flexible options for managing local models through its [Cortex](https://cortex.so/) engine. Currently, Jan only supports **GGUF format** models.
<Callout type="info">
Models with the `Recommended` label will likely run faster on your computer.
Local models run directly on your computer, which means they use your computer's memory (RAM) and processing power. Please choose models carefully based on your hardware specifications.
</Callout>
3. Click **Download** to download the model.
<br/>
![Download Model](../_assets/download-button.png)
#### 2. Download with HuggingFace Model's ID or URL
If you need a specific model from [Hugging Face](https://huggingface.co/models), Jan Hub lets you download it directly using the model's ID or URL.
### Add Models
#### 1. Download from Jan Hub (Recommended)
The easiest way to get started is using Jan's built-in model hub:
The Recommended List is a great starting point if you're looking for popular and pre-configured models that work well and quickly on most computers.
1. Go to the **Hub**
2. Browse available models and click on any model to see details about it
3. Choose a model that fits your needs and hardware specifications
4. Click **Download** on your chosen model
<Callout type="info">
Jan will indicate if a model might be **Slow on your device** or show **Not enough RAM** based on your system specifications.
</Callout>
#### 2. Import from [Hugging Face](https://huggingface.co/)
You can import GGUF models directly from [Hugging Face](https://huggingface.co/):
##### Option A: Import in Jan
1. Visit [Hugging Face Models](https://huggingface.co/models).
2. Find a GGUF model you want to use
3. Copy the **model ID** (e.g., TheBloke/Mistral-7B-v0.1-GGUF) or its **URL**
4. In Jan, paste the model ID/URL into the **Search** bar in the **Hub** or in **Settings** > **My Models**
5. Select your preferred quantized version to download
##### Option B: Use Deep Link
You can use Jan's deep link feature to quickly import models:
1. Visit [Hugging Face Models](https://huggingface.co/models).
2. Find the GGUF model you want to use
3. Copy the **model ID**, for example: `TheBloke/Mistral-7B-v0.1-GGUF`
4. Create a **deep link URL** in this format:
```
jan://models/huggingface/<model_id>
```
5. Enter the URL in your browser and press **Enter**, for example:
```
jan://models/huggingface/TheBloke/Mistral-7B-v0.1-GGUF
```
6. A prompt will appear: `This site is trying to open Jan`. Click **Open** to open the Jan app.
7. Select your preferred quantized version to download
<Callout type="warning">
Only `GGUF` models are supported for this feature.
Deep linking won't work for models requiring API tokens or usage agreements. You'll need to download these models manually through the Hugging Face website.
</Callout>
1. Go to [Hugging Face](https://huggingface.co/models).
2. Select the model you want to use.
3. Copy the Model's ID or URL, for example: `MaziyarPanahi/Mixtral-8x22B-Instruct-v0.1-GGUF` or `https://huggingface.co/MaziyarPanahi/Mixtral-8x22B-Instruct-v0.1-GGUF`.
4. Return to the Jan app and click on the Hub tab.
<br/>
![Jan Hub](../_assets/hub.png)
<br/>
5. Paste the **URL** or the **model ID** you have copied into the search bar.
<br/>
![Search Bar](../_assets/search-bar.png)
<br/>
6. The app will show all available versions of the model.
7. Click **Download** to download the model.
<br/>
![Download Model](../_assets/download-button2.png)
<br/>
#### 3. Download with Deep Link
You can also use Jan's deep link feature to download a specific model from [Hugging Face](https://huggingface.co/models). The deep link format is: `jan://models/huggingface/<model's ID>`.
<Callout type="warning">
The deep link feature cannot be used for models that require:
- API Token.
- Acceptance of usage agreement.
You will need to download such models manually.
#### 3. Import Local Files
If you already have GGUF model files on your computer:
1. In Jan, go to **Hub** or **Settings** > **My Models**
2. Click **Import Model**
3. Select your **GGUF** file
4. Choose how you want to import:
- **Link Files:** Creates symbolic links to your model files (saves space)
- **Duplicate:** Makes a copy of model files in Jan's directory
5. Click **Import** to complete
<Callout type="info">
You are responsible for your own **model configurations**; use them at your own risk. Misconfigurations may result in lower quality or unexpected outputs.
</Callout>
1. Go to [Hugging Face](https://huggingface.co/models).
2. Select the model you want to use.
3. Copy the Model's ID or URL, for example: `TheBloke/Magicoder-S-DS-6.7B-GGUF`.
4. Enter the deep link URL with your chosen model's ID in your browser. For example: `jan://models/huggingface/TheBloke/Magicoder-S-DS-6.7B-GGUF`
<br/>
![Paste the URL](../_assets/browser1.png)
<br/>
5. A prompt will appear, click **Open** to open the Jan app.
<br/>
![Click Open](../_assets/browser2.png)
<br/>
6. The app will show all available versions of the model.
7. Click **Download** to download the model.
<br/>
![Download Model](../_assets/download-button3.png)
<br/>
### Import or Symlink Local Models
You can also point to existing model binary files on your local filesystem.
This is the easiest and most space-efficient way if you have already used other local AI applications.
1. Navigate to **Settings**.
<br/>
![Jan Hub](../_assets/hub.png)
<br/>
2. Click on `My Models` at the top.
<br/>
![Import Model](../_assets/import.png)
<br/>
3. Click the `Import Model` button on the top-right of your screen.
4. Click the upload icon button.
<br/>
![Download Icon](../_assets/download-icon.png)
<br/>
5. Import using a `.GGUF` file or a folder.
<br/>
![Import Model](../_assets/import2.png)
<br/>
6. Select the model or the folder containing multiple models.
### Add a Model Manually
You can also add a specific model that is not available within the **Hub** section by following the steps below:
1. Open the Jan app.
2. Click the **gear icon (⚙️)** on the bottom left of your screen.
<br/>
![Settings](../_assets/settings.png)
<br/>
3. Under the **Settings screen**, click **Advanced Settings**.
<br/>
![Settings](../_assets/advance-set.png)
<br/>
4. Open the **Jan Data folder**.
<br/>
![Jan Data Folder](../_assets/data-folder.png)
<br/>
5. Head to `~/jan/data/models/`.
6. Make a new model folder and put a file named `model.json` in it.
7. Insert the following `model.json` default code:
```json
{
#### 4. Manual Setup
For advanced users who want to add a specific model that is not available in the Jan **Hub**:
1. Navigate to `~/jan/data/models/`
2. Create a new **Folder** for your model
3. Add a `model.json` file with your configuration:
```
"id": "<unique_identifier_of_the_model>",
"object": "<type_of_object, e.g., model, tool>",
"name": "<name_of_the_model>",
@ -155,20 +111,11 @@ You can also add a specific model that is not available within the **Hub** secti
},
"engine": "<engine_or_platform_the_model_runs_on>",
"source": "<url_or_source_of_the_model_information>"
}
```
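For orientation, here is a minimal, hypothetical `model.json` with the template fields filled in. The `id`, `name`, and `source` values are placeholders, and the `settings` keys and `engine` name shown here are assumptions; check the configuration your installed engine actually expects before copying:
```json
{
  "id": "mistral-7b-instruct-q4",
  "object": "model",
  "name": "Mistral 7B Instruct (Q4)",
  "settings": {
    "ctx_len": 4096,
    "prompt_template": "<prompt_template_for_your_model>"
  },
  "parameters": {
    "temperature": 0.7,
    "top_p": 0.95,
    "max_tokens": 4096,
    "frequency_penalty": 0,
    "presence_penalty": 0
  },
  "engine": "llama-cpp",
  "source": "https://huggingface.co/<model_repository>"
}
```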
There are two important fields in `model.json` that you need to set:
#### Settings
This is the field where you can set your engine configurations.
#### Parameters
`parameters` are the adjustable settings that affect how your model operates or processes the data.
The fields in `parameters` are typically general and can be the same across models. Here is an example of model parameters:
```json
Key fields to configure:
1. **Settings** is where you can set your engine configurations.
2. [**Parameters**](/docs/models#model-parameters) are the adjustable settings that affect how your model operates or processes the data. The fields in parameters are typically general and can be the same across models. Here is an example of model parameters:
```
"parameters":{
"temperature": 0.7,
"top_p": 0.95,
@ -176,24 +123,17 @@ The fields in `parameters` are typically general and can be the same across mode
"max_tokens": 4096,
"frequency_penalty": 0,
"presence_penalty": 0
}
```
<Callout type='info'>
To see the complete list of a model's parameters, please see [Model Parameters](/docs/models#model-parameters).
</Callout>
## Delete Models
To delete a model:
### Delete Models
1. Go to **Settings** > **My Models**
2. Find the model you want to remove
3. Click the three dots next to it and select **Delete model**.
1. Go to **Settings**.
<br/>
![Settings](../_assets/settings.png)
<br/>
2. Go to **My Models**.
<br/>
![My Models](../_assets/mymodels.png)
<br/>
3. Click the three dots next to the model and select **Delete model**.
<br/>
![Delete Model](../_assets/delete.png)
## Cloud Models
Jan supports connecting to various AI cloud providers that are OpenAI API-compatible, including OpenAI (GPT-4, o1,...), Anthropic (Claude), Groq, Mistral, and more (see the example request shape after the steps below).
1. Open **Settings**
2. Under the **Model Provider** section in the left sidebar (OpenAI, Anthropic, etc.), choose a provider
3. Enter your API key
4. The activated cloud models will be available in your model selection dropdown.
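"OpenAI API-compatible" means these providers accept the same chat completions request shape. As a rough illustration (exact model names and endpoint paths vary by provider), a request body looks like:
```json
{
  "model": "gpt-4",
  "messages": [
    { "role": "user", "content": "Hello!" }
  ],
  "temperature": 0.7
}
```
Jan builds and sends requests like this for you once the provider's API key is saved; the snippet is only to show what the shared format looks like.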