docs: update entire docs

This commit is contained in:
hieu-jan 2024-02-19 18:56:27 +07:00
parent e25c4e2196
commit 43ea3681f7
10 changed files with 126 additions and 7 deletions

View File

@ -33,7 +33,6 @@ In this section, we will show you how to import a GGUF model from [HuggingFace](
Starting from version 0.4.7, Jan supports importing models using an absolute filepath, so you can import models from any location on your computer. Please check the [import models using absolute filepath](../import-models-using-absolute-filepath) guide for more information.
## Manually Importing a Downloaded Model (nightly versions and v0.4.4+)
### 1. Create a Model Folder

View File

@ -17,5 +17,61 @@ keywords:
]
---
In this guide, we will walk you through the process of importing a model using an absolute filepath in Jan.
In this guide, we will walk you through the process of importing a model using an absolute filepath in Jan, using [TinyLlama](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF) as an example.
### 1. Get the Absolute Filepath of the Model
First, download the model file from Hugging Face. Then, copy the absolute filepath of the downloaded file; you will use it in the model configuration below.
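For example, here is a minimal sketch that prints the absolute filepath of the downloaded file (the `Downloads` location is an assumption; adjust it to wherever your browser saved the `.gguf` file):
```python
from pathlib import Path

# Hypothetical download location; adjust to where the .gguf file was saved.
downloaded = Path.home() / "Downloads" / "tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf"

# resolve() yields the absolute filepath to paste into model.json below.
print(downloaded.resolve())
```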
### 2. Configure the Model JSON
Navigate to the `~/jan/models` folder. Create a folder named `<modelname>`, for example, `tinyllama-1.1b`, and create a `model.json` file inside the folder with the following configurations:
- Ensure the filename is `model.json`.
- Ensure the `id` property matches the folder name you created.
- Ensure the `url` property is set to the absolute filepath of the model file rather than a remote download link ending in `.gguf`.
- Ensure the `engine` property is set to `nitro`.
```json
{
  "sources": [
    {
      "filename": "tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf",
      // highlight-next-line
      "url": "<absolute-filepath-of-the-model-file>"
    }
  ],
  "id": "tinyllama-1.1b",
  "object": "model",
  "name": "(Absolute Path) TinyLlama Chat 1.1B Q4",
  "version": "1.0",
  "description": "TinyLlama is a tiny model with only 1.1B parameters. It's a good model for less powerful computers.",
  "format": "gguf",
  "settings": {
    "ctx_len": 4096,
    "prompt_template": "<|system|>\n{system_message}<|user|>\n{prompt}<|assistant|>",
    "llama_model_path": "tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf"
  },
  "parameters": {
    "temperature": 0.7,
    "top_p": 0.95,
    "stream": true,
    "max_tokens": 2048,
    "stop": [],
    "frequency_penalty": 0,
    "presence_penalty": 0
  },
  "metadata": {
    "author": "TinyLlama",
    "tags": ["Tiny", "Foundation Model"],
    "size": 669000000
  },
  "engine": "nitro"
}
```
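Before restarting Jan, it can help to sanity-check the file. The sketch below is an illustrative helper (not part of Jan) that verifies the rules listed above against the folder you created:
```python
import json
from pathlib import Path

model_dir = Path.home() / "jan" / "models" / "tinyllama-1.1b"
config = json.loads((model_dir / "model.json").read_text())

# The id must match the folder name, or Jan will not pick up the config.
assert config["id"] == model_dir.name, "id does not match the folder name"

# With an absolute filepath, the url should point at an existing .gguf file.
model_path = Path(config["sources"][0]["url"])
assert model_path.is_file() and model_path.suffix == ".gguf", "url is not an existing .gguf file"

assert config["engine"] == "nitro", "engine must be set to nitro"
print("model.json looks consistent")
```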
### 3. Start the Model
Restart Jan and navigate to the Hub. Locate your model and click the Use button.
![Demo](assets/03-demo-absolute-filepath.gif)
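Beyond the Hub UI, you can also verify the imported model end to end through Jan's OpenAI-compatible API server. A minimal sketch, assuming the API server is enabled on the default port (1337) and using the model `id` from the configuration above:
```python
import requests

# Assumes Jan's API server is enabled and listening on the default port 1337.
response = requests.post(
    "http://localhost:1337/v1/chat/completions",
    json={
        "model": "tinyllama-1.1b",  # the id from model.json above
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "stream": False,
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```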

View File

@ -88,7 +88,7 @@ You can find your API keys in the [OpenAI Platform](https://platform.openai.com/
Restart Jan and navigate to the Hub. Then, select your configured model and start the model.
![image-01](assets/03-openai-platform-configuration.png)
![image-01](assets/04-openai-platform-configuration.png)
## Engines with OAI Compatible Configuration
@ -159,7 +159,7 @@ Navigate to the `~/jan/models` folder. Create a folder named `mistral-ins-7b-q4`
Restart Jan and navigate to the Hub. Locate your model and click the Use button.
![image-02](assets/03-oai-compatible-configuration.png)
![image-02](assets/04-oai-compatible-configuration.png)
## Assistance and Support

Binary file not shown.

After

Width:  |  Height:  |  Size: 3.8 MiB

View File

@ -1,6 +1,6 @@
---
title: Start Local Server
slug: /guides/using-server/server
slug: /guides/using-server/start-server
description: How to run Jan's built-in API server.
keywords:
[

View File

@ -35,7 +35,7 @@ To get started with Continue in VS Code, please follow this [guide to install Co
### 2. Enable Jan API Server
To configure Continue to use Jan's Local Server, you need to enable the Jan API Server with your preferred model. Please follow this [guide to enable the Jan API Server](../05-using-server/01-server.md).
To configure Continue to use Jan's Local Server, you need to enable the Jan API Server with your preferred model. Please follow this [guide to enable the Jan API Server](/guides/using-server/start-server).
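Before wiring up Continue, you can quickly confirm the API server is reachable by listing the models it exposes. A minimal sketch, assuming the server is enabled on Jan's default port (1337):
```python
import requests

# Assumes Jan's API server is enabled and listening on the default port 1337.
response = requests.get("http://localhost:1337/v1/models", timeout=5)
response.raise_for_status()

# The endpoint is OpenAI-compatible, so models are listed under "data".
for model in response.json().get("data", []):
    print(model["id"])
```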
### 3. Configure Continue to Use Jan's Local Server

View File

@ -90,7 +90,7 @@ Restart Jan and navigate to the Hub. Locate your model and click the Use button.
![LM Studio Integration Demo](assets/05-lmstudio-integration-demo.gif)
## Steps to Migrate Your Downloaded Model from LM Studio to Jan (Version 0.4.6 and older)
## Steps to Migrate Your Downloaded Model from LM Studio to Jan (version 0.4.6 and older)
### 1. Migrate Your Downloaded Model
@ -107,3 +107,67 @@ Ensure the folder name property is the same as the model name of `.gguf` filenam
Restart Jan and navigate to the Hub. Jan will automatically detect the model and display it there. Locate your model and click the Use button to try out the migrated model.
![Demo](assets/05-demo-migrating-model.gif)
## Steps to Point Jan to a Model Downloaded by LM Studio (version 0.4.7+)
Starting from version 0.4.7, Jan supports importing models using an absolute filepath, so you can use a model directly from the LM Studio folder.
### 1. Reveal the Model's Absolute Path
Navigate to `My Models` in the LM Studio application and reveal the model folder. Then, copy the absolute path of your model file.
![Reveal-model-folder-lmstudio](assets/05-reveal-model-folder-lmstudio.gif)
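Alternatively, a small script can print the absolute path of every `.gguf` file under LM Studio's models folder (the default `~/.cache/lm-studio/models` location shown in the example below; adjust it if you changed LM Studio's models directory):
```python
from pathlib import Path

# Default LM Studio models folder used in the example below; adjust if needed.
lm_studio_models = Path.home() / ".cache" / "lm-studio" / "models"

# Each downloaded model lives under <author>/<repo>/<file>.gguf.
for gguf in sorted(lm_studio_models.rglob("*.gguf")):
    print(gguf.resolve())
```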
### 2. Configure the Model JSON
Navigate to the `~/jan/models` folder. Create a folder named `<modelname>`, for example, `phi-2.Q4_K_S`, and create a `model.json` file inside the folder with the following configurations:
- Ensure the filename is `model.json`.
- Ensure the `id` property matches the folder name you created.
- Ensure the `url` property is set to the absolute filepath of the model file rather than a remote download link ending in `.gguf`. In this example, the absolute filepath is `/Users/<username>/.cache/lm-studio/models/TheBloke/phi-2-GGUF/phi-2.Q4_K_S.gguf`.
- Ensure the `engine` property is set to `nitro`.
```json
{
  "object": "model",
  "version": 1,
  "format": "gguf",
  "sources": [
    {
      "filename": "phi-2.Q4_K_S.gguf",
      "url": "<absolute-path-of-model-file>"
    }
  ],
  "id": "phi-2.Q4_K_S",
  "name": "phi-2.Q4_K_S",
  "created": 1708308111506,
  "description": "phi-2.Q4_K_S - model imported by the user",
  "settings": {
    "ctx_len": 4096,
    "embedding": false,
    "prompt_template": "{system_message}\n### Instruction: {prompt}\n### Response:",
    "llama_model_path": "phi-2.Q4_K_S.gguf"
  },
  "parameters": {
    "temperature": 0.7,
    "top_p": 0.95,
    "stream": true,
    "max_tokens": 2048,
    "stop": ["<endofstring>"],
    "frequency_penalty": 0,
    "presence_penalty": 0
  },
  "metadata": {
    "size": 1615568736,
    "author": "User",
    "tags": []
  },
  "engine": "nitro"
}
```
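Writing this file by hand is error-prone, so here is a minimal generator sketch. It mirrors the template above and is an illustrative helper, not part of Jan; adjust `prompt_template` and `parameters` to suit your model:
```python
import json
import time
from pathlib import Path

def import_model(gguf_path: str) -> None:
    """Create ~/jan/models/<id>/model.json pointing at an existing .gguf file."""
    source = Path(gguf_path).resolve()
    model_id = source.stem  # e.g. "phi-2.Q4_K_S"
    model_dir = Path.home() / "jan" / "models" / model_id
    model_dir.mkdir(parents=True, exist_ok=True)

    config = {
        "object": "model",
        "version": 1,
        "format": "gguf",
        "sources": [{"filename": source.name, "url": str(source)}],
        "id": model_id,  # must match the folder name
        "name": model_id,
        "created": int(time.time() * 1000),
        "description": f"{model_id} - model imported by the user",
        "settings": {
            "ctx_len": 4096,
            "embedding": False,
            # Phi-2's template from the example above; adjust for other models.
            "prompt_template": "{system_message}\n### Instruction: {prompt}\n### Response:",
            "llama_model_path": source.name,
        },
        "parameters": {
            "temperature": 0.7,
            "top_p": 0.95,
            "stream": True,
            "max_tokens": 2048,
            "stop": ["<endofstring>"],
            "frequency_penalty": 0,
            "presence_penalty": 0,
        },
        "metadata": {"size": source.stat().st_size, "author": "User", "tags": []},
        "engine": "nitro",
    }
    (model_dir / "model.json").write_text(json.dumps(config, indent=2))

# Example (hypothetical username in the path):
# import_model("/Users/<username>/.cache/lm-studio/models/TheBloke/phi-2-GGUF/phi-2.Q4_K_S.gguf")
```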
### 3. Start the Model
Restart Jan and navigate to the Hub. Jan will automatically detect the model and display it there. Locate your model and click the Use button to try out the migrated model.
![Demo](assets/05-demo-pointing-model.gif)

Binary file not shown.

After

Width:  |  Height:  |  Size: 5.7 MiB