docs: update entire docs
@@ -33,7 +33,6 @@ In this section, we will show you how to import a GGUF model from [HuggingFace](
Starting from version 0.4.7, Jan supports importing models using an absolute filepath, so you can import models from any location on your computer. Please check the [import models using absolute filepath](../import-models-using-absolute-filepath) guide for more information.
## Manually Importing a Downloaded Model (nightly versions and v0.4.4+)
### 1. Create a Model Folder
@@ -17,5 +17,61 @@ keywords:
]
---
In this guide, we will walk you through the process of importing a model using an absolute filepath in Jan.
In this guide, we will walk you through the process of importing a model using an absolute filepath in Jan, using our latest model, [Trinity](https://huggingface.co/janhq/trinity-v1-GGUF), as an example.
### 1. Get the Absolute Filepath of the Model
First, download the model file from Hugging Face. Then, get the absolute filepath of the downloaded file.
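
If you are unsure how to obtain the absolute filepath, the snippet below is a minimal sketch of one way to print it. The filename `tinyllama.gguf` and the `~/Downloads` location are assumptions for illustration, so adjust them to wherever you saved the file.

```python
from pathlib import Path

# Hypothetical download location; change this to where you saved the GGUF file.
model_file = Path("~/Downloads/tinyllama.gguf").expanduser()

# resolve() yields the absolute filepath you can paste into model.json later.
print(model_file.resolve())
```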
### 2. Configure the Model JSON
Navigate to the `~/jan/models` folder. Create a folder named `<modelname>`, for example `tinyllama-1.1b`, and create a `model.json` file inside it with the following configuration:
- Ensure the filename is `model.json`.
- Ensure the `id` property matches the folder name you created.
- Ensure the `url` property is either a direct binary download link ending in `.gguf` or, as in this guide, the absolute filepath of the model file on your computer.
- Ensure the `engine` property is set to `nitro`.
```json
{
  "sources": [
    {
      "filename": "tinyllama.gguf",
      // highlight-next-line
      "url": "<absolute-filepath-of-the-model-file>"
    }
  ],
  "id": "tinyllama-1.1b",
  "object": "model",
  "name": "(Absolute Path) TinyLlama Chat 1.1B Q4",
  "version": "1.0",
  "description": "TinyLlama is a tiny model with only 1.1B. It's a good model for less powerful computers.",
  "format": "gguf",
  "settings": {
    "ctx_len": 4096,
    "prompt_template": "<|system|>\n{system_message}<|user|>\n{prompt}<|assistant|>",
    "llama_model_path": "tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf"
  },
  "parameters": {
    "temperature": 0.7,
    "top_p": 0.95,
    "stream": true,
    "max_tokens": 2048,
    "stop": [],
    "frequency_penalty": 0,
    "presence_penalty": 0
  },
  "metadata": {
    "author": "TinyLlama",
    "tags": ["Tiny", "Foundation Model"],
    "size": 669000000
  },
  "engine": "nitro"
}
```
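
If you prefer to script this step instead of editing files by hand, the sketch below creates the model folder and writes a `model.json` equivalent to the example above. It assumes the Jan data folder is at `~/jan` and that the GGUF file sits in `~/Downloads`; both paths, and the use of the same filename for `filename` and `llama_model_path`, are illustrative assumptions.

```python
import json
from pathlib import Path

# Assumed locations; adjust if your Jan data folder or GGUF file live elsewhere.
gguf_file = Path("~/Downloads/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf").expanduser().resolve()
model_dir = Path("~/jan/models/tinyllama-1.1b").expanduser()

model_json = {
    "sources": [{"filename": gguf_file.name, "url": str(gguf_file)}],
    "id": "tinyllama-1.1b",  # must match the folder name
    "object": "model",
    "name": "(Absolute Path) TinyLlama Chat 1.1B Q4",
    "version": "1.0",
    "description": "TinyLlama is a tiny model with only 1.1B. It's a good model for less powerful computers.",
    "format": "gguf",
    "settings": {
        "ctx_len": 4096,
        "prompt_template": "<|system|>\n{system_message}<|user|>\n{prompt}<|assistant|>",
        "llama_model_path": gguf_file.name,
    },
    "parameters": {
        "temperature": 0.7,
        "top_p": 0.95,
        "stream": True,
        "max_tokens": 2048,
        "stop": [],
        "frequency_penalty": 0,
        "presence_penalty": 0,
    },
    "metadata": {"author": "TinyLlama", "tags": ["Tiny", "Foundation Model"], "size": 669000000},
    "engine": "nitro",
}

# Create the folder and write model.json; Jan picks it up after a restart.
model_dir.mkdir(parents=True, exist_ok=True)
(model_dir / "model.json").write_text(json.dumps(model_json, indent=2))
print("Wrote", model_dir / "model.json")
```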
### 3. Start the Model
Restart Jan and navigate to the Hub. Locate your model and click the Use button.

@@ -88,7 +88,7 @@ You can find your API keys in the [OpenAI Platform](https://platform.openai.com/
Restart Jan and navigate to the Hub. Then, select your configured model and start the model.


## Engines with OAI Compatible Configuration
@@ -159,7 +159,7 @@ Navigate to the `~/jan/models` folder. Create a folder named `mistral-ins-7b-q4`
Restart Jan and navigate to the Hub. Locate your model and click the Use button.


## Assistance and Support
@@ -1,6 +1,6 @@
---
title: Start Local Server
slug: /guides/using-server/server
slug: /guides/using-server/start-server
description: How to run Jan's built-in API server.
keywords:
[
@@ -35,7 +35,7 @@ To get started with Continue in VS Code, please follow this [guide to install Co
### 2. Enable Jan API Server
To configure the Continue to use Jan's Local Server, you need to enable Jan API Server with your preferred model, please follow this [guide to enable Jan API Server](../05-using-server/01-server.md)
To configure Continue to use Jan's Local Server, you need to enable the Jan API Server with your preferred model. Please follow this [guide to enable Jan API Server](/guides/using-server/start-server).
### 3. Configure Continue to Use Jan's Local Server
@@ -90,7 +90,7 @@ Restart Jan and navigate to the Hub. Locate your model and click the Use button.

## Steps to Migrate Your Downloaded Model from LM Studio to Jan (Version 0.4.6 and older)
## Steps to Migrate Your Downloaded Model from LM Studio to Jan (version 0.4.6 and older)
### 1. Migrate Your Downloaded Model
@@ -107,3 +107,67 @@ Ensure the folder name property is the same as the model name of `.gguf` filenam
Restart Jan and navigate to the Hub. Jan will automatically detect the model and display it there. Locate your model and click the Use button to try out the migrated model.

## Steps to Point Jan to a Model Downloaded by LM Studio (version 0.4.7+)
Starting from version 0.4.7, Jan supports importing models using an absolute filepath, so you can directly use the model from the LM Studio folder.
### 1. Reveal the Model's Absolute Path
Navigate to `My Models` in the LM Studio application and reveal the model folder. Then, you can get the absolute path of your model.

### 2. Modify the Model JSON
Navigate to the `~/jan/models` folder. Create a folder named `<modelname>`, for example `phi-2.Q4_K_S`, and create a `model.json` file inside it with the following configuration:
- Ensure the filename is `model.json`.
- Ensure the `id` property matches the folder name you created.
- Ensure the `url` property is either a direct binary download link ending in `.gguf` or, as in this guide, the absolute filepath of the model file. In this example, the absolute filepath is `/Users/<username>/.cache/lm-studio/models/TheBloke/phi-2-GGUF/phi-2.Q4_K_S.gguf`.
- Ensure the `engine` property is set to `nitro`.
```json
{
  "object": "model",
  "version": 1,
  "format": "gguf",
  "sources": [
    {
      "filename": "phi-2.Q4_K_S.gguf",
      "url": "<absolute-path-of-model-file>"
    }
  ],
  "id": "phi-2.Q4_K_S",
  "name": "phi-2.Q4_K_S",
  "created": 1708308111506,
  "description": "phi-2.Q4_K_S - user self import model",
  "settings": {
    "ctx_len": 4096,
    "embedding": false,
    "prompt_template": "{system_message}\n### Instruction: {prompt}\n### Response:",
    "llama_model_path": "phi-2.Q4_K_S.gguf"
  },
  "parameters": {
    "temperature": 0.7,
    "top_p": 0.95,
    "stream": true,
    "max_tokens": 2048,
    "stop": ["<endofstring>"],
    "frequency_penalty": 0,
    "presence_penalty": 0
  },
  "metadata": {
    "size": 1615568736,
    "author": "User",
    "tags": []
  },
  "engine": "nitro"
}
```
### 3. Start the Model
Restart Jan and navigate to the Hub. Jan will automatically detect the model and display it there. Locate your model and click the Use button to try out the imported model.
