docs: add pointing client to remote server

parent 7e88f1c776
commit 39be5bf9c3
@@ -28,7 +28,7 @@ Jan is compatible with all GGUF models.

If you cannot find the model you want in the Hub or have a custom model you want to use, you can import it manually.

In this guide, we will show you how to import a GGUF model from [HuggingFace](https://huggingface.co/), using our latest model, [Trinity](https://huggingface.co/janhq/trinity-v1-GGUF), as an example.

> We are shipping a UI to make this easier soon, but it's a bit manual for now. Apologies.

@@ -126,7 +126,7 @@ Edit `model.json` and include the following configurations:

- Ensure the filename is `model.json`.
- Ensure the `id` property matches the folder name you created.
- Ensure the GGUF filename matches the `id` property exactly.
- Ensure the `source_url` property is the direct binary download link ending in `.gguf`. On HuggingFace, you can find the direct links in the `Files and versions` tab.
- Ensure you are using the correct `prompt_template`. This is usually provided on the HuggingFace model's description page.
- Ensure the `state` property is set to `ready`. (See the sketch after this list for how these fields fit together.)

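For orientation, here is a minimal sketch of a `model.json` that satisfies this checklist. The `<...>` placeholders are illustrative rather than real values, and the placement of `prompt_template` under `settings` is an assumption on our part; the complete Trinity example in this guide is the authoritative layout.

```js
{
  // Must match the folder name you created under ~/jan/models
  "id": "<MODEL_ID>",
  "object": "model",
  "name": "<MODEL_NAME>",
  "version": "1.0",
  // Direct .gguf download link from the model's `Files and versions` tab;
  // the filename should match the `id` exactly
  "source_url": "https://huggingface.co/<ORG>/<REPO>/resolve/main/<MODEL_ID>.gguf",
  "settings": {
    // Prompt template from the model's HuggingFace description page
    "prompt_template": "<PROMPT_TEMPLATE>"
  },
  "parameters": {},
  "metadata": {},
  "engine": "nitro",
  "state": "ready"
}
```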
@@ -154,9 +154,9 @@ Edit `model.json` and include the following configurations:

    "tags": ["7B", "Merged"],
    "size": 4370000000
  },
  "engine": "nitro",
  // highlight-next-line
  "state": "ready"
}
```

@@ -168,6 +168,70 @@ Restart Jan and navigate to the Hub. Locate your model and click the `Download`

Your model is now ready to use in Jan.

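Besides the chat UI, you can exercise the imported model programmatically. The sketch below is illustrative only: it assumes you have enabled Jan's Local API Server in the settings and that it listens on port `1337`; adjust the host, port, and `model` id to match your setup.

```js
// Quick check against Jan's OpenAI-compatible Local API Server.
// Assumptions: the server is enabled in Jan's settings and listens on
// http://localhost:1337 (change host/port if yours differs). Requires Node 18+.
fetch("http://localhost:1337/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "<MODEL_ID>", // the `id` from your model.json
    messages: [{ role: "user", content: "Say hello in one sentence." }],
  }),
})
  .then((res) => res.json())
  .then((data) => console.log(data.choices[0].message.content))
  .catch((err) => console.error("Request failed:", err));
```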
## Configuring Client Connection to Remote/Local Server

In this guide, we will show you how to configure a client connection to a remote or local server, using LM Studio as an example.

At the moment, you can only connect to one compatible server at a time (e.g. OpenAI Platform, Azure OpenAI, LM Studio, etc.).

### 1. Configure the Local Server Engine

Create `lmstudio.json` in the `~/jan/engines` folder. Set the `full_url` property to the endpoint of the server you want to connect to. For example, if you want to connect to LM Studio, you can configure it as follows:

```js
{
  // highlight-next-line
  "full_url": "http://<REMOTE_LMSTUDIO_IP>:<REMOTE_LMSTUDIO_PORT>/v1/chat/completions",
  // Skip api_key if your local server does not require authentication
  // "api_key": "sk-<your key here>"
}
```
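Before wiring this engine into a model, you may want to confirm that the endpoint in `full_url` is reachable from your machine. The sketch below simply POSTs a chat payload to that URL; the request body shape is an assumption based on the OpenAI-style `/v1/chat/completions` path, and you should add an `Authorization` header if your server requires the `api_key`.

```js
// Sanity check: make sure the server behind `full_url` answers before
// pointing Jan at it. Reuse the exact values from lmstudio.json.
// Requires Node 18+ for the built-in fetch.
const fullUrl =
  "http://<REMOTE_LMSTUDIO_IP>:<REMOTE_LMSTUDIO_PORT>/v1/chat/completions";

fetch(fullUrl, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    // Minimal OpenAI-style chat payload, implied by the endpoint path
    messages: [{ role: "user", content: "ping" }],
  }),
})
  .then((res) => console.log("Endpoint reachable, HTTP status:", res.status))
  .catch((err) => console.error("Could not reach the endpoint:", err));
```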

### 2. Create a Model JSON

Navigate to the `~/jan/models` folder. Create a folder named `remote-lmstudio` and create a `model.json` file inside it with the following configurations:

- Ensure the filename is `model.json`.
- Ensure the `id` property matches the folder name you created.
- Ensure the `format` property is set to `api`.
- Ensure the `engine` property matches the name of the engine file you created in `~/jan/engines`, without the `.json` extension. In this example, it is `lmstudio`.
- Ensure the `state` property is set to `ready`.

```js
{
  "source": [
    {
      "filename": "lmstudio",
      "url": "https://lmstudio.ai"
    }
  ],
  // highlight-next-line
  "id": "remote-lmstudio",
  "object": "model",
  "name": "remote lmstudio",
  "version": "1.0",
  "description": "Jan integration with remote LMstudio server",
  // highlight-next-line
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "LMstudio",
    "tags": ["remote", "awesome"]
  },
  // highlight-start
  "engine": "lmstudio",
  "state": "ready"
  // highlight-end
}
```

### 3. Start the Model

Restart Jan and navigate to the Hub. Locate your model and click the `Use` button.



## Assistance and Support

If you have questions or are looking for more preconfigured GGUF models, please feel free to join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.

Binary file not shown (new image, 344 KiB).