diff --git a/docs/docs/guides/04-using-models/02-import-manually.mdx b/docs/docs/guides/04-using-models/02-import-manually.mdx
index 6fc7e04a3..9e6f7f0f8 100644
--- a/docs/docs/guides/04-using-models/02-import-manually.mdx
+++ b/docs/docs/guides/04-using-models/02-import-manually.mdx
@@ -28,7 +28,7 @@ Jan is compatible with all GGUF models.
 
 If you can not find the model you want in the Hub or have a custom model you want to use, you can import it manually.
 
-In this guide, we will show you how to import a GGUF model from [HuggingFace](https://huggingface.co/), using our lastest model, [Trinity](https://huggingface.co/janhq/trinity-v1-GGUF), as an example.
+In this guide, we will show you how to import a GGUF model from [HuggingFace](https://huggingface.co/), using our latest model, [Trinity](https://huggingface.co/janhq/trinity-v1-GGUF), as an example.
 
 > We are fast shipping a UI to make this easier, but it's a bit manual for now. Apologies.
 
@@ -126,7 +126,7 @@ Edit `model.json` and include the following configurations:
 
 - Ensure the filename must be `model.json`.
 - Ensure the `id` property matches the folder name you created.
 - Ensure the GGUF filename should match the `id` property exactly.
-- Ensure the `source_url` property is the direct binary download link ending in `.gguf`. In HuggingFace, you can find the direct links in `Files and versions` tab.
+- Ensure the `source_url` property is the direct binary download link ending in `.gguf`. In HuggingFace, you can find the direct links in the `Files and versions` tab.
 - Ensure you are using the correct `prompt_template`. This is usually provided in the HuggingFace model's description page.
 - Ensure the `state` property is set to `ready`.
@@ -154,9 +154,9 @@ Edit `model.json` and include the following configurations:
     "tags": ["7B", "Merged"],
     "size": 4370000000
   },
+  "engine": "nitro",
   // highlight-next-line
-  "state": "ready",
-  "engine": "nitro"
+  "state": "ready"
 }
 ```
 
@@ -168,6 +168,70 @@ Restart Jan and navigate to the Hub. Locate your model and click the `Download`
 
 Your model is now ready to use in Jan.
 
+## Configuring Client Connection to Remote/Local Server
+
+In this guide, we will show you how to configure a client connection to a remote or local server, using LM Studio as an example.
+
+At the moment, you can only connect to one compatible server at a time (e.g., OpenAI Platform, Azure OpenAI, LM Studio, etc.).
+
+### 1. Configure Local Server in Engine
+
+Create `lmstudio.json` in the `~/jan/engines` folder. Configure the `full_url` property with the endpoint of the server you want to connect to. For example, if you want to connect to LM Studio, you can configure it as follows:
+
+```js
+{
+  // highlight-next-line
+  "full_url": "http://<server-host>:<port>/v1/chat/completions",
+  // Skip api_key if your local server does not require authentication
+  // "api_key": "sk-"
+}
+```
+
+### 2. Create a Model JSON
+
+Navigate to the `~/jan/models` folder. Create a folder named `remote-lmstudio` and create a `model.json` file inside the folder with the following configurations:
+
+- Ensure the filename must be `model.json`.
+- Ensure the `id` property matches the folder name you created.
+- Ensure the `format` property is set to `api`.
+- Ensure the `engine` property matches the filename of the engine file you created in `~/jan/engines`. In this example, it is `lmstudio`.
+- Ensure the `state` property is set to `ready`.
+
+```js
+{
+  "source": [
+    {
+      "filename": "lmstudio",
+      "url": "https://lmstudio.ai"
+    }
+  ],
+  // highlight-next-line
+  "id": "remote-lmstudio",
+  "object": "model",
+  "name": "remote lmstudio",
+  "version": "1.0",
+  "description": "Jan integration with remote LMstudio server",
+  // highlight-next-line
+  "format": "api",
+  "settings": {},
+  "parameters": {},
+  "metadata": {
+    "author": "LMstudio",
+    "tags": ["remote", "awesome"]
+  },
+  // highlight-start
+  "engine": "lmstudio",
+  "state": "ready"
+  // highlight-end
+}
+```
+
+### 3. Start the Model
+
+Restart Jan and navigate to the Hub. Locate your model and click the `Use` button.
+
+![start-model](assets/configure-local-server.png)
+
 ## Assistance and Support
 
 If you have questions or are looking for more preconfigured GGUF models, please feel free to join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.
diff --git a/docs/docs/guides/04-using-models/assets/configure-local-server.png b/docs/docs/guides/04-using-models/assets/configure-local-server.png
new file mode 100644
index 000000000..14a18d152
Binary files /dev/null and b/docs/docs/guides/04-using-models/assets/configure-local-server.png differ