diff --git a/docs/docs/quickstart/models/customize-engine.mdx b/docs/docs/quickstart/models/customize-engine.mdx index 8c571d2b1..cf128f30c 100644 --- a/docs/docs/quickstart/models/customize-engine.mdx +++ b/docs/docs/quickstart/models/customize-engine.mdx @@ -1,16 +1,25 @@ --- +title: Customize Engine Settings sidebar_position: 1 +description: A step-by-step guide to changing your engine's settings. +keywords: + [ + Jan AI, + Jan, + ChatGPT alternative, + local AI, + private AI, + conversational AI, + no-subscription fee, + large language model, + import-models-manually, + customize-engine-settings, + ] --- import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; -# Customize Engine Settings - -A step-by-step guide to change your engine's settings. - ---- - In this guide, we'll walk you through the process of customizing your engine settings by tweaking the `nitro.json` file. 1. Navigate to the `App Settings` > `Advanced` > `Open App Directory` > `~/jan/engine` folder. diff --git a/docs/docs/quickstart/models/import-models.mdx b/docs/docs/quickstart/models/import-models.mdx index 899ea03ca..15b4ef3d4 100644 --- a/docs/docs/quickstart/models/import-models.mdx +++ b/docs/docs/quickstart/models/import-models.mdx @@ -1,23 +1,28 @@ --- +title: Manual Import sidebar_position: 3 +description: A step-by-step guide on how to manually import a model. +keywords: + [ + Jan AI, + Jan, + ChatGPT alternative, + local AI, + private AI, + conversational AI, + no-subscription fee, + large language model, + import-models-manually, + absolute-filepath, + ] --- import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; import janModel from './assets/jan-model-hub.png'; -# Manual Import -A step-by-step guide on how to perform manual import feature. - ---- -:::warning - -This is currently under development. - -::: - -This section will show you how to perform manual import.
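Once you have opened the `nitro.json` file in the `~/jan/engine` folder, it helps to know roughly what to expect inside. The fragment below is illustrative only: every field name and value is an assumption that may differ across Jan versions, so check it against your actual file before changing anything.

```json
{
  "ctx_len": 2048,
  "ngl": 100,
  "cpu_threads": 1,
  "cont_batching": false,
  "embedding": false
}
```

You will typically need to restart Jan after saving for the engine to pick up the changes.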
In this guide, we are using a GGUF model from [HuggingFace](https://huggingface.co/) and our latest model, [Trinity](https://huggingface.co/janhq/trinity-v1-GGUF), as an example. +This guide will show you how to manually import a model. As an example, we use a GGUF model from [HuggingFace](https://huggingface.co/), our latest model, [Trinity](https://huggingface.co/janhq/trinity-v1-GGUF). ## Newer versions - nightly versions and v0.4.4+ @@ -25,18 +30,18 @@ This section will show you how to perform manual import. In this guide, we are u 1. Navigate to the `App Settings` > `Advanced` > `Open App Directory` > `~/jan/models` folder. - - + + ```sh cd ~/jan/models ``` - + ```sh C:/Users//jan/models ``` - + ```sh cd ~/jan/models ``` @@ -57,24 +62,24 @@ Drag and drop your model binary into this folder, ensuring the `modelname.gguf` If your model doesn't show up in the **Model Selector** in conversations, **restart the app** or contact us via our [Discord community](https://discord.gg/Dt7MxDyNNZ). -## Older versions - before v0.4.4 +## Older versions - before v0.4.4 ### 1. Create a Model Folder 1. Navigate to the `App Settings` > `Advanced` > `Open App Directory` > `~/jan/models` folder. - - + + ```sh cd ~/jan/models ``` - + ```sh C:/Users//jan/models ``` - + ```sh cd ~/jan/models ``` @@ -93,20 +98,20 @@ Jan follows a folder-based, [standard model template](https://jan.ai/docs/engine This means that you can easily reconfigure your models, export them, and share your preferences transparently. - - + + ```sh cd trinity-v1-7b touch model.json ``` - + ```sh cd trinity-v1-7b echo {} > model.json ``` - + ```sh cd trinity-v1-7b touch model.json ``` @@ -151,14 +156,15 @@ To update `model.json`: "engine": "nitro" } ``` + #### Regarding `model.json` - In `settings`, two crucial values are: - `ctx_len`: Defined based on the model's context size. - `prompt_template`: Defined based on the model's trained template (e.g., ChatML, Alpaca).
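To make the `prompt_template` value concrete: for a ChatML-trained model, the `settings` block of `model.json` can look like the fragment below. The template string is illustrative (and assumes Jan substitutes the `{system_message}` and `{prompt}` placeholders); always copy the exact template from the model's card on Hugging Face.

```json
"settings": {
  "ctx_len": 4096,
  "prompt_template": "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
}
```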
- To set up the `prompt_template`: - 1. Visit Hugging Face. - 2. Locate the model (e.g., [Gemma 7b it](https://huggingface.co/google/gemma-7b-it)). + 1. Visit [Hugging Face](https://huggingface.co/), an open-source machine learning platform. + 2. Find the model that you're using (e.g., [Gemma 7b it](https://huggingface.co/google/gemma-7b-it)). 3. Review the text and identify the template. - In `parameters`, consider the following options. The fields in `parameters` are typically general and can be the same across models. An example is provided below: @@ -179,8 +185,8 @@ To update `model.json`: 2. Locate your model. 3. Click the **Download** button to download the model binary. -
-<img src="./assets/jan-model-hub.png" alt="jan-model-hub" />
+<img src={janModel} alt="jan-model-hub" />
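For versions before v0.4.4, the steps above reduce to a few terminal commands. A minimal sketch for macOS or Linux with placeholder names (the `touch` commands stand in for the real Hugging Face download and for the `model.json` you still need to fill in as described above):

```shell
# Placeholder model and file names; substitute your own.
# 1. Create the model folder inside Jan's data directory.
mkdir -p ~/jan/models/trinity-v1-7b

# 2. Put the downloaded GGUF binary into that folder
#    (an empty file stands in for the real download here).
touch ~/jan/models/trinity-v1-7b/trinity-v1.gguf

# 3. Create the model.json, to be filled in afterwards.
touch ~/jan/models/trinity-v1-7b/model.json

ls ~/jan/models/trinity-v1-7b
```

On newer versions (v0.4.4+ and nightly builds), only the folder and the `.gguf` binary are needed.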
:::info[Assistance and Support] diff --git a/docs/docs/quickstart/models/integrate-remote.mdx b/docs/docs/quickstart/models/integrate-remote.mdx index 510d34701..352476e3f 100644 --- a/docs/docs/quickstart/models/integrate-remote.mdx +++ b/docs/docs/quickstart/models/integrate-remote.mdx @@ -1,19 +1,23 @@ --- +title: Remote Server Integration sidebar_position: 2 +description: A step-by-step guide on how to set up Jan to connect with any remote or local API server. +keywords: + [ + Jan AI, + Jan, + ChatGPT alternative, + local AI, + private AI, + conversational AI, + no-subscription fee, + large language model, + import-models-manually, + remote server, + OAI compatible, + ] --- -# Remote Server Integration - -A step-by-step guide on how to set up Jan to connect with any remote or local API server. - ---- - -:::warning - -This is currently under development. - -::: - This guide will show you how to configure Jan as a client and point it to any remote & local (self-hosted) API server. ## OpenAI Platform Configuration @@ -156,11 +160,34 @@ Please note that currently, the code that supports any OpenAI-compatible endpoin }, "engine": "openai" } + +``` +### Regarding `model.json` + +- In `settings`, two crucial values are: + - `ctx_len`: Defined based on the model's context size. + - `prompt_template`: Defined based on the model's trained template (e.g., ChatML, Alpaca). + - To set up the `prompt_template`: + 1. Visit [Hugging Face](https://huggingface.co/), an open-source machine learning platform. + 2. Find the current model that you're using (e.g., [Gemma 7b it](https://huggingface.co/google/gemma-7b-it)). + 3. Review the text and identify the template. +- In `parameters`, consider the following options. The fields in `parameters` are typically general and can be the same across models. 
An example is provided below: + +```json +"parameters":{ + "temperature": 0.7, + "top_p": 0.95, + "stream": true, + "max_tokens": 4096, + "frequency_penalty": 0, + "presence_penalty": 0 +} ``` ### 3. Start the Model -Restart Jan and navigate to the **Hub**. Locate your model and click the **Use** button. +1. Restart Jan and navigate to the **Hub**. +2. Locate your model and click the **Use** button. :::info[Assistance and Support]
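Before restarting Jan in step 3, it can help to check that the `model.json` you edited is still valid JSON, since a stray comma is easy to miss. A minimal sketch, assuming `python3` is on your PATH; the folder name and file contents below are placeholders:

```shell
# Placeholder folder; point this at your actual model folder.
MODEL_DIR=~/jan/models/my-remote-model
mkdir -p "$MODEL_DIR"

# A minimal, illustrative model.json for a remote engine.
cat > "$MODEL_DIR/model.json" <<'EOF'
{
  "id": "my-remote-model",
  "engine": "openai"
}
EOF

# json.tool exits non-zero and reports the position on invalid JSON.
python3 -m json.tool "$MODEL_DIR/model.json" > /dev/null && echo "model.json OK"
```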