docs: update model content and add keywords

This commit is contained in:
Arista Indrajaya 2024-02-29 23:10:50 +07:00
parent 7004a8b936
commit e4ff358bfa
3 changed files with 89 additions and 47 deletions

View File

@@ -1,16 +1,25 @@
---
title: Customize Engine Settings
sidebar_position: 1
description: A step-by-step guide to change your engine's settings.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
import-models-manually,
customize-engine-settings,
]
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
# Customize Engine Settings
A step-by-step guide to change your engine's settings.
---
In this guide, we'll walk you through the process of customizing your engine settings by tweaking the `nitro.json` file.
1. Navigate to the `App Settings` > `Advanced` > `Open App Directory` > `~/jan/engine` folder.
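As a sketch of what such a tweak looks like, the snippet below creates a stand-in `nitro.json` in a throwaway folder and bumps one setting with an inline Python edit. The field names (`ctx_len`, `ngl`) are illustrative assumptions; edit the real file at `~/jan/engine/nitro.json` with the fields your engine actually exposes.

```sh
# Work in a throwaway folder; in practice you would edit ~/jan/engine/nitro.json.
cd "$(mktemp -d)"

# A stand-in nitro.json; the fields below are assumed for illustration.
cat > nitro.json <<'EOF'
{
  "ctx_len": 2048,
  "ngl": 100
}
EOF

# Raise the context length, rewriting the file in place.
python3 - <<'EOF'
import json

with open("nitro.json") as f:
    cfg = json.load(f)
cfg["ctx_len"] = 4096  # the tweak we want to apply
with open("nitro.json", "w") as f:
    json.dump(cfg, f, indent=2)
EOF

cat nitro.json
```

Restart Jan after saving so the engine picks up the new values.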

View File

@@ -1,23 +1,28 @@
---
title: Manual Import
sidebar_position: 3
description: A step-by-step guide on how to use the manual import feature.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
import-models-manually,
absolute-filepath,
]
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import janModel from './assets/jan-model-hub.png';
# Manual Import
This guide will show you how to perform a manual import. As an example, we use a GGUF model from [HuggingFace](https://huggingface.co/), our latest model [Trinity](https://huggingface.co/janhq/trinity-v1-GGUF).
---
:::warning
This is currently under development.
:::
## Newer versions - nightly versions and v0.4.4+
@@ -25,7 +30,7 @@ This section will show you how to perform manual import. In this guide, we are u
1. Navigate to the `App Settings` > `Advanced` > `Open App Directory` > `~/jan/models` folder.
<Tabs groupId="operating-systems">
<TabItem value="mac" label="MacOS" default>
```sh
cd ~/jan/models
@@ -57,13 +62,13 @@ Drag and drop your model binary into this folder, ensuring the `modelname.gguf`
If your model doesn't show up in the **Model Selector** in conversations, **restart the app** or contact us via our [Discord community](https://discord.gg/Dt7MxDyNNZ).
## Older versions - before v0.4.4
### 1. Create a Model Folder
1. Navigate to the `App Settings` > `Advanced` > `Open App Directory` > `~/jan/models` folder.
<Tabs groupId="operating-systems">
<TabItem value="mac" label="MacOS" default>
```sh
cd ~/jan/models
@@ -93,7 +98,7 @@ Jan follows a folder-based, [standard model template](https://jan.ai/docs/engine
This means that you can easily reconfigure your models, export them, and share your preferences transparently.
<Tabs groupId="operating-systems">
<TabItem value="mac" label="MacOS" default>
```sh
cd trinity-v1-7b
@@ -151,14 +156,15 @@ To update `model.json`:
  "engine": "nitro"
}
```
#### Regarding `model.json`
- In `settings`, two crucial values are:
  - `ctx_len`: Defined based on the model's context size.
  - `prompt_template`: Defined based on the model's trained template (e.g., ChatML, Alpaca).
- To set up the `prompt_template`:
  1. Visit [Hugging Face](https://huggingface.co/), an open-source machine learning platform.
  2. Find the current model that you're using (e.g., [Gemma 7b it](https://huggingface.co/google/gemma-7b-it)).
  3. Review the text and identify the template.
- In `parameters`, consider the following options. The fields in `parameters` are typically general and can be the same across models. An example is provided below:
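The hunk ends before the example; for reference, a typical `parameters` block (the same one this commit uses for the OpenAI engine settings) looks like:

```json
"parameters": {
  "temperature": 0.7,
  "top_p": 0.95,
  "stream": true,
  "max_tokens": 4096,
  "frequency_penalty": 0,
  "presence_penalty": 0
}
```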

View File

@@ -1,19 +1,23 @@
---
title: Remote Server Integration
sidebar_position: 2
description: A step-by-step guide on how to set up Jan to connect with any remote or local API server.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
import-models-manually,
remote server,
OAI compatible,
]
---
# Remote Server Integration
A step-by-step guide on how to set up Jan to connect with any remote or local API server.
---
:::warning
This is currently under development.
:::
This guide will show you how to configure Jan as a client and point it to any remote or local (self-hosted) API server.
## OpenAI Platform Configuration
@@ -156,11 +160,34 @@ Please note that currently, the code that supports any OpenAI-compatible endpoin
  },
  "engine": "openai"
}
```
### Regarding `model.json`
- In `settings`, two crucial values are:
  - `ctx_len`: Defined based on the model's context size.
  - `prompt_template`: Defined based on the model's trained template (e.g., ChatML, Alpaca).
- To set up the `prompt_template`:
  1. Visit [Hugging Face](https://huggingface.co/), an open-source machine learning platform.
  2. Find the current model that you're using (e.g., [Gemma 7b it](https://huggingface.co/google/gemma-7b-it)).
  3. Review the text and identify the template.
- In `parameters`, consider the following options. The fields in `parameters` are typically general and can be the same across models. An example is provided below:
```json
"parameters":{
"temperature": 0.7,
"top_p": 0.95,
"stream": true,
"max_tokens": 4096,
"frequency_penalty": 0,
"presence_penalty": 0
}
```
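Before restarting Jan, it can save a round trip to check that a hand-edited `model.json` is still valid JSON. A minimal sketch, using a stand-in file in a temporary folder; in practice, point the same command at the `model.json` inside your model's folder under `~/jan/models`:

```sh
# Stand-in for the model.json in your model's folder under ~/jan/models.
cd "$(mktemp -d)"
cat > model.json <<'EOF'
{
  "settings": { "ctx_len": 4096 },
  "engine": "openai"
}
EOF

# json.tool exits non-zero on a syntax error, so a typo stops here.
python3 -m json.tool model.json > /dev/null && echo "model.json parses cleanly"
```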
### 3. Start the Model
1. Restart Jan and navigate to the **Hub**.
2. Locate your model and click the **Use** button.
:::info[Assistance and Support]