Update quickstart content

Add quickstart content to the Jan docs and update the Docusaurus config and Sidebars
This commit is contained in:
Arista Indrajaya 2024-02-23 17:26:24 +07:00
parent 56be7742e7
commit 4e89732419
44 changed files with 1227 additions and 0 deletions

View File

@ -0,0 +1,8 @@
{
"label": "Extensions",
"position": 5,
"link": {
"type": "generated-index",
"description": "More info regarding Extensions for Jan"
}
}

Binary file not shown.

After

Width:  |  Height:  |  Size: 83 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 88 KiB

View File

@ -0,0 +1,28 @@
---
sidebar_position: 2
---
import extensionsURL from './img/jan-ai-extensions.png';
# Import Extensions
Besides the default extensions, you can import additional extensions into Jan by following the steps below:
1. Navigate to **Settings** > **Extensions** > click **Select** under **Manual Installation**.
2. The `~/jan/extensions/extensions.json` file will then be updated automatically.
:::caution
You need to prepare the extension file in `.tgz` format to install.
:::
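If you are building the extension yourself, one way to produce that `.tgz` (assuming the extension is packaged as an npm project with a valid `package.json`) is:
```sh
# Run from the extension's project root
npm pack
# Produces <name>-<version>.tgz, which you can then select under Manual Installation
```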
<div class="text--center">
<img src="https://jan.ai/assets/images/02-import-extensions-1da9727340cdd1e76521936c648af0d6.gif" width={800} alt="jan-ai-extensions" />
</div>
:::info[Assistance and Support]
If you have questions, please join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.
:::

View File

@ -0,0 +1,137 @@
---
sidebar_position: 1
---
import janExtensionSetup from './img/extension-setup.png';
# Extension Setup
The current Jan Desktop Client ships with several default extensions built on top of this framework to enhance the user experience. In this guide, we list the default extensions and show you how to configure extension settings.
## Default Extensions
You can find the default extensions under `Settings` > `Extensions`.
<div class="text--center">
<img src={janExtensionSetup} width={800} alt="jan-extension-setup" />
</div>
## List of Default Extensions
| Extension Name | Version | Description | Source Code Link |
| -------------- | ------- | ----------- | ---------------- |
| Assistant Extension | `v1.0.0` | This extension enables assistants, including Jan, a default assistant that can call all downloaded models. | [Link to Source](https://github.com/janhq/jan/tree/dev/extensions/assistant-extension) |
| Conversational Extension | `v1.0.0` | This extension enables conversations and state persistence via your filesystem. | [Link to Source](https://github.com/janhq/jan/tree/dev/extensions/conversational-extension) |
| Inference Nitro Extension | `v1.0.0` | This extension embeds Nitro, a lightweight (3 MB) inference engine in C++. See nitro.jan.ai. | [Link to Source](https://github.com/janhq/jan/tree/dev/extensions/inference-nitro-extension) |
| Inference OpenAI Extension | `v1.0.0` | This extension enables OpenAI chat completion API calls. | [Link to Source](https://github.com/janhq/jan/tree/dev/extensions/inference-openai-extension) |
| Inference Triton TRT-LLM Extension | `v1.0.0` | This extension enables Nvidia's TensorRT-LLM as an inference engine option. | [Link to Source](https://github.com/janhq/jan/tree/dev/extensions/inference-triton-trtllm-extension) |
| Model Extension | `v1.0.22` | Model Management Extension provides model exploration and seamless downloads. | [Link to Source](https://github.com/janhq/jan/tree/dev/extensions/model-extension) |
| Monitoring Extension | `v1.0.9` | This extension offers system health and OS-level data. | [Link to Source](https://github.com/janhq/jan/tree/dev/extensions/monitoring-extension) |
## Configure Extension Settings
To configure extension settings:
1. Navigate to the `~/jan/extensions` folder.
2. Open the `extensions.json` file.
3. Edit the file with options including:
| Option | Description |
|-----------------|-------------------------------------------------|
| `_active` | Enable/disable the extension. |
| `listeners` | Default listener setting. |
| `origin` | Extension file path. |
| `installOptions`| Version and metadata configuration. |
| `name` | Extension name. |
| `version` | Extension version. |
| `main` | Main file path. |
| `description` | Extension description. |
| `url` | Extension URL. |
```json title="~/jan/extensions/extensions.json"
{
"@janhq/assistant-extension": {
"_active": true,
"listeners": {},
"origin": "/Applications/Jan.app/Contents/Resources/app.asar.unpacked/pre-install/janhq-assistant-extension-1.0.0.tgz",
"installOptions": { "version": false, "fullMetadata": false },
"name": "@janhq/assistant-extension",
"version": "1.0.0",
"main": "dist/index.js",
"description": "This extension enables assistants, including Jan, a default assistant that can call all downloaded models",
"url": "extension://@janhq/assistant-extension/dist/index.js"
},
"@janhq/conversational-extension": {
"_active": true,
"listeners": {},
"origin": "/Applications/Jan.app/Contents/Resources/app.asar.unpacked/pre-install/janhq-conversational-extension-1.0.0.tgz",
"installOptions": { "version": false, "fullMetadata": false },
"name": "@janhq/conversational-extension",
"version": "1.0.0",
"main": "dist/index.js",
"description": "This extension enables conversations and state persistence via your filesystem",
"url": "extension://@janhq/conversational-extension/dist/index.js"
},
"@janhq/inference-nitro-extension": {
"_active": true,
"listeners": {},
"origin": "/Applications/Jan.app/Contents/Resources/app.asar.unpacked/pre-install/janhq-inference-nitro-extension-1.0.0.tgz",
"installOptions": { "version": false, "fullMetadata": false },
"name": "@janhq/inference-nitro-extension",
"version": "1.0.0",
"main": "dist/index.js",
"description": "This extension embeds Nitro, a lightweight (3mb) inference engine written in C++. See nitro.jan.ai",
"url": "extension://@janhq/inference-nitro-extension/dist/index.js"
},
"@janhq/inference-openai-extension": {
"_active": true,
"listeners": {},
"origin": "/Applications/Jan.app/Contents/Resources/app.asar.unpacked/pre-install/janhq-inference-openai-extension-1.0.0.tgz",
"installOptions": { "version": false, "fullMetadata": false },
"name": "@janhq/inference-openai-extension",
"version": "1.0.0",
"main": "dist/index.js",
"description": "This extension enables OpenAI chat completion API calls",
"url": "extension://@janhq/inference-openai-extension/dist/index.js"
},
"@janhq/inference-triton-trt-llm-extension": {
"_active": true,
"listeners": {},
"origin": "/Applications/Jan.app/Contents/Resources/app.asar.unpacked/pre-install/janhq-inference-triton-trt-llm-extension-1.0.0.tgz",
"installOptions": { "version": false, "fullMetadata": false },
"name": "@janhq/inference-triton-trt-llm-extension",
"version": "1.0.0",
"main": "dist/index.js",
"description": "This extension enables Nvidia's TensorRT-LLM as an inference engine option",
"url": "extension://@janhq/inference-triton-trt-llm-extension/dist/index.js"
},
"@janhq/model-extension": {
"_active": true,
"listeners": {},
"origin": "/Applications/Jan.app/Contents/Resources/app.asar.unpacked/pre-install/janhq-model-extension-1.0.22.tgz",
"installOptions": { "version": false, "fullMetadata": false },
"name": "@janhq/model-extension",
"version": "1.0.22",
"main": "dist/index.js",
"description": "Model Management Extension provides model exploration and seamless downloads",
"url": "extension://@janhq/model-extension/dist/index.js"
},
"@janhq/monitoring-extension": {
"_active": true,
"listeners": {},
"origin": "/Applications/Jan.app/Contents/Resources/app.asar.unpacked/pre-install/janhq-monitoring-extension-1.0.9.tgz",
"installOptions": { "version": false, "fullMetadata": false },
"name": "@janhq/monitoring-extension",
"version": "1.0.9",
"main": "dist/index.js",
"description": "This extension provides system health and OS level data",
"url": "extension://@janhq/monitoring-extension/dist/index.js"
}
}
```
:::info[Assistance and Support]
If you have questions, please join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.
:::

View File

@ -0,0 +1,237 @@
---
sidebar_position: 2
hide_table_of_contents: true
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import installImageURL from '../../static/img/homepage-new/jan-ai-download.png';
# Installation
<Tabs>
<TabItem value="mac" label="Mac" default>
:::warning
Ensure that your macOS version is 13 or higher to run Jan.
:::
### Stable Releases
To download stable releases, go to [Jan.ai](https://jan.ai/) > select **Download for Mac**.
The download should be available as a `.dmg`.
### Nightly Releases
We provide the Nightly Release so that you can test new features and see what might be coming in a future stable release. Please be aware that there might be bugs!
You can download it from [Jan's Discord](https://discord.gg/FTk2MvZwJH) in the [`#nightly-builds`](https://discord.gg/q8szebnxZ7) channel.
### Experimental Mode
To enable experimental mode, go to **Settings** > **Advanced Settings** and toggle **Experimental Mode**.
### Uninstall for Troubleshooting
:::note
If you are stuck in a broken build, try to uninstall Jan by following the steps below.
:::
To uninstall:
1. Delete Jan from your `/Applications` folder.
2. Delete Application data.
```sh
# Newer versions
rm -rf ~/Library/Application\ Support/jan
# Versions 0.2.0 and older
rm -rf ~/Library/Application\ Support/jan-electron
```
3. Clear Application cache.
```sh
rm -rf ~/Library/Caches/jan*
```
4. Use the following commands to remove any dangling backend processes:
```sh
ps aux | grep nitro
```
Look for processes like "nitro" and "nitro_arm_64", and kill them one by one with:
```sh
kill -9 <PID>
```
</TabItem>
<TabItem value="windows" label="Windows">
:::warning
Ensure that your system meets the following requirements:
- Windows 10 or higher is required to run Jan.
To enable GPU support, you will need:
- NVIDIA GPU with CUDA Toolkit 11.7 or higher
- NVIDIA driver 470.63.01 or higher
:::
### Stable Releases
To download stable releases, go to [Jan.ai](https://jan.ai/) > select **Download for Windows**.
The download should be available as a `.exe` file.
### Nightly Releases
We provide the Nightly Release so that you can test new features and see what might be coming in a future stable release. Please be aware that there might be bugs!
You can download it from [Jan's Discord](https://discord.gg/FTk2MvZwJH) in the [`#nightly-builds`](https://discord.gg/q8szebnxZ7) channel.
### Experimental Mode
To enable experimental mode, go to **Settings** > **Advanced Settings** and toggle **Experimental Mode**.
### Default Installation Directory
By default, Jan is installed in the following directory:
```sh
# Default installation directory
C:\Users\{username}\AppData\Local\Programs\Jan
```
### Uninstall for Troubleshooting
:::note
If you are stuck in a broken build, try to uninstall Jan by following the steps below.
:::
To uninstall Jan on Windows, uninstall it via [Windows Control Panel](https://support.microsoft.com/en-us/windows/uninstall-or-remove-apps-and-programs-in-windows-4b55f974-2cc6-2d2b-d092-5905080eaf98).
To remove all user data associated with Jan, you can delete the `/jan` directory in Windows' [AppData directory](https://superuser.com/questions/632891/what-is-appdata).
```sh
cd C:\Users\%USERNAME%\AppData\Roaming
rmdir /S jan
```
</TabItem>
<TabItem value="linux" label="Linux">
:::warning
Ensure that your system meets the following requirements:
- glibc 2.27 or higher (check with `ldd --version`)
- gcc 11, g++ 11, cpp 11, or higher
To enable GPU support, you will need:
- NVIDIA GPU with CUDA Toolkit 11.7 or higher
- NVIDIA driver 470.63.01 or higher
:::
### Stable Releases
To download stable releases, go to [Jan.ai](https://jan.ai/) > select **Download for Linux**.
The download should be available as a `.AppImage` file or a `.deb` file.
### Nightly Releases
We provide the Nightly Release so that you can test new features and see what might be coming in a future stable release. Please be aware that there might be bugs!
You can download it from [Jan's Discord](https://discord.gg/FTk2MvZwJH) in the [`#nightly-builds`](https://discord.gg/q8szebnxZ7) channel.
### Experimental Mode
To enable experimental mode, go to **Settings** > **Advanced Settings** and toggle **Experimental Mode**.
<Tabs groupId="linux_type">
<TabItem value="linux_main" label="Linux">
To install Jan, use your package manager's install command or `dpkg`.
</TabItem>
<TabItem value="deb_ub" label="Debian / Ubuntu">
To install Jan, run the following command:
```sh
# Install Jan using dpkg
sudo dpkg -i jan-linux-amd64-{version}.deb
# Install Jan using apt-get
sudo apt-get install ./jan-linux-amd64-{version}.deb
# where jan-linux-amd64-{version}.deb is the path to the downloaded Jan package
```
</TabItem>
<TabItem value="other" label="Others">
To install Jan, run the following commands:
```sh
# Install Jan using AppImage
chmod +x jan-linux-x86_64-{version}.AppImage
./jan-linux-x86_64-{version}.AppImage
# where jan-linux-x86_64-{version}.AppImage is the path to the downloaded Jan package
```
</TabItem>
</Tabs>
### Uninstall for Troubleshooting
:::note
If you are stuck in a broken build, try to uninstall Jan by following the steps below.
:::
<Tabs groupId="linux_type">
<TabItem value="linux_main" label="Linux">
To uninstall Jan, you should use your package manager's uninstall or remove option.
This will return your system to its state before the installation of Jan.
This method can also reset all settings if you are experiencing any issues with Jan.
</TabItem>
<TabItem value="deb_ub" label="Debian / Ubuntu">
To uninstall Jan, run the following command:
```sh
sudo apt-get remove jan
# where jan is the name of the Jan package
```
This will return your system to its state before the installation of Jan.
This method can also be used to reset all settings if you are experiencing any issues with Jan.
</TabItem>
<TabItem value="other" label="Others">
To uninstall Jan, delete the `.AppImage` file.
If you wish to completely remove all user data associated with Jan after uninstallation, you can delete the user data at `~/jan`.
This method can also reset all settings if you are experiencing any issues with Jan.
</TabItem>
</Tabs>
</TabItem>
</Tabs>

View File

@ -0,0 +1,8 @@
{
"label": "Integrations",
"position": 6,
"link": {
"type": "generated-index",
"description": "More info regarding Jan.ai integrations"
}
}

View File

@ -0,0 +1,71 @@
---
sidebar_position: 3
---
import azure from './img/azure.png';
# Azure OpenAI
## Overview
This guide will show you how to integrate Azure OpenAI Service with Jan. The [Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview?source=docs) offers robust APIs, making it simple for you to incorporate OpenAI's language models into your applications.
## How to Integrate Azure
<div class="text--center">
<img src={azure} width={800} alt="azure" />
</div>
### Step 1: Configure Azure OpenAI Service API Key
1. Set up and deploy the Azure OpenAI Service.
2. Once you've set up and deployed Azure OpenAI Service, you can find the endpoint and API key in [Azure OpenAI Studio](https://oai.azure.com/) under `Chat` > `View code`.
3. Set up the endpoint and API key for Azure OpenAI Service in the `~/jan/engines/openai.json` file.
```json title="~/jan/engines/openai.json"
{
// https://hieujan.openai.azure.com/openai/deployments/gpt-35-hieu-jan/chat/completions?api-version=2023-07-01-preview
"full_url": "https://<your-resource-name>.openai.azure.com/openai/deployments/<your-deployment-name>/chat/completions?api-version=<api-version>",
"api_key": "<your-api-key>"
}
```
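You can sanity-check the endpoint outside Jan with a direct request. This is a hedged sketch; the resource name, deployment name, `api-version`, and key are placeholders for your own values:
```sh
# Send a test chat completion straight to your Azure OpenAI deployment
curl "https://<your-resource-name>.openai.azure.com/openai/deployments/<your-deployment-name>/chat/completions?api-version=<api-version>" \
  -H "Content-Type: application/json" \
  -H "api-key: <your-api-key>" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```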
### Step 2: Modify a JSON Model
1. Go to the `~/jan/models` directory.
2. Make a new folder called `(your-deployment-name)`, like `gpt-35-hieu-jan`.
3. Create a `model.json` file inside the folder with the specified configurations:
- Ensure the file is named `model.json`.
- Match the `id` property with both the folder name and your deployment name.
- Set the `format` property as `api`.
- Choose `openai` for the `engine` property.
- Set the `state` property as `ready`.
```json title="~/jan/models/gpt-35-hieu-jan/model.json"
{
"sources": [
{
"filename": "azure_openai",
"url": "https://hieujan.openai.azure.com"
}
],
"id": "gpt-35-hieu-jan",
"object": "model",
"name": "Azure OpenAI GPT 3.5",
"version": "1.0",
"description": "Azure Open AI GPT 3.5 model is extremely good",
"format": "api",
"settings": {},
"parameters": {},
"metadata": {
"author": "OpenAI",
"tags": ["General", "Big Context Length"]
},
"engine": "openai"
}
```
### Step 3: Start the Model
Restart Jan and go to the **Hub**. Find your model and click the **Use** button.

Binary file not shown.

After

Width:  |  Height:  |  Size: 111 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 258 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 7.3 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 12 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 111 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 644 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 128 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 144 KiB

View File

@ -0,0 +1,68 @@
---
sidebar_position: 2
---
import openrouterGIF from './img/jan-ai-openrouter.gif';
import openrouter from './img/openrouter.png';
# OpenRouter
## Overview
This guide will show you how to integrate OpenRouter with Jan, allowing you to utilize remote LLMs accessible through OpenRouter. [OpenRouter](https://openrouter.ai/docs#quick-start) is a tool that gathers AI models. Developers can utilize its API to engage with diverse large language models, generative image models, and generative 3D object models.
## How to Integrate OpenRouter
<div class="text--center">
<img src={openrouter} width={800} alt="openrouter" />
</div>
### Step 1: Configure OpenRouter API key
1. Find your API key in your OpenRouter account's API Keys page.
2. Set the OpenRouter API key in the `~/jan/engines/openai.json` file.
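A minimal sketch of that file, assuming OpenRouter's OpenAI-compatible chat completions endpoint (replace the key with your own):
```json title="~/jan/engines/openai.json"
{
  "full_url": "https://openrouter.ai/api/v1/chat/completions",
  "api_key": "<your-openrouter-api-key>"
}
```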
### Step 2: Modify a JSON Model
1. Go to the `~/jan/models` directory.
2. Make a new folder called `openrouter-(modelname)`, like `openrouter-dolphin-mixtral-8x7b`.
3. Inside the folder, create a `model.json` file with the following settings:
- Make sure the filename is `model.json`.
- Set the `id` property to the model ID obtained from OpenRouter.
- Set the `format` property to `api`.
- Set the `engine` property to `openai`.
- Ensure the `state` property is set to `ready`.
```json title="~/jan/models/openrouter-dolphin-mixtral-8x7b/model.json"
{
"sources": [
{
"filename": "openrouter",
"url": "https://openrouter.ai/"
}
],
"id": "cognitivecomputations/dolphin-mixtral-8x7b",
"object": "model",
"name": "Dolphin 2.6 Mixtral 8x7B",
"version": "1.0",
"description": "This is a 16k context fine-tune of Mixtral-8x7b. It excels in coding tasks due to extensive training with coding data and is known for its obedience, although it lacks DPO tuning. The model is uncensored and is stripped of alignment and bias. It requires an external alignment layer for ethical use. Users are cautioned to use this highly compliant model responsibly, as detailed in a blog post about uncensored models at erichartford.com/uncensored-models.",
"format": "api",
"settings": {},
"parameters": {},
"metadata": {
"tags": ["General", "Big Context Length"]
},
"engine": "openai"
}
```
### Step 3: Start the Model
Restart Jan and go to the **Hub**. Find your model and click the **Use** button.
## Use Cases for Jan Integration with OpenRouter
Below are examples of the integration:
<div class="text--center">
<img src={openrouterGIF} width={800} alt="jan-ai-openrouter" />
</div>

View File

@ -0,0 +1,38 @@
---
sidebar_position: 4
---
import raycast from './img/raycast.png';
import raycastImage from './img/raycast-image.png';
# Raycast
## Overview
[Raycast](https://www.raycast.com/) is a productivity tool designed for macOS that enhances workflow efficiency by providing quick access to various tasks and functionalities through a keyboard-driven interface.
## How to Integrate Raycast
<div class="text--center">
<img src={raycast} width={800} alt="raycast" />
</div>
### Step 1: Download the TinyLlama model from Jan
Go to the **Hub** and download the TinyLlama model. The model will be available at `~/jan/models/tinyllama-1.1b`.
### Step 2: Clone and Run the Program
1. Clone this [GitHub repository](https://github.com/InNoobWeTrust/nitro-raycast).
2. Execute the project using the following command:
```sh
npm i && npm run dev
```
### Step 3: Search for Nitro
Search for `Nitro` in the program, and you can then use the models from Jan in Raycast.
<div class="text--center">
<img src={raycastImage} width={800} alt="raycast" />
</div>

View File

@ -0,0 +1,100 @@
---
sidebar_position: 1
---
import continue_ask from './img/jan-ai-continue-ask.png';
import continue_comment from './img/jan-ai-continue-comment.gif';
import vscode from './img/vscode.png';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
# Continue Integration for Visual Studio Code
## Overview
This guide showcases how to integrate Continue with Jan and VS Code to boost your coding with a local AI language model. [Continue](https://continue.dev/docs/intro) is an open-source autopilot compatible with Visual Studio Code and JetBrains, offering the simplest way to code with any LLM (large language model).
## How to Integrate with Continue
<div class="text--center">
<img src={vscode} width={800} alt="vscode" />
</div>
### Step 1: Install Continue on Visual Studio Code
Follow this [guide to install the Continue extension on Visual Studio Code](https://continue.dev/docs/quickstart).
### Step 2: Enable the Jan API Server
To set up Continue for use with Jan's Local Server, you must activate the Jan API Server with your chosen model.
1. Press the `<>` button. Jan will take you to the **Local API Server** section.
2. Set up the server, including the **IP Port**, **Cross-Origin Resource Sharing (CORS)**, and **Verbose Server Logs** options.
3. Press the **Start Server** button.
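Once the server is running, you can sanity-check it with a direct request. This is a hedged sketch assuming the default port `1337` and that `mistral-ins-7b-q4` is the model you started:
```sh
# Send a test chat completion to Jan's local, OpenAI-compatible server
curl http://localhost:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistral-ins-7b-q4",
    "messages": [{ "role": "user", "content": "Hello" }]
  }'
```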
### Step 3: Configure Continue to Use Jan's Local Server
1. Go to the `~/.continue` directory.
<Tabs>
<TabItem value="mac" label="Mac" default>
```sh
cd ~/.continue
```
</TabItem>
<TabItem value="windows" label="Windows">
```sh
C:/Users/<your_user_name>/.continue
```
</TabItem>
<TabItem value="linux" label="Linux">
```sh
cd ~/.continue
```
</TabItem>
</Tabs>
```json title="~/.continue/config.json"
{
"models": [
{
"title": "Jan",
"provider": "openai",
"model": "mistral-ins-7b-q4",
"apiKey": "EMPTY",
"apiBase": "http://localhost:1337"
}
]
}
```
2. Ensure the file has the following configurations:
- Ensure `openai` is selected as the `provider`.
- Match the `model` with the one enabled in the Jan API Server.
- Set `apiBase` to `http://localhost:1337`.
- Set the `apiKey` field to `EMPTY`.
### Step 4: Ensure the Model Is Activated in Jan
1. Navigate to `Settings` > `Models`.
2. Activate the model you want to use in Jan by clicking the **three dots (⋮)** and selecting **Start Model**.
## Use cases for Jan integration with Visual Studio Code
Below are examples of the integration:
### 1. Asking questions about the code
1. Highlight a code snippet and press `Command + Shift + M` to open the Left Panel.
2. Select Jan at the bottom and ask a question about the code, for example, `Explain this code`.
<div class="text--center">
<img src={continue_ask} width={800} alt="jan-ai-continue-ask" />
</div>
### 2. Editing the code with the help of a large language model
1. Select a code snippet and use `Command + Shift + L`.
2. Enter your editing request, such as `Add comments to this code`.
<div class="text--center">
<img src={continue_comment} width={800} alt="jan-ai-continue-comment" />
</div>

View File

@ -0,0 +1,81 @@
---
sidebar_position: 3
---
# Pre-configured Models
## Overview
Jan provides various pre-configured AI models with different capabilities. Please see the following list for details.
| Model | Description |
| ----- | ----------- |
| Mistral Instruct 7B Q4 | A model designed for comprehensive understanding through training on extensive internet data |
| OpenHermes Neural 7B Q4 | A merged model using the TIES method. It performs well in various benchmarks |
| Stealth 7B Q4 | A new experimental family designed to enhance mathematical and logical abilities |
| Trinity-v1.2 7B Q4 | An experimental model merge using the Slerp method |
| Openchat-3.5 7B Q4 | An open-source model whose performance surpasses ChatGPT-3.5 and Grok-1 across various benchmarks |
| Wizard Coder Python 13B Q5 | A Python coding model that demonstrates high proficiency in specific domains like coding and mathematics |
| OpenAI GPT 3.5 Turbo | The latest GPT-3.5 Turbo model, with higher accuracy at responding in requested formats and a fix for a bug that caused a text encoding issue for non-English-language function calls |
| OpenAI GPT 3.5 Turbo 16k 0613 | A snapshot of gpt-3.5-turbo-16k from June 13th, 2023 |
| OpenAI GPT 4 | The latest GPT-4 model, intended to reduce cases of “laziness” where the model doesn't complete a task |
| TinyLlama Chat 1.1B Q4 | A tiny model with only 1.1B parameters. It's a good model for less powerful computers |
| Deepseek Coder 1.3B Q8 | A model that excels in project-level code completion, with advanced capabilities across multiple programming languages |
| Phi-2 3B Q8 | A 2.7B model excelling in common-sense and logical reasoning benchmarks, trained on synthetic texts and filtered websites |
| Llama 2 Chat 7B Q4 | A model specifically designed for comprehensive understanding through training on extensive internet data |
| CodeNinja 7B Q4 | A model that is good for coding tasks and can handle various languages including Python, C, C++, Rust, Java, JavaScript, and more |
| Noromaid 7B Q5 | A model designed for role-playing with human-like behavior |
| Starling alpha 7B Q4 | An upgrade of Openchat 3.5 using RLAIF that performs well on various benchmarks, especially with GPT-4 judging its performance |
| Yarn Mistral 7B Q4 | A long-context language model that supports a 128k-token context window |
| LlaVa 1.5 7B Q5 K | A model that brings vision understanding to Jan |
| BakLlava 1 | A model that brings vision understanding to Jan |
| Solar Slerp 10.7B Q4 | A model that uses the Slerp merge method from SOLAR Instruct and Pandora-v1 |
| LlaVa 1.5 13B Q5 K | A model that brings vision understanding to Jan |
| Deepseek Coder 33B Q5 | A model that excels in project-level code completion, with advanced capabilities across multiple programming languages |
| Phind 34B Q5 | A multi-lingual model fine-tuned on 1.5B tokens of high-quality programming data that excels in various programming languages and is designed to be steerable and user-friendly |
| Yi 34B Q5 | A specialized chat model known for its diverse and creative responses that excels across various NLP tasks and benchmarks |
| Capybara 200k 34B Q5 | A long-context model that supports 200K tokens |
| Dolphin 8x7B Q4 | An uncensored model built on Mixtral-8x7b that is good at programming tasks |
| Mixtral 8x7B Instruct Q4 | A pretrained generative Sparse Mixture of Experts that outperforms 70B models on most benchmarks |
| Tulu 2 70B Q4 | A strong alternative to Llama 2 70B Chat for use as a helpful assistant |
| Llama 2 Chat 70B Q4 | A model specifically designed for comprehensive understanding through training on extensive internet data |
:::note
OpenAI GPT models require an OpenAI API subscription to use. To learn more, [click here](https://openai.com/pricing).
:::
## Model details
| Model | Author | Model ID | Format | Size |
| ----- | ------ | -------- | ------ | ---- |
| Mistral Instruct 7B Q4 | MistralAI, The Bloke | `mistral-ins-7b-q4` | **GGUF** | 4.07GB |
| OpenHermes Neural 7B Q4 | Intel, Jan | `openhermes-neural-7b` | **GGUF** | 4.07GB |
| Stealth 7B Q4 | Jan | `stealth-v1.2-7b` | **GGUF** | 4.07GB |
| Trinity-v1.2 7B Q4 | Jan | `trinity-v1.2-7b` | **GGUF** | 4.07GB |
| Openchat-3.5 7B Q4 | Openchat | `openchat-3.5-7b` | **GGUF** | 4.07GB |
| Wizard Coder Python 13B Q5 | WizardLM, The Bloke | `wizardcoder-13b` | **GGUF** | 7.33GB |
| OpenAI GPT 3.5 Turbo | OpenAI | `gpt-3.5-turbo` | **API** | - |
| OpenAI GPT 3.5 Turbo 16k 0613 | OpenAI | `gpt-3.5-turbo-16k-0613` | **API** | - |
| OpenAI GPT 4 | OpenAI | `gpt-4` | **API** | - |
| TinyLlama Chat 1.1B Q4 | TinyLlama | `tinyllama-1.1b` | **GGUF** | 638.01MB |
| Deepseek Coder 1.3B Q8 | Deepseek, The Bloke | `deepseek-coder-1.3b` | **GGUF** | 1.33GB |
| Phi-2 3B Q8 | Microsoft | `phi-2-3b` | **GGUF** | 2.76GB |
| Llama 2 Chat 7B Q4 | MetaAI, The Bloke | `llama2-chat-7b-q4` | **GGUF** | 3.80GB |
| CodeNinja 7B Q4 | Beowolx | `codeninja-1.0-7b` | **GGUF** | 4.07GB |
| Noromaid 7B Q5 | NeverSleep | `noromaid-7b` | **GGUF** | 4.07GB |
| Starling alpha 7B Q4 | Berkeley-nest, The Bloke | `starling-7b` | **GGUF** | 4.07GB |
| Yarn Mistral 7B Q4 | NousResearch, The Bloke | `yarn-mistral-7b` | **GGUF** | 4.07GB |
| LlaVa 1.5 7B Q5 K | Mys | `llava-1.5-7b-q5` | **GGUF** | 5.03GB |
| BakLlava 1 | Mys | `bakllava-1` | **GGUF** | 5.36GB |
| Solar Slerp 10.7B Q4 | Jan | `solar-10.7b-slerp` | **GGUF** | 5.92GB |
| LlaVa 1.5 13B Q5 K | Mys | `llava-1.5-13b-q5` | **GGUF** | 9.17GB |
| Deepseek Coder 33B Q5 | Deepseek, The Bloke | `deepseek-coder-34b` | **GGUF** | 18.57GB |
| Phind 34B Q5 | Phind, The Bloke | `phind-34b` | **GGUF** | 18.83GB |
| Yi 34B Q5 | 01-ai, The Bloke | `yi-34b` | **GGUF** | 19.24GB |
| Capybara 200k 34B Q5 | NousResearch, The Bloke | `capybara-34b` | **GGUF** | 22.65GB |
| Dolphin 8x7B Q4 | Cognitive Computations, TheBloke | `dolphin-2.7-mixtral-8x7b` | **GGUF** | 24.62GB |
| Mixtral 8x7B Instruct Q4 | MistralAI, TheBloke | `mixtral-8x7b-instruct` | **GGUF** | 24.62GB |
| Tulu 2 70B Q4 | Lizpreciatior, The Bloke | `tulu-2-70b` | **GGUF** | 38.56GB |
| Llama 2 Chat 70B Q4 | MetaAI, The Bloke | `llama2-chat-70b-q4` | **GGUF** | 40.90GB |

View File

@ -0,0 +1,8 @@
{
"label": "Advanced Models Setup",
"position": 4,
"link": {
"type": "generated-index",
"description": "More info regarding AI models for Jan"
}
}

View File

@ -0,0 +1,65 @@
---
sidebar_position: 1
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
# Customize Engine Settings
In this guide, we'll walk you through the process of customizing your engine settings by tweaking the `nitro.json` file.
1. Navigate to `App Settings` > `Advanced` > `Open App Directory` > the `~/jan/engines` folder.
<Tabs>
<TabItem value="mac" label="MacOS" default>
```sh
cd ~/jan/engines
```
</TabItem>
<TabItem value="windows" label="Windows" default>
```sh
C:/Users/<your_user_name>/jan/engines
```
</TabItem>
<TabItem value="linux" label="Linux" default>
```sh
cd ~/jan/engines
```
</TabItem>
</Tabs>
2. Modify the `nitro.json` file based on your needs. The default settings are shown below.
```json title="~/jan/engines/nitro.json"
{
"ctx_len": 2048,
"ngl": 100,
"cpu_threads": 1,
"cont_batching": false,
"embedding": false
}
```
The table below describes the parameters in the `nitro.json` file.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ctx_len` | **Integer** | The context length for the model operations. |
| `ngl` | **Integer** | The number of GPU layers to use. |
| `cpu_threads` | **Integer** | The number of threads to use for inferencing (CPU mode only). |
| `cont_batching` | **Boolean** | Whether to use continuous batching. |
| `embedding` | **Boolean** | Whether to use embedding in the model. |
:::tip
- By default, `ngl` is set to 100, which offloads all layers to the GPU. If you wish to offload only about 50% of the layers, you can set `ngl` to 15, because most Mistral or Llama 7B models have around 30 layers.
- To utilize the embedding feature, include the JSON parameter `"embedding": true`. This enables Nitro to process inferences with embedding capabilities. Please refer to the [Embedding in the Nitro documentation](https://nitro.jan.ai/features/embed) for a more detailed explanation.
- To utilize the continuous batching feature for boosting throughput and minimizing latency in large language model (LLM) inference, include `"cont_batching": true`. For details, please refer to the [Continuous Batching in the Nitro documentation](https://nitro.jan.ai/features/cont-batch).
:::
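For example, a sketch of `nitro.json` with both features enabled and roughly half the layers offloaded (the values are illustrative; tune them for your model and hardware):
```json title="~/jan/engines/nitro.json"
{
  "ctx_len": 2048,
  "ngl": 15,
  "cpu_threads": 1,
  "cont_batching": true,
  "embedding": true
}
```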
:::info[Assistance and Support]
If you have questions, please join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.
:::

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.3 MiB

View File

@ -0,0 +1,165 @@
---
sidebar_position: 3
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import janModel from './img/jan-model-hub.png';
# Manual Import
:::warning
This is currently under development.
:::
This section will show you how to perform manual import. In this guide, we are using a GGUF model from [HuggingFace](https://huggingface.co/) and our latest model, [Trinity](https://huggingface.co/janhq/trinity-v1-GGUF), as an example.
## Newer versions - nightly versions and v0.4.4+
### 1. Create a Model Folder
1. Navigate to the `App Settings` > `Advanced` > `Open App Directory` > `~/jan/models` folder.
<Tabs>
<TabItem value="mac" label="MacOS" default>
```sh
cd ~/jan/models
```
</TabItem>
<TabItem value="windows" label="Windows" default>
```sh
C:/Users/<your_user_name>/jan/models
```
</TabItem>
<TabItem value="linux" label="Linux" default>
```sh
cd ~/jan/models
```
</TabItem>
</Tabs>
2. In the `models` folder, create a folder with the name of the model.
```sh
mkdir trinity-v1-7b
```
### 2. Drag & Drop the Model
Drag and drop your model binary into this folder, ensuring the `modelname.gguf` filename matches the folder name, e.g. `models/modelname/modelname.gguf`.
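Equivalently, from a terminal (the filename is illustrative and assumes the `trinity-v1-7b` folder created above):
```sh
# Move the downloaded GGUF into the matching model folder
mv ~/Downloads/trinity-v1-7b.gguf ~/jan/models/trinity-v1-7b/trinity-v1-7b.gguf
```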
### 3. Done!
If your model doesn't show up in the **Model Selector** in conversations, **restart the app** or contact us via our [Discord community](https://discord.gg/Dt7MxDyNNZ).
## Older versions - before v0.4.4
### 1. Create a Model Folder
1. Navigate to the `App Settings` > `Advanced` > `Open App Directory` > `~/jan/models` folder.
<Tabs>
<TabItem value="mac" label="MacOS" default>
```sh
cd ~/jan/models
```
</TabItem>
<TabItem value="windows" label="Windows" default>
```sh
C:/Users/<your_user_name>/jan/models
```
</TabItem>
<TabItem value="linux" label="Linux" default>
```sh
cd ~/jan/models
```
</TabItem>
</Tabs>
2. In the `models` folder, create a folder with the name of the model.
```sh
mkdir trinity-v1-7b
```
### 2. Create a Model JSON
Jan follows a folder-based, [standard model template](https://jan.ai/docs/engineering/models/) called a `model.json` to persist the model configurations on your local filesystem.
This means that you can easily reconfigure your models, export them, and share your preferences transparently.
<Tabs>
<TabItem value="mac" label="MacOS" default>
```sh
cd trinity-v1-7b
touch model.json
```
</TabItem>
<TabItem value="windows" label="Windows" default>
```sh
cd trinity-v1-7b
echo {} > model.json
```
</TabItem>
<TabItem value="linux" label="Linux" default>
```sh
cd trinity-v1-7b
touch model.json
```
</TabItem>
</Tabs>
To update `model.json`:
- Match `id` with folder name.
- Ensure GGUF filename matches `id`.
- Set `source.url` to direct download link ending in `.gguf`. In HuggingFace, you can find the direct links in the `Files and versions` tab.
- Verify that you are using the correct `prompt_template`. This is usually provided in the HuggingFace model's description page.
```json title="model.json"
{
"sources": [
{
"filename": "trinity-v1.Q4_K_M.gguf",
"url": "https://huggingface.co/janhq/trinity-v1-GGUF/resolve/main/trinity-v1.Q4_K_M.gguf"
}
],
"id": "trinity-v1-7b",
"object": "model",
"name": "Trinity-v1 7B Q4",
"version": "1.0",
"description": "Trinity is an experimental model merge of GreenNodeLM & LeoScorpius using the Slerp method. Recommended for daily assistance purposes.",
"format": "gguf",
"settings": {
"ctx_len": 4096,
"prompt_template": "{system_message}\n### Instruction:\n{prompt}\n### Response:",
"llama_model_path": "trinity-v1.Q4_K_M.gguf"
},
"parameters": {
"max_tokens": 4096
},
"metadata": {
"author": "Jan",
"tags": ["7B", "Merged"],
"size": 4370000000
},
"engine": "nitro"
}
```
### 3. Download the Model
1. Restart Jan and navigate to the Hub.
2. Locate your model.
3. Click the **Download** button to download the model binary.
<div class="text--center">
<img src={janModel} width={800} alt="jan-model-hub" />
</div>
:::info[Assistance and Support]
If you have questions, please join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.
:::

View File

@ -0,0 +1,143 @@
---
sidebar_position: 2
---
# Remote Server Integration
:::warning
This is currently under development.
:::
This guide will show you how to configure Jan as a client and point it to any remote or local (self-hosted) API server.
## OpenAI Platform Configuration
### 1. Create a Model JSON
1. In `~/jan/models`, create a folder named `gpt-3.5-turbo-16k`.
2. In this folder, add a `model.json` file with the filename `model.json`, the `id` matching the folder name, the `format` set to `api`, the `engine` set to `openai`, and the `state` set to `ready`.
```json title="~/jan/models/gpt-3.5-turbo-16k/model.json"
{
"sources": [
{
"filename": "openai",
"url": "https://openai.com"
}
],
"id": "gpt-3.5-turbo-16k",
"object": "model",
"name": "OpenAI GPT 3.5 Turbo 16k",
"version": "1.0",
"description": "OpenAI GPT 3.5 Turbo 16k model is extremely good",
"format": "api",
"settings": {},
"parameters": {},
"metadata": {
"author": "OpenAI",
"tags": ["General", "Big Context Length"]
},
"engine": "openai"
}
```
:::tip
- You can find the list of available models in the [OpenAI Platform](https://platform.openai.com/docs/models/overview).
- The `id` property needs to match the model name in the list.
- For example, if you want to use the [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo), you must set the `id` property to `gpt-4-1106-preview`.
:::
### 2. Configure OpenAI API Keys
1. Find your API keys in the [OpenAI Platform](https://platform.openai.com/api-keys).
2. Set the OpenAI API keys in `~/jan/engines/openai.json` file.
```json title="~/jan/engines/openai.json"
{
"full_url": "https://api.openai.com/v1/chat/completions",
"api_key": "sk-<your key here>"
}
```
### 3. Start the Model
Restart Jan and navigate to the Hub. Then, select your configured model and start the model.
## Engines with OAI Compatible Configuration
This section will show you how to configure a client connection to a remote or local server, using Jan's API server running the model `mistral-ins-7b-q4` as an example.
:::note
Currently, you can only connect to one OpenAI-compatible endpoint at a time.
:::
### 1. Configure a Client Connection
1. Navigate to the `~/jan/engines` folder.
2. Modify the `openai.json` file.
:::note
Please note that, currently, the code supporting OpenAI-compatible endpoints only reads the `engines/openai.json` file; it does not search any other files in this directory.
:::
3. Configure the `full_url` property with the endpoint of the server you want to connect to. For example, to communicate with Jan's API server, you can configure it as follows:
```json title="~/jan/engines/openai.json"
{
// "full_url": "https://<server-ip-address>:<port>/v1/chat/completions"
"full_url": "https://<server-ip-address>:1337/v1/chat/completions"
// Skip api_key if your local server does not require authentication
// "api_key": "sk-<your key here>"
}
```
### 2. Create a Model JSON
1. In `~/jan/models`, create a folder named `mistral-ins-7b-q4`.
2. In this folder, add a `model.json` file with the filename `model.json`, the `id` matching the folder name, the `format` set to `api`, the `engine` set to `openai`, and the `state` set to `ready`.
```json title="~/jan/models/mistral-ins-7b-q4/model.json"
{
"sources": [
{
"filename": "janai",
"url": "https://jan.ai"
}
],
"id": "mistral-ins-7b-q4",
"object": "model",
"name": "Mistral Instruct 7B Q4 on Jan API Server",
"version": "1.0",
"description": "Jan integration with remote Jan API server",
"format": "api",
"settings": {},
"parameters": {},
"metadata": {
"author": "MistralAI, The Bloke",
"tags": ["remote", "awesome"]
},
"engine": "openai"
}
```
### 3. Start the Model
Restart Jan and navigate to the **Hub**. Locate your model and click the **Use** button.
:::info[Assistance and Support]
If you have questions or want more preconfigured GGUF models, please join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.
:::

View File

@ -0,0 +1,50 @@
---
sidebar_position: 1
hide_table_of_contents: true
---
import installImageURL from '../../static/img/homepage-new/jan-ai-quickstart.png';
# Quickstart
{/* After finish installing, here are steps for using Jan
## Run Jan
<Tabs>
<TabItem value="mac" label="MacOS" default>
1. Search Jan in the Dock and run the program.
</TabItem>
<TabItem value="windows" label="Windows" default>
1. Search Jan in the Start menu and run the program.
</TabItem>
<TabItem value="linux" label="Linux" default>
1. Go to the Jan directory and run the program.
</TabItem>
</Tabs>
2. After you run Jan, the program will take you to the Threads window, with list of threads and each thread is a chatting box between you and the AI model.
3. Go to the **Hub** under the **Thread** section and select the AI model that you want to use. For more info, go to the [Using Models](category/using-models) section.
4. A new thread will be added. You can use Jan in the thread with the AI model that you selected before. */}
### Step 1: Install Jan
Go to [Jan.ai](https://jan.ai/) > Select your operating system > Install the program.
To learn more about the system requirements for your operating system, go to the [Installation guide](/docs/install).
### Step 2: Select an AI Model
Before using Jan, you need to select an AI model that suits your hardware capabilities and specifications.
Each model has its own purpose, capabilities, and requirements.
To select an AI model, go to the **Hub** > select the models that you would like to install.
For more info, go to the [list of supported models](/docs/models-list/).
### Step 3: Use the AI Model
After you install the AI model, you can use it immediately under the **Thread** tab.

View File

@ -4,6 +4,7 @@
require("dotenv").config();
const darkCodeTheme = require("prism-react-renderer/themes/dracula");
const path = require('path');
/** @type {import('@docusaurus/types').Config} */
const config = {
@ -76,6 +77,9 @@ const config = {
],
},
],
// Load the custom changelog plugin
path.resolve(__dirname, 'plugins', 'changelog-plugin'),
],
// The classic preset will relay each option entry to the respective sub plugin/theme.
@ -291,6 +295,11 @@ const config = {
label: "Docs",
position: "right",
items: [
{
type: "docSidebar",
sidebarId: "quickstartSidebar",
label: "Quickstart",
},
{
type: "docSidebar",
sidebarId: "guidesSidebar",
@ -337,6 +346,11 @@ const config = {
respectPrefersColorScheme: false,
},
},
customFields: {
githubAccessToken: process.env.GITHUB_ACCESS_TOKEN || "XXXX",
},
themes: ["@docusaurus/theme-live-codeblock", "@docusaurus/theme-mermaid"],
};

View File

@ -167,6 +167,12 @@ const sidebars = {
dirName: "docs",
},
],
quickstartSidebar: [
{
type: "autogenerated",
dirName: "quickstart",
},
]
};
module.exports = sidebars;

BIN
docs/static/img/homepage-new/9.png vendored Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 57 KiB

BIN
docs/static/img/homepage-new/bg-book.png vendored Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 5.8 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 14 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 13 KiB

BIN
docs/static/img/homepage-new/bg.png vendored Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 120 KiB

BIN
docs/static/img/homepage-new/buku.png vendored Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 525 B

BIN
docs/static/img/homepage-new/chat.png vendored Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 6.5 KiB

BIN
docs/static/img/homepage-new/discord.png vendored Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.2 KiB

BIN
docs/static/img/homepage-new/doa.png vendored Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.7 KiB

BIN
docs/static/img/homepage-new/jan-50.png vendored Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 3.5 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 95 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 83 KiB

BIN
docs/static/img/homepage-new/logo.png vendored Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 6.3 KiB

BIN
docs/static/img/homepage-new/rocket.png vendored Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 5.5 KiB

BIN
docs/static/img/homepage-new/roket.png vendored Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.9 KiB

BIN
docs/static/img/homepage-new/setting.png vendored Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.6 KiB