docs: update the content of integration & add discord + openinterpreter

This commit is contained in:
Arista Indrajaya 2024-02-27 16:55:33 +07:00
parent b4e2ee72bb
commit b081a91ca3
6 changed files with 147 additions and 33 deletions

View File

@@ -2,19 +2,15 @@
sidebar_position: 3
---
-import azure from './img/azure.png';
+import azure from './assets/azure.png';
-# Azure Raycast
+# Azure OpenAI
## Overview
This guide will show you how to integrate Azure OpenAI Service with Jan. The [Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview?source=docs) offers robust APIs, making it simple for you to incorporate OpenAI's language models into your applications.
## How to Integrate Azure
<div class="text--center">
<img src={azure} width={800} alt="azure" />
</div>
## How to Integrate Azure OpenAI with Jan
### Step 1: Configure Azure OpenAI Service API Key
@@ -31,12 +27,11 @@ This guide will show you how to integrate Azure OpenAI Service with Jan. The [Az
}
```
-### Step 2: Modify a JSON Model
+### Step 2: Model Configuration
1. Go to the `~/jan/models` directory.
2. Make a new folder called `(your-deployment-name)`, like `gpt-35-hieu-jan`.
3. Create a `model.json` file inside the folder with the specified configurations:
- Ensure the file is named `model.json`.
- Match the `id` property with both the folder name and your deployment name.
- Set the `format` property as `api`.
- Choose `openai` for the `engine` property.
@@ -66,6 +61,28 @@ This guide will show you how to integrate Azure OpenAI Service with Jan. The [Az
}
```
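Putting the requirements above together, a minimal `model.json` might look like the following. This is only a sketch: the `id` (`gpt-35-hieu-jan`), display name, context length, and prompt template are placeholder assumptions that you should replace with your own deployment's values.

```json
{
  "id": "gpt-35-hieu-jan",
  "name": "GPT-3.5 (Azure)",
  "format": "api",
  "engine": "openai",
  "settings": {
    "ctx_len": 4096,
    "prompt_template": "<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
  }
}
```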
#### Regarding `model.json`
- In `settings`, two crucial values are:
- `ctx_len`: Defined based on the model's context size.
- `prompt_template`: Defined based on the model's trained template (e.g., ChatML, Alpaca).
- To set up the `prompt_template`:
1. Visit Hugging Face.
2. Locate the model (e.g., [Gemma 7b it](https://huggingface.co/google/gemma-7b-it)).
3. Review the text and identify the template.
- In `parameters`, the fields are typically general and can be shared across models. An example is provided below:
```json
"parameters":{
"temperature": 0.7,
"top_p": 0.95,
"stream": true,
"max_tokens": 4096,
"frequency_penalty": 0,
"presence_penalty": 0
}
```
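For intuition, these parameters are merged into the OpenAI-style chat completion request that gets sent on your behalf. A rough illustration (not part of Jan itself; the model id is the example deployment name from above):

```python
import json

# Generation parameters, mirroring the "parameters" block above
parameters = {
    "temperature": 0.7,
    "top_p": 0.95,
    "stream": True,
    "max_tokens": 4096,
    "frequency_penalty": 0,
    "presence_penalty": 0,
}

def build_chat_request(model_id, user_message):
    """Merge the model's parameters into an OpenAI-style chat request body."""
    return {
        "model": model_id,
        "messages": [{"role": "user", "content": user_message}],
        **parameters,
    }

body = build_chat_request("gpt-35-hieu-jan", "Hello!")
print(json.dumps(body, indent=2))
```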
### Step 3: Start the Model
Restart Jan and go to the **Hub**. Find your model and click the **Use** button.

View File

@@ -0,0 +1,60 @@
---
sidebar_position: 5
---
import discord_repo from './assets/jan-ai-discord-repo.png';
# Discord
## Overview
This tutorial demonstrates how to integrate a Discord bot with Jan.
Using a Discord bot enhances server interaction. Integrating Jan with it can significantly boost responsiveness and user engagement.
## How to Integrate Discord Bot with Jan
### Step 1: Clone the repository
To set up this integration, first clone the Discord bot's [repository](https://github.com/jakobdylanc/discord-llm-chatbot).
<div class="text--center">
<img src={discord_repo} width={600} alt="jan-ai-discord-repo" />
</div>
### Step 2: Install the required libraries
After cloning the repository, run the following command:
```sh
pip install -r requirements.txt
```
### Step 3: Copy `.env.example` to `.env` and set it up
| Setting | Instructions |
| ------- | ------------ |
| DISCORD_BOT_TOKEN | Generate a new Discord application at [discord.com/developers/applications](https://discord.com/developers/applications), obtain a token from the Bot tab, and enable MESSAGE CONTENT INTENT. |
| LLM | For [Jan](https://jan.ai/), set to `local/openai/(MODEL_NAME)`, where `(MODEL_NAME)` is your loaded model's name. |
| CUSTOM_SYSTEM_PROMPT | Adjust the bot's behavior as needed. |
| CUSTOM_DISCORD_STATUS | Set a custom message for the bot's Discord profile. (Max 128 characters) |
| ALLOWED_CHANNEL_IDS | Enter Discord channel IDs where the bot can send messages, separated by commas. Leave blank to allow all channels. |
| ALLOWED_ROLE_IDS | Enter Discord role IDs allowed to use the bot, separated by commas. Leave blank to allow everyone. Including at least one role also disables DMs. |
| MAX_IMAGES | Max number of image attachments allowed per message when using a vision model. (Default: `5`) |
| MAX_MESSAGES | Max messages allowed in a reply chain. (Default: `20`) |
| LOCAL_SERVER_URL | URL of your local API server for LLMs starting with `local/`. (Default: `http://localhost:5000/v1`) |
| LOCAL_API_KEY | API key for your local API server with LLMs starting with `local/`. Usually safe to leave blank. |
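For reference, a filled-in `.env` might look like the sketch below. The token is a placeholder, and the `LLM` value assumes a local model named `mistral-ins-7b-q4` loaded in Jan with its API server on port `1337` (adjust both to match your setup):

```
DISCORD_BOT_TOKEN=your-bot-token-here
LLM=local/openai/mistral-ins-7b-q4
CUSTOM_SYSTEM_PROMPT=You are a helpful assistant.
CUSTOM_DISCORD_STATUS=Chatting with Jan
ALLOWED_CHANNEL_IDS=
ALLOWED_ROLE_IDS=
MAX_IMAGES=5
MAX_MESSAGES=20
LOCAL_SERVER_URL=http://localhost:1337/v1
LOCAL_API_KEY=
```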
### Step 4: Invite the bot to your Discord server using this URL (replace `CLIENT_ID` with your Discord application's client ID from the OAuth2 tab)
```
https://discord.com/api/oauth2/authorize?client_id=(CLIENT_ID)&permissions=412317273088&scope=bot
```
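If you prefer to fill in the URL template programmatically (purely illustrative), it amounts to:

```python
def invite_url(client_id: str) -> str:
    """Fill the bot invite URL template with a Discord application client ID."""
    return (
        "https://discord.com/api/oauth2/authorize"
        f"?client_id={client_id}"
        "&permissions=412317273088&scope=bot"
    )

# Example with a placeholder client ID
print(invite_url("123456789012345678"))
```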
### Step 5: Run the bot
Start the bot with the following command:
```sh
python llmcord.py
```

View File

@@ -0,0 +1,49 @@
---
sidebar_position: 6
---
# Open Interpreter
## Overview
This tutorial illustrates how to integrate with Open Interpreter using Jan. [Open Interpreter](https://github.com/KillianLucas/open-interpreter/) lets LLMs run code (Python, JavaScript, Shell, and more) locally. You can chat with Open Interpreter through a ChatGPT-like interface in your terminal by running `interpreter` after installing.
## How to Integrate Open Interpreter with Jan
### Step 1: Install Open Interpreter
Install Open Interpreter by running:
```sh
pip install open-interpreter
```
A Rust compiler is required to install Open Interpreter. If one is not already installed, run the following command (on Debian/Ubuntu-based Linux), or visit [this page](https://rustup.rs/) if you are running on Windows:
```zsh
sudo apt install rustc
```
### Step 2: Configure Jan's Local API Server
Before using Open Interpreter, configure the model in `Settings` > `My Model` for Jan and activate its local API server.
#### Enabling Jan API Server
1. Click the `<>` button to access the **Local API Server** section in Jan.
2. Configure the server settings, including **IP Port**, **Cross-Origin-Resource-Sharing (CORS)**, and **Verbose Server Logs**.
3. Click **Start Server**.
### Step 3: Run Open Interpreter with Specific Parameters
For integration, provide the API Base (`http://localhost:1337/v1`) and the model ID (e.g., `mistral-ins-7b-q4`) when running Open Interpreter.
For instance, if using **Mistral Instruct 7B Q4** as the model, execute:
```zsh
interpreter --api_base http://localhost:1337/v1 --model mistral-ins-7b-q4
```
Open Interpreter is now ready for use!

View File

@@ -2,8 +2,8 @@
sidebar_position: 2
---
-import openrouterGIF from './img/jan-ai-openrouter.gif';
-import openrouter from './img/openrouter.png';
+import openrouterGIF from './assets/jan-ai-openrouter.gif';
+import openrouter from './assets/openrouter.png';
# OpenRouter
@@ -13,10 +13,6 @@ This guide will show you how to integrate OpenRouter with Jan, allowing you to u
## How to Integrate OpenRouter
<div class="text--center">
<img src={openrouter} width={800} alt="openrouter" />
</div>
### Step 1: Configure OpenRouter API key
1. Find your API key in the OpenRouter dashboard under **Keys**.

View File

@@ -2,8 +2,8 @@
sidebar_position: 4
---
-import raycast from './img/raycast.png';
-import raycastImage from './img/raycast-image.png';
+import raycast from './assets/raycast.png';
+import raycastImage from './assets/raycast-image.png';
# Raycast
@@ -12,10 +12,6 @@ import raycastImage from './img/raycast-image.png';
## How to Integrate Raycast
<div class="text--center">
<img src={raycast} width={800} alt="raycast" />
</div>
### Step 1: Download the TinyLlama model from Jan
Go to the **Hub** and download the TinyLlama model. The model will be available at `~/jan/models/tinyllama-1.1b`.

View File

@@ -1,9 +1,9 @@
---
sidebar_position: 1
---
-import continue_ask from './img/jan-ai-continue-ask.png';
-import continue_comment from './img/jan-ai-continue-comment.gif';
-import vscode from './img/vscode.png';
+import continue_ask from './assets/jan-ai-continue-ask.png';
+import continue_comment from './assets/jan-ai-continue-comment.gif';
+import vscode from './assets/vscode.png';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
@@ -13,11 +13,7 @@ import TabItem from '@theme/TabItem';
This guide showcases how to integrate Continue with Jan and VS Code to enhance your coding with a local AI model. [Continue](https://continue.dev/docs/intro) is an open-source autopilot compatible with Visual Studio Code and JetBrains, offering the simplest way to code with any LLM (large language model).
## How to Integrate with Continue
<div class="text--center">
<img src={vscode} width={800} alt="vscode" />
</div>
## How to Integrate with Continue VS Code
### Step 1: Installing Continue on Visual Studio Code