docs: update content integration section by adding keywords and descriptions
This commit is contained in:
parent
774db85606
commit
8e77307b59
@@ -17,17 +9,9 @@ keywords:
  ]
---

import azure from './assets/azure.png';

## How to Integrate Azure OpenAI with Jan

The [Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview?source=docs) offers robust APIs that make it simple to incorporate OpenAI's language models into your applications.

<div class="text--center">
  <img src={azure} width={800} alt="azure" />
</div>

You can integrate Azure OpenAI with Jan by following the steps below:

### Step 1: Configure Azure OpenAI Service API Key
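As a sketch of what this step typically involves, you point Jan's OpenAI-compatible engine file at your Azure endpoint. The field names follow Jan's `openai.json` engine format; the resource name, deployment name, and API version below are placeholders, not values from this guide:

```json title="~/jan/engines/openai.json"
{
  "full_url": "https://<your-resource>.openai.azure.com/openai/deployments/<your-deployment>/chat/completions?api-version=2023-05-15",
  "api_key": "<your-azure-openai-api-key>"
}
```

Replace each placeholder with the endpoint, deployment name, and key from your Azure portal.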
@@ -103,4 +95,4 @@ You can integrate Azure OpenAI with Jan by following the steps below:

### Step 3: Start the Model

1. Restart Jan and go to the **Hub**.
2. Find your model in the Jan application and click the **Use** button.
@@ -4,27 +4,16 @@ sidebar_position: 5
description: A step-by-step guide on how to integrate Jan with a Discord bot.
---

import discord_repo from './assets/jan-ai-discord-repo.png';
import flow from './assets/discordflow.png';

## How to Integrate a Discord Bot with Jan

A Discord bot can enhance interactions on your Discord server. By integrating Jan with one, you can significantly boost responsiveness and user engagement in your server.

<div class="text--center">
  <img src={flow} width={800} alt="discord" />
</div>

To integrate Jan with a Discord bot, follow the steps below:

### Step 1: Clone the Repository

To make this integration work, clone the Discord bot's [repository](https://github.com/jakobdylanc/discord-llm-chatbot).

<div class="text--center">
  <img src={discord_repo} width={600} alt="jan-ai-discord-repo" />
</div>

### Step 2: Install the Required Libraries

After cloning the repository, run the following command:
@@ -15,7 +15,6 @@ keywords:
    LM Studio integration,
  ]
---
import flow from './assets/lmstudio.png';

## How to Integrate LM Studio with Jan

@@ -23,10 +22,6 @@ import flow from './assets/lmstudio.png';

1. Integrate the LM Studio server with the Jan UI.
2. Migrate your downloaded model from LM Studio to Jan.
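For the first method, the general idea is to point Jan's OpenAI-compatible engine file at LM Studio's local server. This is an illustrative sketch only (LM Studio's bundled server listens on port 1234 by default; check your LM Studio server settings for the actual address):

```json title="~/jan/engines/openai.json"
{
  "full_url": "http://localhost:1234/v1/chat/completions"
}
```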

<div class="text--center">
  <img src={flow} width={800} alt="LM Studio" />
</div>

To integrate LM Studio with Jan, follow the steps below:

:::note
@@ -16,18 +16,12 @@ keywords:
  ]
---

import flow from './assets/mistral.png';

## How to Integrate Mistral AI with Jan

[Mistral AI](https://docs.mistral.ai/) provides two ways to use their Large Language Models (LLMs):
1. API
2. Open-source models on Hugging Face.
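If you use the API route, the integration pattern is to point Jan's OpenAI-compatible engine file at Mistral's chat completions endpoint with your API key. A sketch, with a placeholder key:

```json title="~/jan/engines/openai.json"
{
  "full_url": "https://api.mistral.ai/v1/chat/completions",
  "api_key": "<your-mistral-api-key>"
}
```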

<div class="text--center">
  <img src={flow} width={800} alt="Mistral" />
</div>

To integrate Jan with Mistral AI, follow the steps below:

:::note
@@ -16,18 +16,12 @@ keywords:
  ]
---

import flow from './assets/ollama.png';

## How to Integrate Ollama with Jan

Ollama provides large language models that you can run locally. There are two methods to integrate Ollama with Jan:
1. Integrate the Ollama server with Jan.
2. Migrate the downloaded model from Ollama to Jan.
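For the first method, the rough shape is: start the Ollama server (`ollama serve`) and point Jan's OpenAI-compatible engine file at it. This is a sketch; Ollama listens on port 11434 by default, and the OpenAI-compatible endpoint shown assumes a recent Ollama version:

```json title="~/jan/engines/openai.json"
{
  "full_url": "http://localhost:11434/v1/chat/completions"
}
```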

<div class="text--center">
  <img src={flow} width={800} alt="Ollama" />
</div>

To integrate Ollama with Jan, follow the steps below:

:::note
@@ -4,17 +4,10 @@ sidebar_position: 6
description: A step-by-step guide on how to integrate Jan with Open Interpreter.
---

import flow from './assets/interpreter.png';

## How to Integrate Open Interpreter with Jan

[Open Interpreter](https://github.com/KillianLucas/open-interpreter/) lets LLMs run code (Python, JavaScript, Shell, and more) locally. After installing, you can chat with Open Interpreter through a ChatGPT-like interface in your terminal by running `interpreter`.

<div class="text--center">
  <img src={flow} width={800} alt="Open Interpreter" />
</div>

To integrate Open Interpreter with Jan, follow the steps below:

### Step 1: Install Open Interpreter
@@ -4,17 +4,11 @@ sidebar_position: 2
description: A step-by-step guide on how to integrate Jan with OpenRouter.
---

import openrouterGIF from './assets/jan-ai-openrouter.gif';
import openrouter from './assets/openrouter.png';

## How to Integrate OpenRouter with Jan

[OpenRouter](https://openrouter.ai/docs#quick-start) is a tool that aggregates AI models. Developers can use its API to work with diverse large language models, generative image models, and generative 3D object models.

<div class="text--center">
  <img src={openrouter} width={800} alt="openrouter" />
</div>

To connect Jan to remote Large Language Models (LLMs) through OpenRouter, follow the steps below:

### Step 1: Configure the OpenRouter API Key
@@ -78,6 +72,5 @@ To connect Jan with OpenRouter for accessing remote Large Language Models (LLMs)
```

### Step 3: Start the Model

1. Restart Jan and go to the **Hub**.
2. Find your model and click on the **Use** button.
@@ -1,83 +1,27 @@
---
title: OpenRouter
sidebar_position: 2
description: A step-by-step guide on how to integrate Jan with OpenRouter.
title: Raycast
sidebar_position: 4
description: A step-by-step guide on how to integrate Jan with Raycast.
---

import openrouterGIF from './assets/jan-ai-openrouter.gif';
import openrouter from './assets/openrouter.png';

## How to Integrate OpenRouter with Jan
## How to Integrate Raycast
[Raycast](https://www.raycast.com/) is a productivity tool designed for macOS that enhances workflow efficiency by providing quick access to various tasks and functionalities through a keyboard-driven interface. To integrate Raycast with Jan, follow the steps below:

[OpenRouter](https://openrouter.ai/docs#quick-start) is a tool that aggregates AI models. Developers can use its API to work with diverse large language models, generative image models, and generative 3D object models.
### Step 1: Download the TinyLlama Model

<div class="text--center">
  <img src={openrouter} width={800} alt="openrouter" />
</div>
1. Go to the **Hub** and download the TinyLlama model.
2. The model will be available at `~/jan/models/tinyllama-1.1b`.

To connect Jan to remote Large Language Models (LLMs) through OpenRouter, follow the steps below:
### Step 2: Clone and Run the Program

### Step 1: Configure the OpenRouter API Key
1. Clone this [GitHub repository](https://github.com/InNoobWeTrust/nitro-raycast).
2. Execute the project using the following command:

1. Find your API keys on the [OpenRouter API Keys](https://openrouter.ai/keys) page.
2. Set the OpenRouter API key in the `~/jan/engines/openai.json` file.
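Concretely, setting the key follows the same engine-file pattern used elsewhere in these docs; the URL and key below are illustrative placeholders:

```json title="~/jan/engines/openai.json"
{
  "full_url": "https://openrouter.ai/api/v1/chat/completions",
  "api_key": "<your-openrouter-api-key>"
}
```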

### Step 2: Model Configuration

1. Go to the directory `~/jan/models`.
2. Make a new folder called `openrouter-(modelname)`, like `openrouter-dolphin-mixtral-8x7b`.
3. Inside the folder, create a `model.json` file with the following settings:
   - Set the `id` property to the model ID obtained from OpenRouter.
   - Set the `format` property to `api`.
   - Set the `engine` property to `openai`.
   - Ensure the `state` property is set to `ready`.

```json title="~/jan/models/openrouter-dolphin-mixtral-8x7b/model.json"
{
  "sources": [
    {
      "filename": "openrouter",
      "url": "https://openrouter.ai/"
    }
  ],
  "id": "cognitivecomputations/dolphin-mixtral-8x7b",
  "object": "model",
  "name": "Dolphin 2.6 Mixtral 8x7B",
  "version": "1.0",
  "description": "This is a 16k context fine-tune of Mixtral-8x7b. It excels in coding tasks due to extensive training with coding data and is known for its obedience, although it lacks DPO tuning. The model is uncensored and is stripped of alignment and bias. It requires an external alignment layer for ethical use. Users are cautioned to use this highly compliant model responsibly, as detailed in a blog post about uncensored models at erichartford.com/uncensored-models.",
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "tags": ["General", "Big Context Length"]
  },
  "engine": "openai"
}
```
```sh title="Node.js"
npm i && npm run dev
```

### Regarding `model.json`
### Step 3: Search for Nitro and Run the Model

- In `settings`, two crucial values are:
  - `ctx_len`: Defined based on the model's context size.
  - `prompt_template`: Defined based on the model's trained template (e.g., ChatML, Alpaca).
- To set up the `prompt_template`:
  1. Visit [Hugging Face](https://huggingface.co/), an open-source machine learning platform.
  2. Find the model that you're using (e.g., [Gemma 7b it](https://huggingface.co/google/gemma-7b-it)).
  3. Review the model card and identify the template.
- In `parameters`, consider the following options. The fields in `parameters` are typically general and can be the same across models. An example is provided below:

```json
"parameters": {
  "temperature": 0.7,
  "top_p": 0.95,
  "stream": true,
  "max_tokens": 4096,
  "frequency_penalty": 0,
  "presence_penalty": 0
}
```
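Putting the two `settings` values together, a filled-in block might look like the following sketch (the context length and ChatML-style template are illustrative; derive the real values from your model's card):

```json
"settings": {
  "ctx_len": 4096,
  "prompt_template": "<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
}
```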

### Step 3: Start the Model

1. Restart Jan and go to the **Hub**.
2. Find your model and click on the **Use** button.

Search for `Nitro` in the program, and you can then use the models from Jan in Raycast.
@@ -16,10 +16,7 @@ keywords:
    VSCode integration,
  ]
---
import continue_ask from './assets/jan-ai-continue-ask.png';
import continue_comment from './assets/jan-ai-continue-comment.gif';
import vscode from './assets/vscode.png';
import flow from './assets/cont.png';

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
@@ -27,10 +24,6 @@ import TabItem from '@theme/TabItem';

[Continue](https://continue.dev/docs/intro) is an open-source autopilot compatible with Visual Studio Code and JetBrains, offering the simplest way to code with any local LLM.

<div class="text--center">
  <img src={flow} width={800} alt="Continue" />
</div>

To integrate Jan with a local AI language model, follow the steps below:
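A sketch of how the pieces connect: Continue speaks to any OpenAI-compatible server, and Jan's local API server exposes one. The config below assumes Jan's default server address (`http://localhost:1337/v1`) and Continue's `config.json` model-entry format, both of which you should verify against your installed versions; the `model` value is a placeholder for whichever model ID you run in Jan:

```json title="~/.continue/config.json"
{
  "models": [
    {
      "title": "Jan",
      "provider": "openai",
      "model": "<your-jan-model-id>",
      "apiBase": "http://localhost:1337/v1"
    }
  ]
}
```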

### Step 1: Installing Continue on Visual Studio Code
@@ -99,15 +92,8 @@ To set up Continue for use with Jan's Local Server, you must activate the Jan API Server

1. Highlight a code snippet and press `Command + Shift + M` to open the Left Panel.
2. Select Jan at the bottom and ask a question about the code, for example, `Explain this code`.

<div class="text--center">
  <img src={continue_ask} width={800} alt="jan-ai-continue-ask" />
</div>

### 2. Editing Code with the Help of a Large Language Model

1. Select a code snippet and use `Command + Shift + L`.
2. Enter your editing request, such as `Add comments to this code`.

<div class="text--center">
  <img src={continue_comment} width={800} alt="jan-ai-continue-comment" />
</div>