docs: update integration content & add keywords
Parent: d4490787f9
Commit: 7004a8b936
New binary files:

| File | Size |
| ---- | ---- |
| docs/docs/quickstart/integration/assets/cont.png | 145 KiB |
| docs/docs/quickstart/integration/assets/discordflow.png | 155 KiB |
| docs/docs/quickstart/integration/assets/interpreter.png | 109 KiB |
@@ -1,24 +1,37 @@
---
title: Azure OpenAI
sidebar_position: 3
description: A step-by-step guide on how to integrate Jan with Azure OpenAI.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    integration,
    Azure OpenAI Service,
  ]
---

import azure from './assets/azure.png';

# Azure OpenAI

A step-by-step guide on how to integrate Jan with Azure OpenAI.

---

## Overview

This guide will show you how to integrate the Azure OpenAI Service with Jan.

## How to Integrate Azure OpenAI with Jan

The [Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview?source=docs) offers robust APIs, making it simple for you to incorporate OpenAI's language models into your applications.

<div class="text--center">
  <img src={azure} width={800} alt="azure" />
</div>

You can integrate Azure OpenAI with Jan by following the steps below:

### Step 1: Configure the Azure OpenAI Service API Key

1. Set up and deploy the Azure OpenAI Service.
2. Once the service is deployed, find the endpoint and API key in [Azure OpenAI Studio](https://oai.azure.com/) under `Chat` > `View code`.
3. Set up the endpoint and API key for Azure OpenAI Service in the `~/jan/engines/openai.json` file.
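
The exact schema of `openai.json` can vary between Jan versions, so treat the following as a sketch: the `full_url` and `api_key` field names are assumptions, and the resource name, deployment name, and API version are placeholders you must replace with your own values.

```json title="~/jan/engines/openai.json"
{
  "full_url": "https://<your-resource-name>.openai.azure.com/openai/deployments/<your-deployment-name>/chat/completions?api-version=2023-05-15",
  "api_key": "<your-azure-openai-api-key>"
}
```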

@@ -34,7 +47,7 @@ This guide will show you how to integrate Azure OpenAI Service with Jan. The [Az

### Step 2: Model Configuration

1. Go to the `~/jan/models` directory.
2. Make a new folder called `(your-deployment-name)`, for example `gpt-35-hieu-jan`.
3. Create a `model.json` file inside the folder with the specified configurations:
   - Match the `id` property with both the folder name and your deployment name.
   - Set the `format` property as `api`.

@@ -65,6 +78,29 @@ This guide will show you how to integrate Azure OpenAI Service with Jan. The [Az

}
```
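
The elided portion of the example above follows the same shape as the other `model.json` files in these docs. A hypothetical sketch for a deployment named `gpt-35-hieu-jan` (all values are illustrative, not taken from the original file):

```json title="~/jan/models/gpt-35-hieu-jan/model.json"
{
  "sources": [
    {
      "filename": "azure_openai",
      "url": "https://oai.azure.com/"
    }
  ],
  "id": "gpt-35-hieu-jan",
  "object": "model",
  "name": "Azure OpenAI GPT-3.5",
  "version": "1.0",
  "description": "GPT-3.5 served through an Azure OpenAI deployment.",
  "format": "api",
  "settings": {},
  "parameters": {},
  "engine": "openai"
}
```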

### Regarding `model.json`

- In `settings`, two crucial values are:
  - `ctx_len`: Defined based on the model's context size.
  - `prompt_template`: Defined based on the model's trained template (e.g., ChatML, Alpaca).
- To set up the `prompt_template`:
  1. Visit [Hugging Face](https://huggingface.co/), an open-source machine learning platform.
  2. Find the model that you're using (e.g., [Gemma 7b it](https://huggingface.co/google/gemma-7b-it)).
  3. Review the model card and identify the template.
- In `parameters`, consider the following options. The fields in `parameters` are typically general and can be the same across models. An example is provided below:

```json
"parameters": {
  "temperature": 0.7,
  "top_p": 0.95,
  "stream": true,
  "max_tokens": 4096,
  "frequency_penalty": 0,
  "presence_penalty": 0
}
```
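
For instance, if the model card shows a ChatML-style template, the `settings` block might look like the sketch below. The `{system_message}` and `{prompt}` placeholder names are assumptions; check what your Jan version expects.

```json
"settings": {
  "ctx_len": 4096,
  "prompt_template": "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
}
```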

### Step 3: Start the Model

1. Restart Jan and go to the **Hub**.
2. Find your model and click the **Use** button.

@@ -1,23 +1,22 @@
---
title: Discord
sidebar_position: 5
description: A step-by-step guide on how to integrate Jan with a Discord bot.
---

import discord_repo from './assets/jan-ai-discord-repo.png';
import flow from './assets/discordflow.png';

# Discord

A step-by-step guide on how to integrate Jan with a Discord bot.

---

## How to Integrate a Discord Bot with Jan

A Discord bot can enhance your Discord server interactions. By integrating Jan with it, you can significantly boost responsiveness and user engagement in your server.

<div class="text--center">
  <img src={flow} width={800} alt="discord" />
</div>

To integrate Jan with a Discord bot, follow the steps below:

### Step 1: Clone the Repository

To make this integration successful, clone the Discord bot's [repository](https://github.com/jakobdylanc/discord-llm-chatbot).
@@ -26,7 +25,7 @@ To make this integration successful, it is necessary to clone the discord bot's

<div class="text--center">
  <img src={discord_repo} width={600} alt="jan-ai-discord-repo" />
</div>

### Step 2: Install the Required Libraries

After cloning the repository, run the following command:

@@ -34,28 +33,33 @@ After cloning the repository, run the following command:

```sh
pip install -r requirements.txt
```

### Step 3: Set Up the Environment

1. Create a copy of `.env.example`.
2. Rename it to `.env`.
3. Set the environment variables using the following options:

| Setting | Instructions |
| ------- | ------------ |
| `DISCORD_BOT_TOKEN` | Generate a new Discord application at [discord.com/developers/applications](https://discord.com/developers/applications), obtain a token from the Bot tab, and enable MESSAGE CONTENT INTENT. |
| `LLM` | For [Jan](https://jan.ai/), set to `local/openai/(MODEL_NAME)`, where `(MODEL_NAME)` is your loaded model's name. |
| `CUSTOM_SYSTEM_PROMPT` | Adjust the bot's behavior as needed. |
| `CUSTOM_DISCORD_STATUS` | Set a custom message for the bot's Discord profile. (Max 128 characters) |
| `ALLOWED_CHANNEL_IDS` | Enter Discord channel IDs where the bot can send messages, separated by commas. Leave blank to allow all channels. |
| `ALLOWED_ROLE_IDS` | Enter Discord role IDs allowed to use the bot, separated by commas. Leave blank to allow everyone. Including at least one role also disables DMs. |
| `MAX_IMAGES` | Max number of image attachments allowed per message when using a vision model. (Default: `5`) |
| `MAX_MESSAGES` | Max messages allowed in a reply chain. (Default: `20`) |
| `LOCAL_SERVER_URL` | URL of your local API server for LLMs starting with `local/`. (Default: `http://localhost:5000/v1`) |
| `LOCAL_API_KEY` | API key for your local API server with LLMs starting with `local/`. Usually safe to leave blank. |
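
Putting the table together, a minimal `.env` for a Jan-backed bot might look like the sketch below. The token is a placeholder, the model name assumes `mistral-ins-7b-q4` is loaded in Jan, and `LOCAL_SERVER_URL` is pointed at Jan's API server (which listens on port 1337 elsewhere in these docs) rather than the `http://localhost:5000/v1` default:

```
DISCORD_BOT_TOKEN=<your-bot-token>
LLM=local/openai/mistral-ins-7b-q4
CUSTOM_SYSTEM_PROMPT=You are a helpful assistant.
ALLOWED_CHANNEL_IDS=
ALLOWED_ROLE_IDS=
MAX_IMAGES=5
MAX_MESSAGES=20
LOCAL_SERVER_URL=http://localhost:1337/v1
LOCAL_API_KEY=
```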

### Step 4: Invite the Bot

Invite the bot to your Discord server using the following URL:

```
https://discord.com/api/oauth2/authorize?client_id=(CLIENT_ID)&permissions=412317273088&scope=bot
```

:::note
Replace `CLIENT_ID` with your Discord application's client ID from the OAuth2 tab.
:::

### Step 5: Run the Bot

Run the bot with the following command in your command prompt:
@@ -1,28 +1,30 @@
---
title: Open Interpreter
sidebar_position: 6
description: A step-by-step guide on how to integrate Jan with Open Interpreter.
---

import flow from './assets/interpreter.png';

# Open Interpreter

A step-by-step guide on how to integrate Jan with Open Interpreter.

---

## How to Integrate Open Interpreter with Jan

[Open Interpreter](https://github.com/KillianLucas/open-interpreter/) lets LLMs run code (Python, JavaScript, Shell, and more) locally. You can chat with Open Interpreter through a ChatGPT-like interface in your terminal by running `interpreter` after installing it.

<div class="text--center">
  <img src={flow} width={800} alt="Open Interpreter" />
</div>

To integrate Open Interpreter with Jan, follow the steps below:

### Step 1: Install Open Interpreter

1. Install Open Interpreter by running:

```sh
pip install open-interpreter
```

2. A Rust compiler is required to install Open Interpreter. If it is not already installed, run the following command, or see [this page](https://rustup.rs/) if you are running on Windows:

```zsh
sudo apt install rustc
```
@@ -40,14 +42,12 @@ Before using Open Interpreter, configure the model in `Settings` > `My Model` fo

3. Click **Start Server**.

### Step 3: Set the Open Interpreter Environment

1. For integration, provide the API base (`http://localhost:1337/v1`) and the model ID (e.g., `mistral-ins-7b-q4`) when running Open Interpreter. For example:

```zsh
interpreter --api_base http://localhost:1337/v1 --model mistral-ins-7b-q4
```

> **Open Interpreter is now ready for use!**
@@ -1,37 +1,36 @@
---
title: OpenRouter
sidebar_position: 2
description: A step-by-step guide on how to integrate Jan with OpenRouter.
---

import openrouter from './assets/openrouter.png';

# OpenRouter

A step-by-step guide on how to integrate Jan with OpenRouter.

---

## How to Integrate OpenRouter with Jan

[OpenRouter](https://openrouter.ai/docs#quick-start) is a tool that aggregates AI models. Developers can use its API to work with a wide range of large language models, generative image models, and generative 3D object models.

<div class="text--center">
  <img src={openrouter} width={800} alt="openrouter" />
</div>

To connect Jan with OpenRouter and access remote Large Language Models (LLMs), follow the steps below:

### Step 1: Configure the OpenRouter API Key

1. Find your API key on the [OpenRouter API Keys](https://openrouter.ai/keys) page.
2. Set the OpenRouter API key in the `~/jan/engines/openai.json` file.
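
Since OpenRouter exposes an OpenAI-compatible chat completions endpoint, the engine file might look like the sketch below. The `full_url` and `api_key` field names are assumptions about the `openai.json` schema; replace the key with your own.

```json title="~/jan/engines/openai.json"
{
  "full_url": "https://openrouter.ai/api/v1/chat/completions",
  "api_key": "<your-openrouter-api-key>"
}
```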

### Step 2: Model Configuration

1. Go to the `~/jan/models` directory.
2. Make a new folder called `openrouter-(modelname)`, for example `openrouter-dolphin-mixtral-8x7b`.
3. Inside the folder, create a `model.json` file with the following settings:
   - Set the `id` property to the model ID obtained from OpenRouter.
   - Set the `format` property to `api`.
   - Set the `engine` property to `openai`.
   - Ensure the `state` property is set to `ready`.

```json title="~/jan/models/openrouter-dolphin-mixtral-8x7b/model.json"
{
@@ -55,14 +54,30 @@ This guide will show you how to integrate OpenRouter with Jan, allowing you to u

  "engine": "openai"
}
```

### Regarding `model.json`

- In `settings`, two crucial values are:
  - `ctx_len`: Defined based on the model's context size.
  - `prompt_template`: Defined based on the model's trained template (e.g., ChatML, Alpaca).
- To set up the `prompt_template`:
  1. Visit [Hugging Face](https://huggingface.co/), an open-source machine learning platform.
  2. Find the model that you're using (e.g., [Gemma 7b it](https://huggingface.co/google/gemma-7b-it)).
  3. Review the model card and identify the template.
- In `parameters`, consider the following options. The fields in `parameters` are typically general and can be the same across models. An example is provided below:

```json
"parameters": {
  "temperature": 0.7,
  "top_p": 0.95,
  "stream": true,
  "max_tokens": 4096,
  "frequency_penalty": 0,
  "presence_penalty": 0
}
```

### Step 3: Start the Model

1. Restart Jan and go to the **Hub**.
2. Find your model and click the **Use** button.
@@ -1,38 +1,83 @@
---
title: OpenRouter
sidebar_position: 2
description: A step-by-step guide on how to integrate Jan with OpenRouter.
---

import openrouter from './assets/openrouter.png';

## How to Integrate OpenRouter with Jan

[OpenRouter](https://openrouter.ai/docs#quick-start) is a tool that aggregates AI models. Developers can use its API to work with a wide range of large language models, generative image models, and generative 3D object models.

<div class="text--center">
  <img src={openrouter} width={800} alt="openrouter" />
</div>

To connect Jan with OpenRouter and access remote Large Language Models (LLMs), follow the steps below:

### Step 1: Configure the OpenRouter API Key

1. Find your API key on the [OpenRouter API Keys](https://openrouter.ai/keys) page.
2. Set the OpenRouter API key in the `~/jan/engines/openai.json` file.

### Step 2: Model Configuration

1. Go to the `~/jan/models` directory.
2. Make a new folder called `openrouter-(modelname)`, for example `openrouter-dolphin-mixtral-8x7b`.
3. Inside the folder, create a `model.json` file with the following settings:
   - Set the `id` property to the model ID obtained from OpenRouter.
   - Set the `format` property to `api`.
   - Set the `engine` property to `openai`.
   - Ensure the `state` property is set to `ready`.

```json title="~/jan/models/openrouter-dolphin-mixtral-8x7b/model.json"
{
  "sources": [
    {
      "filename": "openrouter",
      "url": "https://openrouter.ai/"
    }
  ],
  "id": "cognitivecomputations/dolphin-mixtral-8x7b",
  "object": "model",
  "name": "Dolphin 2.6 Mixtral 8x7B",
  "version": "1.0",
  "description": "This is a 16k context fine-tune of Mixtral-8x7b. It excels in coding tasks due to extensive training with coding data and is known for its obedience, although it lacks DPO tuning. The model is uncensored and is stripped of alignment and bias. It requires an external alignment layer for ethical use. Users are cautioned to use this highly compliant model responsibly, as detailed in a blog post about uncensored models at erichartford.com/uncensored-models.",
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "tags": ["General", "Big Context Length"]
  },
  "engine": "openai"
}
```

### Regarding `model.json`

- In `settings`, two crucial values are:
  - `ctx_len`: Defined based on the model's context size.
  - `prompt_template`: Defined based on the model's trained template (e.g., ChatML, Alpaca).
- To set up the `prompt_template`:
  1. Visit [Hugging Face](https://huggingface.co/), an open-source machine learning platform.
  2. Find the model that you're using (e.g., [Gemma 7b it](https://huggingface.co/google/gemma-7b-it)).
  3. Review the model card and identify the template.
- In `parameters`, consider the following options. The fields in `parameters` are typically general and can be the same across models. An example is provided below:

```json
"parameters": {
  "temperature": 0.7,
  "top_p": 0.95,
  "stream": true,
  "max_tokens": 4096,
  "frequency_penalty": 0,
  "presence_penalty": 0
}
```

### Step 3: Start the Model

1. Restart Jan and go to the **Hub**.
2. Find your model and click the **Use** button.
@@ -1,24 +1,38 @@
---
title: Continue
sidebar_position: 1
description: A step-by-step guide on how to integrate Jan with Continue and VS Code.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    Continue integration,
    VSCode integration,
  ]
---

import continue_ask from './assets/jan-ai-continue-ask.png';
import continue_comment from './assets/jan-ai-continue-comment.gif';
import vscode from './assets/vscode.png';
import flow from './assets/cont.png';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Continue Integration for Visual Studio Code

A step-by-step guide on how to integrate Jan with Continue and VS Code.

---

## Overview

This guide showcases how to integrate Continue with Jan and VS Code to boost your coding with a local AI language model.

## How to Integrate Continue with VS Code

[Continue](https://continue.dev/docs/intro) is an open-source autopilot compatible with Visual Studio Code and JetBrains, offering the simplest way to code with any LLM.

<div class="text--center">
  <img src={flow} width={800} alt="Continue" />
</div>

To integrate Jan with a local AI language model, follow the steps below:

### Step 1: Install Continue on Visual Studio Code

Follow this [guide to install the Continue extension on Visual Studio Code](https://continue.dev/docs/quickstart).
@@ -62,7 +76,7 @@ To set up Continue for use with Jan's Local Server, you must activate the Jan AP

      "provider": "openai",
      "model": "mistral-ins-7b-q4",
      "apiKey": "EMPTY",
      "apiBase": "http://localhost:1337"
    }
  ]
}

@@ -70,7 +84,7 @@ To set up Continue for use with Jan's Local Server, you must activate the Jan AP

2. Ensure the file has the following configurations:
   - Ensure `openai` is selected as the `provider`.
   - Match the `model` with the one enabled in the Jan API Server.
   - Set `apiBase` to `http://localhost:1337`.
   - Leave the `apiKey` field set to `EMPTY`.

### Step 4: Ensure the Model Is Activated in Jan

@@ -78,8 +92,7 @@ To set up Continue for use with Jan's Local Server, you must activate the Jan AP

1. Navigate to `Settings` > `Models`.
2. Activate the model you want to use in Jan by clicking the **three dots (⋮)** and selecting **Start Model**.

## Try Out Jan Integration with Continue in Visual Studio Code

### 1. Asking Questions About the Code