docs: improve integrate docs

Ho Duc Hieu 2024-01-09 12:47:43 +07:00
parent db550f72ab
commit 70e7057378


@ -12,57 +12,90 @@ keywords:
conversational AI,
no-subscription fee,
large language model,
Continue integration,
VSCode integration,
]
---
{/* Imports */}
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
## Quick Introduction

[Continue](https://continue.dev/docs/intro) is an open-source autopilot for VS Code and JetBrains—the easiest way to code with any LLM.

In this guide, we will show you how to integrate Continue with Jan and VSCode, enhancing your coding experience with the power of a local AI language model.
## Steps to Integrate Continue with Jan and VSCode

### 1. Install Continue for VSCode

You need to install Continue for VSCode. You can follow this [guide to install Continue for VSCode](https://continue.dev/docs/quickstart).
### 2. Enable Jan API Server

To configure Continue to use Jan's local server, you need to enable the Jan API Server with your preferred model. Please follow this [guide to enable the Jan API Server](../05-using-server/01-server.md).
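
Once the server is running, you can do a quick sanity check from the terminal. The snippet below is a minimal sketch that assumes Jan exposes the OpenAI-compatible `/v1/models` route on the default port `1337`:

```sh
# List the models served by Jan's local API server
# (assumes the OpenAI-compatible /v1/models route on the default port 1337)
curl http://localhost:1337/v1/models
```

If the request fails, re-check that the API server is enabled in Jan before moving on.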

### 3. Configure Continue to Use Jan's Local Server

Navigate to the `~/.continue` directory.

<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">
```sh
cd ~/.continue
```
</TabItem>
<TabItem value="win" label="Windows">
```sh
cd C:/Users/<your_user_name>/.continue
```
</TabItem>
<TabItem value="linux" label="Linux">
```sh
cd ~/.continue
```
</TabItem>
</Tabs>
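
If you prefer to stay in the terminal, the snippet below is one way to open the configuration file, assuming VS Code's `code` command-line tool is available on your PATH:

```sh
# Open Continue's configuration file in VS Code
# (assumes the `code` CLI shim is installed)
code ~/.continue/config.json
```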
Edit the `config.json` file and include the following configuration.
```json title="~/.continue/config.json"
{
  "models": [
    {
      // highlight-next-line
      "title": "Jan",
      "provider": "openai",
      // highlight-start
      "model": "mistral-ins-7b-q4",
      "apiKey": "EMPTY",
      "apiBase": "http://localhost:1337"
      // highlight-end
    }
  ]
}
```

- Ensure that the `provider` is `openai`.
- Ensure that the `model` is the same as the one you enabled in the Jan API Server.
- Ensure that the `apiBase` is `http://localhost:1337`.
- Ensure that the `apiKey` is `EMPTY`.
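
To double-check these values outside of VSCode, you can send a test request directly to the configured endpoint. This is a sketch that assumes Jan serves the OpenAI-compatible `/v1/chat/completions` route and that the `mistral-ins-7b-q4` model has been started:

```sh
# Minimal chat completion request against Jan's local server
# (assumes the OpenAI-compatible /v1/chat/completions route)
curl http://localhost:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistral-ins-7b-q4",
    "messages": [{ "role": "user", "content": "Say hello" }]
  }'
```

A JSON response with a completion confirms that Continue and Jan are pointed at the same server and model.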

### 4. Ensure the Model Is Activated in Jan

Navigate to `Settings` > `Models`. Activate the model that you want to use in Jan by clicking the **three dots (⋮)** and selecting **Start Model**.

![Active Models](assets/01-start-model.png)

### 5. Try Out the Integration of Jan and Continue in VSCode

1. Highlight a code snippet and press `Command + Shift + M`.