Updated continue-dev documentation
parent d05e5a5dae · commit 229bed9955

@@ -36,11 +36,15 @@ Follow this [guide](https://continue.dev/docs/quickstart) to install the Continu

 To set up Continue for use with Jan's Local Server, you must activate the Jan API Server with your chosen model.

-1. Press the `<>` button. Jan will take you to the **Local API Server** section.
+1. Press the `⚙️ Settings` button.

-2. Setup the server, which includes the **IP Port**, **Cross-Origin-Resource-Sharing (CORS)** and **Verbose Server Logs**.
+2. Locate `Local API Server`.

-3. Press the **Start Server** button
+3. Set up the server, which includes the **IP Port**, **Cross-Origin-Resource-Sharing (CORS)**, and **Verbose Server Logs**.
+4. Include your user-defined API Key.
+5. Press the **Start Server** button.
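Once the server from the new steps is running, it can be checked from outside the editor. The sketch below (not part of the docs; it assumes the default port `1337` and a hypothetical user-defined key `hello` from step 4) builds the OpenAI-compatible `GET /models` request that Jan's Local API Server answers:

```python
# Reachability sketch for Jan's Local API Server. Port 1337 is the default
# from the guide; "hello" is a placeholder user-defined API key.
import urllib.request

def build_models_request(api_base: str, api_key: str) -> urllib.request.Request:
    """Build the OpenAI-style GET /models request against the Jan server."""
    return urllib.request.Request(
        f"{api_base}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_models_request("http://localhost:1337/v1", "hello")
# With the server running: urllib.request.urlopen(req) returns the model list.
```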

 ### Step 3: Configure Continue to Use Jan's Local Server

@@ -64,30 +68,35 @@ To set up Continue for use with Jan's Local Server, you must activate the Jan AP
 </Tabs.Tab>
 </Tabs>

-```json title="~/.continue/config.json"
-{
-  "models": [
-    {
-      "title": "Jan",
-      "provider": "openai",
-      "model": "mistral-ins-7b-q4",
-      "apiKey": "EMPTY",
-      "apiBase": "http://localhost:1337/v1"
-    }
-  ]
-}
+```yaml title="~/.continue/config.yaml"
+name: Local Assistant
+version: 1.0.0
+schema: v1
+models:
+  - name: Jan
+    provider: openai
+    model: #MODEL_NAME (e.g. qwen3:0.6b)
+    apiKey: #YOUR_USER_DEFINED_API_KEY_HERE (e.g. hello)
+    apiBase: http://localhost:1337/v1
+context:
+  - provider: code
+  - provider: docs
+  - provider: diff
+  - provider: terminal
+  - provider: problems
+  - provider: folder
+  - provider: codebase
 ```

 2. Ensure the file has the following configurations:
    - Ensure `openai` is selected as the `provider`.
    - Match the `model` with the one enabled in the Jan API Server.
-   - Set `apiBase` to `http://localhost:1337`.
-   - Leave the `apiKey` field to `EMPTY`.
+   - Set `apiBase` to `http://localhost:1337/v1`.

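The checklist above can be expressed as a small validation sketch. The dict mirrors the `config.yaml` shown earlier; `qwen3:0.6b` and `hello` are the example placeholders, not required values:

```python
# Sanity-check the Continue config fields from the checklist above.
config = {
    "name": "Local Assistant",
    "version": "1.0.0",
    "schema": "v1",
    "models": [
        {
            "name": "Jan",
            "provider": "openai",
            "model": "qwen3:0.6b",   # placeholder model id
            "apiKey": "hello",       # placeholder user-defined key
            "apiBase": "http://localhost:1337/v1",
        }
    ],
}

def check_config(cfg: dict) -> list:
    """Return a list of problems; an empty list means the checklist passes."""
    problems = []
    for m in cfg.get("models", []):
        if m.get("provider") != "openai":
            problems.append(f"{m.get('name')}: provider must be 'openai'")
        if not m.get("apiBase", "").endswith("/v1"):
            problems.append(f"{m.get('name')}: apiBase should end with /v1")
    return problems

print(check_config(config))  # empty when the config is consistent
```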

 ### Step 4: Ensure the Using Model Is Activated in Jan

-1. Navigate to `Settings` > `My Models`.
-2. Click the **three dots (⋮)** button.
+1. Navigate to `Settings` > `Model Providers`.
+2. Under Llama.cpp, find the model that you want to use.
 3. Select the **Start Model** button to activate the model.

 </Steps>
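With the model started, Continue talks to Jan through the OpenAI-compatible `/chat/completions` endpoint at `apiBase`. A hedged sketch of that request, reusing the same placeholder model id and key as above:

```python
# Build the chat request Continue sends once the model is started.
# "qwen3:0.6b" and "hello" are the placeholders from the config example.
import json
import urllib.request

def build_chat_request(api_base, api_key, model, prompt):
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{api_base}/chat/completions",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_chat_request("http://localhost:1337/v1", "hello", "qwen3:0.6b", "Hi")
# urllib.request.urlopen(req)  # uncomment with the server and model running
```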