updated remote engines pages
BIN  docs/src/pages/docs/_assets/anthropic.png (new file, 148 KiB)
BIN  docs/src/pages/docs/_assets/cohere.png (new file, 145 KiB)
BIN  docs/src/pages/docs/_assets/groq.png (new file, 159 KiB)
BIN  docs/src/pages/docs/_assets/martian.png (new file, 145 KiB)
BIN  docs/src/pages/docs/_assets/mistralai.png (new file, 146 KiB)
BIN  docs/src/pages/docs/_assets/nvidia-nim.png (new file, 145 KiB)
BIN  docs/src/pages/docs/_assets/openai.png (new file, 152 KiB)
BIN  docs/src/pages/docs/_assets/openrouter.png (new file, 158 KiB)
@@ -2,6 +2,5 @@
  "llama-cpp": {
    "title": "llama.cpp",
    "href": "/docs/local-engines/llama-cpp"
  },
  }
}
@@ -17,7 +17,7 @@ keywords:
    thread history,
  ]
---

import { Callout, Steps } from 'nextra/components'
import { Settings, EllipsisVertical, Plus, FolderOpen, Pencil } from 'lucide-react'
@@ -184,7 +184,8 @@ Modify model parameters under the settings array. Key fields to configure:
    "stream": true,
    "max_tokens": 4096,
    "frequency_penalty": 0,
    "presence_penalty": 0
  }
  ```
</Steps>
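The settings fields above map directly onto the body of an OpenAI-style chat-completions request. The sketch below builds such a payload; the model id and message are illustrative assumptions, not values from Jan's config.

```python
import json

# Sketch: Jan's model settings (stream, max_tokens, frequency_penalty,
# presence_penalty) become top-level fields of an OpenAI-compatible
# /v1/chat/completions request body. The model id below is hypothetical.
settings = {
    "stream": True,
    "max_tokens": 4096,
    "frequency_penalty": 0,
    "presence_penalty": 0,
}

payload = {
    "model": "mistral-ins-7b-q4",  # hypothetical model id
    "messages": [{"role": "user", "content": "Hello!"}],
    **settings,
}

body = json.dumps(payload)
print(sorted(payload.keys()))
```

Serializing with `json.dumps` gives the exact wire format a client would POST to the endpoint.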

@@ -204,8 +205,8 @@ When using cloud models, be aware of any associated costs and rate limits from t
</Callout>

Jan supports connecting to various AI cloud providers that are OpenAI API-compatible, including OpenAI (GPT-4, o1, ...), Anthropic (Claude), Groq, Mistral, and more.
1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>)
2. Under the **Remote Engines** section in the left sidebar, choose your preferred engines (OpenAI, Anthropic, etc.)
3. Enter your API key
4. The activated cloud models will be available in your model selector in **Threads**
@@ -111,10 +111,9 @@ Jan supports both local and cloud AI models. You can connect to cloud AI service
See [Remote APIs](/docs/remote-models/openai) for detailed configuration.

<br/>

![Connect Remote APIs](./_assets/quick-start-03.png)

<br/>

</Steps>

## What's Next?
@ -1,38 +1,39 @@
|
||||
{
|
||||
"openai": {
|
||||
"title": "OpenAI",
|
||||
"href": "/docs/remote-models/openai"
|
||||
},
|
||||
"azure": {
|
||||
"title": "Azure OpenAI API",
|
||||
"href": "/docs/remote-models/azure",
|
||||
"display": "hidden"
|
||||
},
|
||||
"groq": { "title": "Groq", "href": "/docs/remote-models/groq" },
|
||||
"mistralai": {
|
||||
"title": "Mistral AI",
|
||||
"href": "/docs/remote-models/mistralai"
|
||||
},
|
||||
"openrouter": { "title": "OpenRouter", "href": "/docs/remote-models/openrouter" },
|
||||
"generic-openai": { "title": "Any OpenAI Compatible API", "href": "/docs/remote-models/generic-openai", "display": "hidden"},
|
||||
"martian": {
|
||||
"title": "Martian",
|
||||
"href": "/docs/remote-models/martian"
|
||||
"anthropic": {
|
||||
"title": "Anthropic",
|
||||
"href": "/docs/remote-models/anthropic"
|
||||
},
|
||||
"cohere": {
|
||||
"title": "Cohere",
|
||||
"href": "/docs/remote-models/cohere"
|
||||
},
|
||||
"anthropic": {
|
||||
"title": "Anthropic",
|
||||
"href": "/docs/remote-models/anthropic"
|
||||
"groq": {
|
||||
"title": "Groq",
|
||||
"href": "/docs/remote-models/groq"
|
||||
},
|
||||
"martian": {
|
||||
"title": "Martian",
|
||||
"href": "/docs/remote-models/martian"
|
||||
},
|
||||
"mistralai": {
|
||||
"title": "Mistral AI",
|
||||
"href": "/docs/remote-models/mistralai"
|
||||
},
|
||||
"nvidia-nim": {
|
||||
"title": "NVIDIA NIM",
|
||||
"title": "Nvidia NIM",
|
||||
"href": "/docs/remote-models/nvidia-nim"
|
||||
},
|
||||
"openai": {
|
||||
"title": "OpenAI",
|
||||
"href": "/docs/remote-models/openai"
|
||||
},
|
||||
"openrouter": {
|
||||
"title": "OpenRouter",
|
||||
"href": "/docs/remote-models/openrouter"
|
||||
},
|
||||
"triton": {
|
||||
"title": "Triton-TRT-LLM",
|
||||
"href": "/docs/remote-models/triton"
|
||||
"href": "/docs/remote-models/triton",
|
||||
"display": "hidden"
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -16,46 +16,71 @@ keywords:
---

import { Callout, Steps } from 'nextra/components'
import { Settings, Plus } from 'lucide-react'

# Anthropic

Jan supports [Anthropic](https://anthropic.com/) API integration, allowing you to use Claude models (Claude 3, Claude 2.1, and more) through Jan's interface.

## Integrate Anthropic API with Jan

<Steps>
### Step 1: Get Your API Key
1. Visit the [Anthropic Console](https://console.anthropic.com/settings/keys) and sign in
2. Create and copy a new API key, or copy your existing one

<Callout type='info'>
Ensure your API key has sufficient credits
</Callout>

### Step 2: Configure Jan
There are two ways to add your Anthropic API key in Jan:

**Through Threads:**
1. In Threads, click the **Model** tab in the **right sidebar** or the **model selector** in the input field
2. Once the selector pops up, choose the **Cloud** tab
3. Click the **Add** (<Plus width={16} height={16} style={{display:"inline"}}/>) icon next to **Anthropic**
4. Once you are directed to Anthropic settings, insert your **API Key**

**Through Settings:**
1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>)
2. Under **Remote Engines**, select **Anthropic**
3. Insert your **API Key**

<br/>
![Anthropic](./_assets/anthropic.png)
<br/>

### Step 3: Start Using Anthropic's Models

1. Open any existing **Thread** or create a new one
2. Select an Anthropic model from the **model selector**
3. Start chatting
</Steps>

## Available Anthropic Models

Jan automatically includes Anthropic's available models. If you want to use a specific Anthropic model that you cannot find in **Jan**, follow the instructions in [Manual Setup](/docs/models/manage-models#4-manual-setup) to add custom models:
- See the list of available models in [Anthropic Models](https://docs.anthropic.com/claude/docs/models-overview).
- The `id` property must match the model name in the list. For example, `claude-3-opus-20240229`, `claude-3-sonnet-20240229`, or `claude-2.1`.
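As a concrete illustration of the `id` rule above, the sketch below builds a minimal manual-setup entry. Only the `id` value is quoted from Anthropic's model list; the other fields are hypothetical stand-ins for Jan's model configuration layout.

```python
import json

# Minimal sketch of a manual-setup model entry. Only "id" comes from
# Anthropic's model list; "engine" and "name" are illustrative assumptions.
entry = {
    "id": "claude-3-opus-20240229",
    "engine": "anthropic",    # assumed engine identifier
    "name": "Claude 3 Opus",  # free-form display name
}

print(json.dumps(entry, indent=2))
```

If the `id` differs from Anthropic's published model name by even one character, the provider will reject requests for that model.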

## Troubleshooting

Common issues and solutions:

**1. API Key Issues**
- Verify your API key is correct and not expired
- Check if you have billing set up on your Anthropic account
- Ensure you have access to the model you're trying to use

**2. Connection Problems**
- Check your internet connection
- Verify Anthropic's system status
- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs)

**3. Model Unavailable**
- Confirm your API key has access to the model
- Check if you're using the correct model ID
- Verify your Anthropic account has the necessary permissions

Need more help? Join our [Discord community](https://discord.gg/FTk2MvZwJH) or check the [Anthropic documentation](https://docs.anthropic.com/claude/docs).
@@ -1,58 +0,0 @@
---
title: Azure OpenAI
description: A step-by-step guide on integrating Jan with Azure OpenAI.
keywords:
  [
    Jan,
    Customizable Intelligence, LLM,
    local AI,
    privacy focus,
    free and open source,
    private and offline,
    conversational AI,
    no-subscription fee,
    large language models,
    integration,
    Azure OpenAI Service,
  ]
---

import { Callout, Steps } from 'nextra/components'

# Azure OpenAI API

## How to Integrate Azure OpenAI API with Jan
This guide provides step-by-step instructions for integrating the Azure OpenAI API with Jan, allowing users to utilize Azure's capabilities within Jan's conversational interface.

## Integration Steps
<Steps>
### Step 1: Configure OpenAI API Key
1. Obtain an OpenAI API Key from your [OpenAI Platform](https://platform.openai.com/api-keys) dashboard.
2. Copy your **OpenAI API Key**.
3. There are three ways to configure your API Key in the Jan app:
   - Navigate to the **Jan app** > **Gear Icon (⚙️)** > **My Models** tab > **Add Icon (➕)** next to **OpenAI**.
   - Navigate to the **Jan app** > **Thread** > **Model** tab > **Add Icon (➕)** next to **OpenAI**.
   - Navigate to the **Jan app** > **Gear Icon (⚙️)** > **OpenAI** section under Model Providers.
4. Insert your **OpenAI API Key**.

<Callout type='info'>
The **OpenAI** fields can be used for any OpenAI-compatible API.
</Callout>

### Step 2: Start Chatting with the Model

1. Select the OpenAI model you want to use.
2. Specify the model's parameters.
3. Start the conversation with the OpenAI model.

</Steps>

## Troubleshooting

If you encounter any issues during the integration process or while using OpenAI with Jan, consider the following troubleshooting steps:

- Double-check your API credentials to ensure they are correct.
- Check for error messages or logs that may provide insight into the issue.
- Reach out to Azure OpenAI API support for assistance if needed.
@@ -16,46 +16,71 @@ keywords:
---

import { Callout, Steps } from 'nextra/components'
import { Settings, Plus } from 'lucide-react'

# Cohere

Jan supports [Cohere](https://cohere.com/) API integration, allowing you to use Cohere's models (Command, Command R, and more) through Jan's interface.

## Integrate Cohere API with Jan

<Steps>
### Step 1: Get Your API Key
1. Visit the [Cohere Dashboard](https://dashboard.cohere.com/api-keys) and sign in
2. Create and copy a new API key, or copy your existing one

<Callout type='info'>
Ensure your API key has sufficient credits
</Callout>

### Step 2: Configure Jan
There are two ways to add your Cohere API key in Jan:

**Through Threads:**
1. In Threads, click the **Model** tab in the **right sidebar** or the **model selector** in the input field
2. Once the selector pops up, choose the **Cloud** tab
3. Click the **Add** (<Plus width={16} height={16} style={{display:"inline"}}/>) icon next to **Cohere**
4. Once you are directed to Cohere settings, insert your **API Key**

**Through Settings:**
1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>)
2. Under **Remote Engines**, select **Cohere**
3. Insert your **API Key**

<br/>
![Cohere](./_assets/cohere.png)
<br/>

### Step 3: Start Using Cohere's Models

1. Open any existing **Thread** or create a new one
2. Select a Cohere model from the **model selector**
3. Start chatting
</Steps>

## Available Cohere Models

Jan automatically includes Cohere's available models. If you want to use a specific Cohere model that you cannot find in **Jan**, follow the instructions in [Manual Setup](/docs/models/manage-models#4-manual-setup) to add custom models:
- See the list of available models in the [Cohere Documentation](https://docs.cohere.com/v2/docs/models).
- The `id` property must match the model name in the list. For example, `command-nightly` or `command-light`.

## Troubleshooting

Common issues and solutions:

**1. API Key Issues**
- Verify your API key is correct and not expired
- Check if you have billing set up on your Cohere account
- Ensure you have access to the model you're trying to use

**2. Connection Problems**
- Check your internet connection
- Verify Cohere's [system status](https://status.cohere.com/)
- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs)

**3. Model Unavailable**
- Confirm your API key has access to the model
- Check if you're using the correct model ID
- Verify your Cohere account has the necessary permissions

Need more help? Join our [Discord community](https://discord.gg/FTk2MvZwJH) or check the [Cohere documentation](https://docs.cohere.com).

@@ -1,66 +0,0 @@
---
title: Any OpenAI Compatible API
description: A step-by-step guide on how to set up Jan to connect with any remote or local API server.
keywords:
  [
    Jan,
    Customizable Intelligence, LLM,
    local AI,
    privacy focus,
    free and open source,
    private and offline,
    conversational AI,
    no-subscription fee,
    large language models,
    import-models-manually,
    remote server,
    OAI compatible,
  ]
---

import { Callout, Steps } from 'nextra/components'

# Any OpenAI-compatible API
This guide outlines the process for configuring Jan as a client for both remote and local API servers, using the `mistral-ins-7b-q4` model for illustration. We'll show how to connect to Jan's API-hosting servers.

<Callout type='info'>
Currently, you can only connect to one OpenAI-compatible endpoint at a time.
</Callout>

<Steps>
### Step 1: Configure a Client Connection

1. Navigate to the **Jan app** > **Settings**.
2. Select **OpenAI**.

<Callout type='info'>
The **OpenAI** fields can be used for any OpenAI-compatible API.
</Callout>

3. Insert the **API Key** and the **endpoint URL** into their respective fields. For example, if you're going to communicate with Jan's API server, you can configure it as follows:
```json
"full_url": "https://<server-ip-address>:1337/v1/chat/completions"
```
<Callout type='info'>
Please note that currently, the code that supports any OpenAI-compatible endpoint only reads the `~/jan/data/extensions/@janhq/inference-openai-extension/settings.json` file, which is the OpenAI Inference Engine entry on the extensions page. It will not search any other files in this directory.
</Callout>

### Step 2: Start Chatting with the Model

1. Navigate to the **Hub** section.
2. Select the model you want to use.
3. Specify the model's parameters.
4. Start the conversation with the model.

</Steps>
<Callout type='info'>
If you have questions or want more preconfigured GGUF models, please join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.
</Callout>

## Troubleshooting

If you encounter any issues during the integration process or while using OpenAI with Jan, consider the following troubleshooting steps:

- Double-check your API credentials to ensure they are correct.
- Check for error messages or logs that may provide insight into the issue.
- Reach out to their API support for assistance if needed.
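The `full_url` configuration described in this removed page can be exercised from any OpenAI-compatible client. The sketch below builds (but does not send) such a request with the standard library; the host, API key, and model id are placeholders standing in for your own values.

```python
import json
import urllib.request

# Sketch: construct an OpenAI-compatible chat-completions request against
# a configured endpoint. "SERVER_IP" and "YOUR_API_KEY" are placeholders;
# the request object is built but never sent.
full_url = "https://SERVER_IP:1337/v1/chat/completions"

payload = {
    "model": "mistral-ins-7b-q4",
    "messages": [{"role": "user", "content": "Hello from Jan!"}],
}

req = urllib.request.Request(
    full_url,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",  # placeholder key
    },
    method="POST",
)

print(req.get_method(), req.full_url)
```

Sending it would be a single `urllib.request.urlopen(req)` call against a reachable server; any OpenAI-compatible endpoint accepts this same body shape.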

@@ -16,47 +16,72 @@ keywords:
---

import { Callout, Steps } from 'nextra/components'
import { Settings, Plus } from 'lucide-react'

# Groq

Jan supports [Groq](https://groq.com/) API integration, allowing you to use Groq's high-performance LLMs (LLaMA 2, Mixtral, and more) through Jan's interface.

## Integrate Groq API with Jan

<Steps>
### Step 1: Get Your API Key
1. Visit the [Groq Console](https://console.groq.com/keys) and sign in
2. Create and copy a new API key, or copy your existing one

<Callout type='info'>
Ensure your API key has sufficient credits
</Callout>

### Step 2: Configure Jan
There are two ways to add your Groq API key in Jan:

**Through Threads:**
1. In Threads, click the **Model** tab in the **right sidebar** or the **model selector** in the input field
2. Once the selector pops up, choose the **Cloud** tab
3. Click the **Add** (<Plus width={16} height={16} style={{display:"inline"}}/>) icon next to **Groq**
4. Once you are directed to Groq settings, insert your **API Key**

**Through Settings:**
1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>)
2. Under **Remote Engines**, select **Groq**
3. Insert your **API Key**

<br/>
![Groq](./_assets/groq.png)
<br/>

### Step 3: Start Using Groq's Models

1. Open any existing **Thread** or create a new one
2. Select a Groq model from the **model selector**
3. Start chatting
</Steps>

## Available Models Through Groq

Jan automatically includes Groq's available models. If you want to use a specific Groq model that you cannot find in **Jan**, follow the instructions in [Manual Setup](/docs/models/manage-models#4-manual-setup) to add custom models:
- See the list of available models in the [Groq Documentation](https://console.groq.com/docs/models).
- The `id` property must match the model name in the list. For example, to use Llama 3.3 70B, set the `id` property to `llama-3.3-70b-versatile`.
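The exact-match requirement above is worth checking mechanically, since a partial id fails silently at the provider. The sketch below validates a configured `id` against a hand-listed subset of Groq model ids; the set is illustrative, not an authoritative catalogue.

```python
# Sketch: a manual-setup "id" must exactly match a model id from Groq's
# model list. This set is a small illustrative subset for the check.
known_groq_ids = {
    "llama-3.3-70b-versatile",  # quoted from the example above
    "mixtral-8x7b-32768",       # assumed illustrative id
}

def is_valid_id(model_id: str) -> bool:
    # A truncated id such as "llama-3.3-70b" (missing the suffix) fails here,
    # just as it would be rejected by the provider.
    return model_id in known_groq_ids

print(is_valid_id("llama-3.3-70b-versatile"))  # True
print(is_valid_id("llama-3.3-70b"))            # False
```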

## Troubleshooting

Common issues and solutions:

**1. API Key Issues**
- Verify your API key is correct and not expired
- Check if you have billing set up on your Groq account
- Ensure you have access to the model you're trying to use

**2. Connection Problems**
- Check your internet connection
- Verify Groq's system status
- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs)

**3. Model Unavailable**
- Confirm your API key has access to the model
- Check if you're using the correct model ID
- Verify your Groq account has the necessary permissions

Need more help? Join our [Discord community](https://discord.gg/FTk2MvZwJH) or check the [Groq documentation](https://console.groq.com/docs).
@@ -14,48 +14,71 @@ keywords:
    API integration
  ]
---

import { Callout, Steps } from 'nextra/components'
import { Settings, Plus } from 'lucide-react'

# Martian

Jan supports [Martian](https://withmartian.com/) API integration, allowing you to use Martian's models through Jan's interface.

## Integrate Martian with Jan

<Steps>
### Step 1: Get Your API Key
1. Visit [Martian API Keys](https://www.withmartian.com/dashboard/undefined/api-keys) and sign in
2. Create and copy a new API key, or copy your existing one

<Callout type='info'>
Ensure your API key has sufficient credits
</Callout>

### Step 2: Configure Jan
There are two ways to add your Martian API key in Jan:

**Through Threads:**
1. In Threads, click the **Model** tab in the **right sidebar** or the **model selector** in the input field
2. Once the selector pops up, choose the **Cloud** tab
3. Click the **Add** (<Plus width={16} height={16} style={{display:"inline"}}/>) icon next to **Martian**
4. Once you are directed to Martian settings, insert your **API Key**

**Through Settings:**
1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>)
2. Under **Remote Engines**, select **Martian**
3. Insert your **API Key**

<br/>
![Martian](./_assets/martian.png)
<br/>

### Step 3: Start Using Martian Models

1. Open any existing **Thread** or create a new one
2. Select a Martian model from the **model selector**
3. Start chatting
</Steps>

## Available Models

Jan includes the Martian Model Router, which automatically selects the best model for your use case. You can start using it right away after configuring your API key. See the list of available models in the [Martian Documentation](https://docs.withmartian.com/martian-model-router/getting-started/supported-models-gateway).

## Troubleshooting

Common issues and solutions:

**1. API Key Issues**
- Verify your API key is correct and not expired
- Check if you have billing set up on your Martian account
- Ensure you have access to the model you're trying to use

**2. Connection Problems**
- Check your internet connection
- Verify Martian's system status
- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs)

**3. Model Unavailable**
- Confirm your API key has access to the model
- Check if you're using the correct model ID
- Verify your Martian account has the necessary permissions

Need more help? Join our [Discord community](https://discord.gg/FTk2MvZwJH) or check the [Martian documentation](https://docs.withmartian.com/martian-model-router).
@ -17,49 +17,71 @@ keywords:
|
||||
---
|
||||
|
||||
import { Callout, Steps } from 'nextra/components'
|
||||
import { Settings, Plus } from 'lucide-react'
|
||||
|
||||
# Mistral AI API
|
||||
# Mistral AI
|
||||
|
||||
## How to Integrate Mistral AI with Jan
|
||||
This guide provides step-by-step instructions for integrating the Mistral API with Jan, enabling users to utilize Mistral's capabilities within Jan's conversational interface.
|
||||
Jan supports [Mistral AI](https://mistral.ai/) API integration, allowing you to use Mistral's powerful models (Mistral Large, Mistral Medium, Mistral Small and more) through Jan's interface.
|
||||
|
||||
Before proceeding, ensure you have the following:
|
||||
- Access to the Jan Application
|
||||
- Mistral API credentials
|
||||
## Integrate Mistral AI with Jan
|
||||
|
||||
## Integration Steps
|
||||
<Steps>
|
||||
|
||||
### Step 1: Configure Mistral API Key
|
||||
|
||||
1. Obtain the Mistral API Key from your [Mistral](https://console.mistral.ai/user/api-keys/) dashboard.
|
||||
2. Copy your **Mistral API Key**.
|
||||
3. There are three ways to configure your API Key in Jan app:
|
||||
- Navigate to the **Jan app** > **Gear Icon (⚙️)** > **My Models** tab > **Add Icon (➕)** next to **Mistral**.
|
||||
- Navigate to the **Jan app** > **Thread** > **Model** tab > **Add Icon (➕)** next to **Mistral**.
|
||||
- Navigate to the **Jan app** > **Gear Icon (⚙️)** > **Mistral** section under Model Providers.
|
||||
4. Insert your **Mistral API Key**.
|
||||
### Step 1: Get Your API Key
|
||||
1. Visit [Mistral AI Platform](https://console.mistral.ai/api-keys/) and sign in
|
||||
2. Create & copy a new API key or copy your existing one
|
||||
|
||||
<Callout type='info'>
|
||||
- Mistral AI offers various endpoints. Refer to their [endpoint documentation](https://docs.mistral.ai/platform/endpoints/) to select the one that fits your requirements.
|
||||
Ensure your API key has sufficient credits
|
||||
</Callout>
|
||||
|
||||
### Step 2: Configure Jan

There are two ways to add your Mistral AI key in Jan:

**Through Threads:**
1. In Threads, click the **Model** tab in the **right sidebar**, or the **model selector** in the input field
2. Once the selector pops up, choose the **Cloud** tab
3. Click the **Add** (<Plus width={16} height={16} style={{display:"inline"}}/>) icon next to **Mistral AI**
4. Once you are directed to the Mistral AI settings, insert your **API Key**

**Through Settings:**
1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>)
2. Under **Remote Engines**, select **Mistral AI**
3. Insert your **API Key**

<br/>

![Connect Mistral](./_assets/mistralai.png)

<br/>

### Step 3: Start Using Mistral's Models

1. Open an existing **Thread** or create a new one
2. Select a Mistral model from the **model selector**
3. Start chatting
</Steps>

## Available Mistral Models

Jan automatically includes Mistral's available models. If you want to use a specific Mistral model that you cannot find in **Jan**, follow the instructions in [Manual Setup](/docs/models/manage-models#4-manual-setup) to add custom models:
- See the list of available models in the [Mistral AI documentation](https://docs.mistral.ai/platform/endpoints).
- The `id` property must match the model name in the list. For example, to use Mistral Large you must set the `id` property to `mistral-large-latest`.

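As an illustration, a manually added Mistral Large entry could look like the sketch below. Only the `id` value is taken from Mistral's model list; the surrounding field names are assumptions, so check [Manual Setup](/docs/models/manage-models#4-manual-setup) for the exact schema your Jan version uses:

```json
{
  "id": "mistral-large-latest",
  "name": "Mistral Large",
  "parameters": {
    "stream": true,
    "temperature": 0.7,
    "max_tokens": 4096
  }
}
```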
## Troubleshooting

Common issues and solutions:

**1. API Key Issues**
- Verify your API key is correct and not expired
- Check if you have billing set up on your Mistral AI account
- Ensure you have access to the model you're trying to use

**2. Connection Problems**
- Check your internet connection
- Verify Mistral AI's system status
- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs)

**3. Model Unavailable**
- Confirm your API key has access to the model
- Check if you're using the correct model ID
- Verify your Mistral AI account has the necessary permissions

Need more help? Join our [Discord community](https://discord.gg/FTk2MvZwJH) or check the [Mistral AI documentation](https://docs.mistral.ai/).
@ -16,45 +16,76 @@ keywords:
---

import { Callout, Steps } from 'nextra/components'
import { Settings, Plus } from 'lucide-react'

# NVIDIA NIM

Jan supports [NVIDIA NIM](https://www.nvidia.com/en-us/ai/) API integration, allowing you to use NVIDIA's Large Language Models through Jan's interface.

<Callout type='info'>
The NVIDIA NIM extension is only supported on Jan version 0.5.1 or later.
</Callout>

## Integration Steps

<Steps>
### Step 1: Get Your API Key

1. Visit the [NVIDIA NIM documentation](https://docs.nvidia.com/nim/nemo-retriever/text-reranking/latest/getting-started.html#generate-an-api-key) and generate an API key
2. Copy your API key

<Callout type='info'>
Ensure your API key has sufficient credits.
</Callout>

### Step 2: Configure Jan

There are two ways to add your NVIDIA NIM API key in Jan:

**Through Threads:**
1. In Threads, click the **Model** tab in the **right sidebar**, or the **model selector** in the input field
2. Once the selector pops up, choose the **Cloud** tab
3. Click the **Add** (<Plus width={16} height={16} style={{display:"inline"}}/>) icon next to **NVIDIA NIM**
4. Once you are directed to the NVIDIA NIM settings, insert your **API Key**

**Through Settings:**
1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>)
2. Under **Remote Engines**, select **NVIDIA NIM**
3. Insert your **API Key**

<br/>

![Connect NVIDIA NIM](./_assets/nvidia-nim.png)

<br/>

### Step 3: Start Using NVIDIA NIM Models

1. Open an existing **Thread** or create a new one
2. Select an NVIDIA NIM model from the **model selector**
3. Start chatting
</Steps>

## Available NVIDIA NIM Models

Jan automatically includes NVIDIA NIM's available models. If you want to use a specific model that you cannot find in **Jan**, follow the instructions in [Manual Setup](/docs/models/manage-models#4-manual-setup) to add custom models:
- See the list of available models in the [NVIDIA NIM catalog](https://build.nvidia.com/models).
- The `id` property must match the model name in the list.

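For example, a custom NIM model entry might be sketched as follows. The `id` shown is one of the model IDs listed in the NVIDIA catalog at the time of writing; the other field names are assumptions, so check [Manual Setup](/docs/models/manage-models#4-manual-setup) for the exact schema:

```json
{
  "id": "meta/llama3-8b-instruct",
  "name": "Llama 3 8B Instruct (NIM)",
  "parameters": {
    "stream": true,
    "max_tokens": 4096
  }
}
```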
## Troubleshooting

Common issues and solutions:

**1. API Key Issues**
- Verify your API key is correct and not expired
- Check if you have billing set up on your NVIDIA account
- Ensure you have access to the model you're trying to use

**2. Connection Problems**
- Check your internet connection
- Verify NVIDIA's system status
- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs)

**3. Model Unavailable**
- Confirm your API key has access to the model
- Check if you're using the correct model ID
- Verify your NVIDIA account has the necessary permissions
- Make sure you're using Jan version 0.5.1 or later

Need more help? Join our [Discord community](https://discord.gg/FTk2MvZwJH) or check the [NVIDIA NIM documentation](https://docs.nvidia.com/nim/large-language-models/latest/getting-started.html).
@ -16,57 +16,73 @@ keywords:
    Azure OpenAI Service,
  ]
---

import { Callout, Steps } from 'nextra/components'
import { Settings, Plus } from 'lucide-react'

# OpenAI

Jan supports [OpenAI](https://openai.com/) and OpenAI-compatible APIs, allowing you to use all models from OpenAI (GPT-4, o1, and more) through Jan's interface.

<Callout type='info'>
The OpenAI Extension can be used with any OpenAI-compatible API endpoint.
</Callout>

## Integration Steps
<Steps>
### Step 1: Get Your API Key

1. Visit the [OpenAI Platform](https://platform.openai.com/api-keys) and sign in
2. Create a new API key, or copy an existing one

<Callout type='info'>
Ensure your API key has sufficient credits.
</Callout>

### Step 2: Configure Jan

There are two ways to add your OpenAI API key in Jan:

**Through Threads:**
1. In Threads, click the **Model** tab in the **right sidebar**, or the **model selector** in the input field
2. Once the selector pops up, choose the **Cloud** tab
3. Click the **Add** (<Plus width={16} height={16} style={{display:"inline"}}/>) icon next to **OpenAI**
4. Once you are directed to the OpenAI settings, insert your **API Key**

**Through Settings:**
1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>)
2. Under **Remote Engines**, select **OpenAI**
3. Insert your **API Key**

<br/>

![Connect OpenAI](./_assets/openai.png)

<br/>

### Step 3: Start Using OpenAI's Models

1. Open an existing **Thread** or create a new one
2. Select an OpenAI model from the **model selector**
3. Start chatting
</Steps>

## Available OpenAI Models

Jan automatically includes popular OpenAI models. If you want to use a specific OpenAI model that you cannot find in Jan, follow the instructions in [Manual Setup](/docs/models/manage-models#4-manual-setup) to add custom models:
- See the list of available models in the [OpenAI Platform](https://platform.openai.com/docs/models/overview).
- The `id` property must match the model name in the list. For example, to use [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo), you must set the `id` property to `gpt-4-1106-preview`.

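As a sketch, a manually added GPT-4 Turbo entry might look like this. Only the `id` value comes from the OpenAI model list; the remaining fields are illustrative, so check [Manual Setup](/docs/models/manage-models#4-manual-setup) for the exact schema:

```json
{
  "id": "gpt-4-1106-preview",
  "name": "GPT-4 Turbo",
  "parameters": {
    "stream": true,
    "temperature": 0.7,
    "max_tokens": 4096
  }
}
```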
## Troubleshooting

Common issues and solutions:

**1. API Key Issues**
- Verify your API key is correct and not expired
- Check if you have billing set up on your OpenAI account
- Ensure you have access to the model you're trying to use

**2. Connection Problems**
- Check your internet connection
- Verify OpenAI's [system status](https://status.openai.com)
- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs)

**3. Model Unavailable**
- Confirm your API key has access to the model
- Check if you're using the correct model ID
- Verify your OpenAI account has the necessary permissions

Need more help? Join our [Discord community](https://discord.gg/FTk2MvZwJH) or check the [OpenAI documentation](https://platform.openai.com/docs).
@ -18,38 +18,83 @@ keywords:
---

import { Callout, Steps } from 'nextra/components'
import { Settings, Plus } from 'lucide-react'

# OpenRouter

[OpenRouter](https://openrouter.ai/) is a gateway that aggregates AI models. Developers can use its API to work with diverse large language models, generative image models, and generative 3D object models, at competitive pricing.

Jan supports OpenRouter API integration, allowing you to use models from various providers (Anthropic, Google, Meta, and more) through a single API.

## Integrate OpenRouter with Jan

<Steps>
### Step 1: Get Your API Key

1. Visit [OpenRouter](https://openrouter.ai/keys) and sign in
2. Create a new API key, or copy an existing one

<Callout type='info'>
Ensure your API key has sufficient credits. OpenRouter credits work across all available models.
</Callout>

### Step 2: Configure Jan

There are two ways to add your OpenRouter key in Jan:

**Through Threads:**
1. In Threads, click the **Model** tab in the **right sidebar**, or the **model selector** in the input field
2. Once the selector pops up, choose the **Cloud** tab
3. Click the **Add** (<Plus width={16} height={16} style={{display:"inline"}}/>) icon next to **OpenRouter**
4. Once you are directed to the OpenRouter settings, insert your **API Key**

**Through Settings:**
1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>)
2. Under **Remote Engines**, select **OpenRouter**
3. Insert your **API Key**

<br/>

![Connect OpenRouter](./_assets/openrouter.png)

<br/>

### Step 3: Start Using OpenRouter Models

1. Open an existing **Thread** or create a new one
2. Select any model from the **model selector** under OpenRouter
3. Start chatting
</Steps>

## Available Models Through OpenRouter

Jan automatically uses your default OpenRouter models. For custom configurations:

**Model Field Settings:**
- Leave empty to use your account's default model
- Specify a model using the format: `organization/model-name`
- Available options can be found in [OpenRouter's Model Reference](https://openrouter.ai/models)

**Examples of Model IDs:**
- Claude 3 Opus: `anthropic/claude-3-opus-20240229`
- Google Gemini Pro: `google/gemini-pro`
- Mistral Large: `mistralai/mistral-large`

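For example, pointing Jan at Claude 3 Opus through OpenRouter might be sketched as the entry below. The `id` uses the `organization/model-name` format described above; the other field names are illustrative, so check [Manual Setup](/docs/models/manage-models#4-manual-setup) for the exact schema:

```json
{
  "id": "anthropic/claude-3-opus-20240229",
  "name": "Claude 3 Opus (OpenRouter)",
  "parameters": {
    "stream": true,
    "max_tokens": 4096
  }
}
```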
## Troubleshooting

Common issues and solutions:

**1. API Key Issues**
- Verify your API key is correct and not expired
- Check if you have sufficient credits in your OpenRouter account
- Ensure you have access to the model you're trying to use

**2. Connection Problems**
- Check your internet connection
- Verify OpenRouter's [system status](https://status.openrouter.ai)
- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs)

**3. Model Unavailable**
- Confirm the model is currently available on OpenRouter
- Check if you're using the correct model ID format
- Verify the model provider is currently operational

Need more help? Join our [Discord community](https://discord.gg/FTk2MvZwJH) or check the [OpenRouter documentation](https://openrouter.ai/docs).