Merge branch 'dev' into add-blogs-tensorrt-llm
README.md
@@ -76,31 +76,31 @@ Jan is an open-source ChatGPT alternative that runs 100% offline on your compute
<tr style="text-align:center">
<td style="text-align:center"><b>Experimental (Nightly Build)</b></td>
<td style="text-align:center">
<a href='https://delta.jan.ai/latest/jan-win-x64-0.4.9-336.exe'>
<img src='./docs/static/img/windows.png' style="height:14px; width: 14px" />
<b>jan.exe</b>
</a>
</td>
<td style="text-align:center">
<a href='https://delta.jan.ai/latest/jan-mac-x64-0.4.9-336.dmg'>
<img src='./docs/static/img/mac.png' style="height:15px; width: 15px" />
<b>Intel</b>
</a>
</td>
<td style="text-align:center">
<a href='https://delta.jan.ai/latest/jan-mac-arm64-0.4.9-336.dmg'>
<img src='./docs/static/img/mac.png' style="height:15px; width: 15px" />
<b>M1/M2</b>
</a>
</td>
<td style="text-align:center">
<a href='https://delta.jan.ai/latest/jan-linux-amd64-0.4.9-336.deb'>
<img src='./docs/static/img/linux.png' style="height:14px; width: 14px" />
<b>jan.deb</b>
</a>
</td>
<td style="text-align:center">
<a href='https://delta.jan.ai/latest/jan-linux-x86_64-0.4.9-336.AppImage'>
<img src='./docs/static/img/linux.png' style="height:14px; width: 14px" />
<b>jan.AppImage</b>
</a>
</td>

@@ -3,5 +3,4 @@ UMAMI_PROJECT_API_KEY=xxxx
UMAMI_APP_URL=xxxx
ALGOLIA_API_KEY=xxxx
ALGOLIA_APP_ID=xxxx
GITHUB_ACCESS_TOKEN=xxxx
API_KEY_BREVO=xxxx
@@ -1,8 +0,0 @@
{
  "label": "Advanced Settings",
  "position": 11,
  "link": {
    "type": "doc",
    "id": "guides/advanced-settings/advanced-settings"
  }
}
@@ -1,131 +0,0 @@
---
title: HTTPS Proxy
sidebar_position: 2
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    advanced-settings,
    https-proxy,
  ]
---

<head>
  <title>HTTPS Proxy</title>
  <meta name="description" content="Learn how to set up an HTTPS proxy server for Jan AI to encrypt data between your browser and the internet, maintain privacy and security, and bypass regional restrictions."/>
  <meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model, advanced-settings, https-proxy"/>
  <meta property="og:title" content="HTTPS Proxy"/>
  <meta property="og:description" content="Learn how to set up an HTTPS proxy server for Jan AI to encrypt data between your browser and the internet, maintain privacy and security, and bypass regional restrictions."/>
  <meta property="og:image" content="https://jan.ai/img/https-proxy.png"/>
  <meta property="og:url" content="https://jan.ai/https-proxy"/>
  <meta name="twitter:card" content="summary_large_image"/>
  <meta name="twitter:title" content="HTTPS Proxy"/>
  <meta name="twitter:description" content="Learn how to set up an HTTPS proxy server for Jan AI to encrypt data between your browser and the internet, maintain privacy and security, and bypass regional restrictions."/>
  <meta name="twitter:image" content="https://jan.ai/img/https-proxy.png"/>
</head>

## Why HTTPS Proxy?

An HTTPS proxy encrypts data between your browser and the internet, making it hard for outsiders to intercept or read. It also helps you maintain your privacy and security while letting you bypass regional restrictions on the internet.

:::note

- When Jan is configured to use an HTTPS proxy, model download speeds may be reduced by the encryption and decryption overhead, and also depend on the network of the cloud service provider.
- The HTTPS proxy does not affect remote model usage.

:::

## Setting Up Your Own HTTPS Proxy Server

This guide provides a simple overview of setting up an HTTPS proxy server using **Squid**, a widely used open-source proxy.

:::note
Other software options are also available, depending on your requirements.
:::

### Step 1: Choosing a Server

1. First, choose a server to host your proxy.

:::note
We recommend a well-known cloud provider such as:
- Amazon AWS
- Google Cloud
- Microsoft Azure
- DigitalOcean
:::

2. Ensure that your server has a public IP address and is accessible from the internet.

### Step 2: Installing Squid

Install **Squid** using the following commands:

```bash
sudo apt-get update
sudo apt-get install squid
```

### Step 3: Configure Squid for HTTPS

To enable HTTPS, you will need to configure Squid with SSL support.

1. Squid requires an SSL certificate to handle HTTPS traffic. You can generate a self-signed certificate or obtain one from a Certificate Authority (CA). For a self-signed certificate, you can use OpenSSL:

```bash
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout squid-proxy.pem -out squid-proxy.pem
```

2. Edit the Squid configuration file `/etc/squid/squid.conf` to include the path to your SSL certificate and enable the HTTPS port:

```bash
http_port 3128 ssl-bump cert=/path/to/your/squid-proxy.pem
ssl_bump server-first all
ssl_bump bump all
```

3. To intercept HTTPS traffic, Squid uses a process called SSL Bumping, which allows it to decrypt and re-encrypt HTTPS traffic. To enable SSL Bumping, ensure the `ssl_bump` directives are configured correctly in your `squid.conf` file.

### Step 4 (Optional): Configure ACLs and Authentication

1. You can define rules to control who can access your proxy. This is done by editing the `squid.conf` file and defining ACLs:

```bash
acl allowed_ips src "/etc/squid/allowed_ips.txt"
http_access allow allowed_ips
```
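The `allowed_ips.txt` file referenced by the ACL above is a plain list of client addresses, one per line; Squid also accepts CIDR ranges. A hypothetical example (the addresses below are documentation placeholders; substitute your own):

```text
203.0.113.10
198.51.100.0/24
```

After editing the file, reload Squid (for example, `sudo systemctl reload squid`) so the ACL is re-read.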

2. If you want to add an authentication layer, Squid supports several authentication schemes. A basic authentication setup might look like this:

```bash
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwords
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
```

### Step 5: Restart and Test Your Proxy

1. After configuring, restart Squid to apply the changes:

```bash
sudo systemctl restart squid
```

2. To test, configure your browser or another client to use the proxy server's IP address and port (the default port is 3128).
3. Check whether you can access the internet through your proxy.
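You can also test the proxy from the shell before configuring a browser. The snippet below is a sketch: `use_proxy` is a hypothetical helper, the IP address is a documentation placeholder, and `3128` is Squid's default port.

```shell
# Point common CLI tools at the proxy via the standard environment variables.
# 203.0.113.10 is a placeholder; replace it with your server's public IP.
use_proxy() {
  export http_proxy="http://$1:$2"
  export https_proxy="http://$1:$2"
  echo "proxy set to $https_proxy"
}

use_proxy 203.0.113.10 3128
# Then, for example:
#   curl https://example.com    (add -k if the proxy presents a self-signed certificate)
```

With the variables set, `curl` and most command-line tools route their traffic through the proxy; unset `http_proxy` and `https_proxy` to go direct again.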

:::tip

Tips for securing your proxy:
- **Firewall rules**: Ensure that only intended users or IP addresses can connect to your proxy server. This can be achieved by setting up appropriate firewall rules.
- **Regular updates**: Keep your server and proxy software updated to ensure that you are protected against known vulnerabilities.
- **Monitoring and logging**: Monitor your proxy server for unusual activity, and enable logging to keep track of the traffic passing through your proxy.

:::

## Setting Up Jan to Use Your HTTPS Proxy

Once your HTTPS proxy server is set up, you can configure Jan to use it.

1. Navigate to `Settings` > `Advanced Settings` and specify the HTTPS proxy (proxy auto-configuration files and SOCKS proxies are not supported).
2. You can turn on `Ignore SSL Certificates` if you are using a self-signed certificate. This setting allows self-signed or unverified certificates.
@@ -1,60 +0,0 @@
---
title: Best Practices
sidebar_position: 3
description: Comprehensive set of best practices.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    acknowledgements,
    third-party libraries,
  ]
---

<head>
  <title>Best Practices - Jan Guides</title>
  <meta name="description" content="Comprehensive set of best practices for using Jan AI locally. Learn about setting up the right models, configuring Jan, mastering prompt engineering, and integrating Jan with other systems."/>
  <meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model, best practices, quickstart guide, prompt engineering, integrations"/>
  <meta property="og:title" content="Best Practices - Jan Guides"/>
  <meta property="og:description" content="Comprehensive set of best practices for using Jan AI locally. Learn about setting up the right models, configuring Jan, mastering prompt engineering, and integrating Jan with other systems."/>
  <meta property="og:url" content="https://jan.ai/guides/best-practices"/>
  <meta name="twitter:card" content="summary"/>
  <meta name="twitter:title" content="Best Practices - Jan Guides"/>
  <meta name="twitter:description" content="Comprehensive set of best practices for using Jan AI locally. Learn about setting up the right models, configuring Jan, mastering prompt engineering, and integrating Jan with other systems."/>
</head>

Jan is a versatile platform offering solutions for integrating AI locally across various platforms. This guide outlines best practices for developers, analysts, and AI enthusiasts who want to run AI locally on their computers. Implementing these practices will help you get the best performance out of your AI models.

## Follow the Quickstart Guide

The [quickstart guide](quickstart.mdx) is designed to facilitate a quick setup process. It provides clear instructions and simple steps to get you up and running with Jan quickly. Even if you are inexperienced with AI, the quickstart offers valuable insights and tips to help you get started.

## Setting up the Right Models

Jan offers a range of pre-configured AI models tailored to different tasks and industries. Identify which one aligns with your objectives, considering factors such as:
- Capabilities
- Accuracy
- Processing speed

:::note
- Some of these factors also depend on your hardware; please see the hardware requirements.
- Choosing the right model is important for achieving the best performance.
:::

## Setting up Jan

Familiarize yourself with the Jan application. Jan offers advanced settings that you can adjust, and these settings may influence how your AI behaves locally. See the [Advanced Settings](./advanced-settings/advanced-settings.mdx) article for a complete list of Jan's configurations and instructions on how to configure them.

## Integrations

One of Jan's key features is its ability to integrate with many systems. Whether you are incorporating Jan with an open-source LLM provider or other tools, it is important to understand the integration capabilities and limitations.

## Mastering Prompt Engineering

Prompt engineering is an important skill for getting the desired outputs from AI models. Mastering it can significantly enhance the performance and quality of the AI's responses. Some tips:
- Ask the model to adopt a persona
- Be specific and detailed to get more specific answers
- Provide examples or reference text for context at the beginning
- Use clear and concise language
- Use certain keywords and phrases
@@ -1,177 +0,0 @@
---
title: Broken Build
sidebar_position: 1
hide_table_of_contents: true
description: A step-by-step guide to fix errors that prevent the project from compiling or running successfully.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    troubleshooting,
  ]
---

<head>
  <title>Broken Build</title>
  <meta name="description" content="A step-by-step guide to fix errors that prevent the project from compiling or running successfully. Learn how to troubleshoot and resolve issues where Jan gets stuck in a broken build after installation."/>
  <meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model, troubleshooting"/>
  <meta property="og:title" content="Broken Build"/>
  <meta property="og:description" content="A step-by-step guide to fix errors that prevent the project from compiling or running successfully. Learn how to troubleshoot and resolve issues where Jan gets stuck in a broken build after installation."/>
  <meta property="og:image" content="https://jan.ai/img/broken-build.png"/>
  <meta property="og:url" content="https://jan.ai/broken-build"/>
  <meta name="twitter:card" content="summary_large_image"/>
  <meta name="twitter:title" content="Broken Build"/>
  <meta name="twitter:description" content="A step-by-step guide to fix errors that prevent the project from compiling or running successfully. Learn how to troubleshoot and resolve issues where Jan gets stuck in a broken build after installation."/>
  <meta name="twitter:image" content="https://jan.ai/img/broken-build.png"/>
</head>

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

This guide provides steps to troubleshoot and resolve the issue where Jan is stuck in a broken build after installation.

<Tabs>
<TabItem value="mac" label="Mac" default>

### 1. Uninstall Jan

Delete Jan from your `/Applications` folder.

### 2. Delete Application Data, Cache, and User Data

```zsh
# Step 1: Delete the application data
## Newer versions
rm -rf ~/Library/Application\ Support/jan
## Versions 0.2.0 and older
rm -rf ~/Library/Application\ Support/jan-electron

# Step 2: Clear the application cache
rm -rf ~/Library/Caches/jan*

# Step 3: Remove all user data
rm -rf ~/jan
```

### 3. Additional Step for Versions Before 0.4.2

If you are using a version before `0.4.2`, you also need to run the following commands:

```zsh
ps aux | grep nitro
# Look for processes like `nitro` and `nitro_arm_64`, and kill them one by one by process ID
kill -9 <PID>
```
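If you prefer a one-liner over finding PIDs manually, `pkill` can match the process name directly. This is a sketch of an equivalent step: `-f` matches against the full command line, and `|| true` keeps the command from reporting an error when no such process exists.

```shell
# Kill any leftover nitro processes in one step (a no-op if none are running).
pkill -9 -f nitro || true
```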

### 4. Download the Latest Version

Download the latest version of Jan from our [homepage](https://jan.ai/).
</TabItem>

<TabItem value="windows" label="Windows">

### 1. Uninstall Jan

To uninstall Jan on Windows, use the [Windows Control Panel](https://support.microsoft.com/en-us/windows/uninstall-or-remove-apps-and-programs-in-windows-4b55f974-2cc6-2d2b-d092-5905080eaf98).

### 2. Delete Application Data, Cache, and User Data

```sh
# Delete your own user data
cd ~ # Or wherever you moved the Jan Data Folder
rm -r ./jan

# Delete the application cache
cd C:\Users\YOUR_USERNAME\AppData\Roaming
rm -r ./Jan
```

### 3. Additional Step for Versions Before 0.4.2

If you are using a version before `0.4.2`, you also need to run the following commands:

```sh
# Find the process ID (PID) of the nitro process by filtering the process list by name
tasklist | findstr "nitro"
# Once you have the PID of the process you want to terminate, run `taskkill`
taskkill /F /PID <PID>
```

### 4. Download the Latest Version

Download the latest version of Jan from our [homepage](https://jan.ai/).
</TabItem>

<TabItem value="linux" label="Linux">

### 1. Uninstall Jan

<Tabs groupId="linux_type">
<TabItem value="linux_main" label="Linux">

To uninstall Jan, use your package manager's uninstall or remove option.

This will return your system to its state before the installation of Jan.

This method can also reset all settings if you are experiencing any issues with Jan.

</TabItem>
<TabItem value="deb_ub" label="Debian / Ubuntu">

To uninstall Jan, run the following command:

```sh
sudo apt-get remove jan
# where `jan` is the name of the Jan package
```

This will return your system to its state before the installation of Jan.

This method can also be used to reset all settings if you are experiencing any issues with Jan.

</TabItem>
<TabItem value="other" label="Others">

To uninstall Jan, delete the `.AppImage` file.

If you wish to completely remove all user data associated with Jan after uninstallation, delete the user data at `~/jan`.

This method can also reset all settings if you are experiencing any issues with Jan.

</TabItem>
</Tabs>

### 2. Delete Application Data, Cache, and User Data

```sh
# Delete the user data folder located at `~/jan`
rm -rf ~/jan
```

### 3. Additional Step for Versions Before 0.4.2

If you are using a version before `0.4.2`, you also need to run the following commands:

```zsh
ps aux | grep nitro
# Look for processes like `nitro` and `nitro_arm_64`, and kill them one by one by process ID
kill -9 <PID>
```

### 4. Download the Latest Version

Download the latest version of Jan from our [homepage](https://jan.ai/).
</TabItem>
</Tabs>

By following these steps, you can cleanly uninstall and reinstall Jan, ensuring a smooth and error-free experience with the latest version.

:::note

Before reinstalling Jan, ensure it's completely removed from all shared spaces if it's installed on multiple user accounts on your device.

:::
@@ -1,175 +0,0 @@
---
title: Troubleshooting NVIDIA GPU
sidebar_position: 2
description: A step-by-step guide to enable Jan to properly leverage NVIDIA GPU resources, avoiding performance issues.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    troubleshooting,
    using GPU,
  ]
---

<head>
  <title>Troubleshooting NVIDIA GPU</title>
  <meta name="description" content="A step-by-step guide to enable Jan to properly leverage NVIDIA GPU resources, avoiding performance issues. Learn how to troubleshoot and resolve issues when Jan does not utilize the NVIDIA GPU on Windows and Linux systems."/>
  <meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model, troubleshooting, using GPU"/>
  <meta property="og:title" content="Troubleshooting NVIDIA GPU"/>
  <meta property="og:description" content="A step-by-step guide to enable Jan to properly leverage NVIDIA GPU resources, avoiding performance issues. Learn how to troubleshoot and resolve issues when Jan does not utilize the NVIDIA GPU on Windows and Linux systems."/>
  <meta property="og:image" content="https://jan.ai/img/troubleshooting-nvidia-gpu.png"/>
  <meta property="og:url" content="https://jan.ai/troubleshooting-nvidia-gpu"/>
  <meta name="twitter:card" content="summary_large_image"/>
  <meta name="twitter:title" content="Troubleshooting NVIDIA GPU"/>
  <meta name="twitter:description" content="A step-by-step guide to enable Jan to properly leverage NVIDIA GPU resources, avoiding performance issues. Learn how to troubleshoot and resolve issues when Jan does not utilize the NVIDIA GPU on Windows and Linux systems."/>
  <meta name="twitter:image" content="https://jan.ai/img/troubleshooting-nvidia-gpu.png"/>
</head>

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

This guide provides steps to troubleshoot and resolve issues when the Jan app does not utilize the NVIDIA GPU on Windows and Linux systems.

### 1. Ensure GPU Mode Requirements

<Tabs>
<TabItem value="windows" label="Windows">

#### NVIDIA Driver

- Install an [NVIDIA Driver](https://www.nvidia.com/Download/index.aspx) supporting CUDA 11.7 or higher.
- Use the following command to verify the installation:

```sh
nvidia-smi
```

#### CUDA Toolkit

- Install a [CUDA toolkit](https://developer.nvidia.com/cuda-downloads) compatible with your NVIDIA driver.
- Use the following command to verify the installation:

```sh
nvcc --version
```

</TabItem>
<TabItem value="linux" label="Linux">

#### NVIDIA Driver

- Install an [NVIDIA Driver](https://www.nvidia.com/Download/index.aspx) supporting CUDA 11.7 or higher.
- Use the following command to verify the installation:

```sh
nvidia-smi
```

#### CUDA Toolkit

- Install a [CUDA toolkit](https://developer.nvidia.com/cuda-downloads) compatible with your NVIDIA driver.
- Use the following command to verify the installation:

```sh
nvcc --version
```

#### Linux Specifics

- Ensure that `gcc-11`, `g++-11`, `cpp-11`, or higher is installed.
  - See the [instructions](https://gcc.gnu.org/projects/cxx-status.html#cxx17) for Ubuntu installation.
- **Post-Installation Actions**: Add the CUDA libraries to `LD_LIBRARY_PATH`.
  - Follow the [Post-installation Actions](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions) instructions.

</TabItem>
</Tabs>

### 2. Switch to GPU Mode

Jan defaults to CPU mode but automatically switches to GPU mode if your system supports it, selecting the GPU with the highest VRAM. You can check this setting in `Settings` > `Advanced Settings`.

#### Troubleshooting Tips

If GPU mode isn't enabled by default:

1. Confirm that you have installed an NVIDIA driver supporting CUDA 11.7 or higher. Refer to [CUDA compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/index.html#binary-compatibility__table-toolkit-driver).
2. Ensure the CUDA toolkit is compatible with your NVIDIA driver. Refer to [CUDA compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/index.html#binary-compatibility__table-toolkit-driver).
3. On Linux, add CUDA's `.so` libraries to `LD_LIBRARY_PATH`. On Windows, ensure that CUDA's `.dll` libraries are on the `PATH`. Refer to the [Windows setup](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html#environment-setup) instructions.
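On Linux, the post-installation step usually comes down to exporting the toolkit paths, typically in `~/.bashrc`. A sketch, assuming a CUDA 12.2 toolkit installed at its default prefix (check `/usr/local` and adjust the version to match yours):

```shell
# Hypothetical default install prefix for CUDA 12.2; adjust to your toolkit version.
export PATH="/usr/local/cuda-12.2/bin${PATH:+:${PATH}}"
export LD_LIBRARY_PATH="/usr/local/cuda-12.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}"
```

After adding these lines, start a new shell (or `source ~/.bashrc`) before launching Jan so the libraries are visible to it.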

### 3. Check GPU Settings

1. Navigate to `Settings` > `Advanced Settings` > `Jan Data Folder` to access the GPU settings.
2. Open the `settings.json` file in the `settings` folder. Here's an example:

```json title="~/jan/settings/settings.json"
{
  "notify": true,
  "run_mode": "gpu",
  "nvidia_driver": {
    "exist": true,
    "version": "531.18"
  },
  "cuda": {
    "exist": true,
    "version": "12"
  },
  "gpus": [
    {
      "id": "0",
      "vram": "12282"
    },
    {
      "id": "1",
      "vram": "6144"
    },
    {
      "id": "2",
      "vram": "6144"
    }
  ],
  "gpu_highest_vram": "0"
}
```
### 4. Restart Jan

Restart the Jan application to apply the changes.

#### Troubleshooting Tips

- Ensure the `nvidia_driver` and `cuda` fields reflect the installed software.
- If the `gpus` field is empty or does not list your GPU, check your NVIDIA driver and CUDA toolkit installations.
- For further assistance, share the `settings.json` file.

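A quick way to check these fields from the terminal is to parse `settings.json` directly. The helper below is a sketch, not part of Jan; it assumes `python3` is installed, and the path in the example is Jan's default data folder location.

```shell
# Print the GPU-related fields from a Jan settings.json file.
jan_gpu_report() {
  python3 - "$1" <<'EOF'
import json, sys

with open(sys.argv[1]) as f:
    s = json.load(f)
print("run_mode:", s.get("run_mode"))
print("nvidia_driver:", s.get("nvidia_driver", {}).get("version"))
print("cuda:", s.get("cuda", {}).get("version"))
print("gpus:", ", ".join(g.get("id", "?") for g in s.get("gpus", [])) or "none detected")
EOF
}

# e.g.: jan_gpu_report ~/jan/settings/settings.json
```

If `run_mode` is not `gpu`, or `gpus` prints `none detected`, revisit the driver and toolkit checks in step 1.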
### Tested Configurations

- **Windows 11 Pro 64-bit:**
  - GPU: NVIDIA GeForce RTX 4070 Ti
  - CUDA: 12.2
  - NVIDIA driver: 531.18 (bare metal)

- **Ubuntu 22.04 LTS:**
  - GPU: NVIDIA GeForce RTX 4070 Ti
  - CUDA: 12.2
  - NVIDIA driver: 545 (bare metal)

- **Ubuntu 20.04 LTS:**
  - GPU: NVIDIA GeForce GTX 1660 Ti
  - CUDA: 12.1
  - NVIDIA driver: 535 (Proxmox VM with GPU passthrough)

- **Ubuntu 18.04 LTS:**
  - GPU: NVIDIA GeForce GTX 1660 Ti
  - CUDA: 12.1
  - NVIDIA driver: 535 (Proxmox VM with GPU passthrough)

### Common Issues and Solutions

1. If the issue persists, try installing the [Nightly version](https://jan.ai/install/nightly/).
2. Ensure your (V)RAM is accessible; some users with virtual RAM may require additional configuration.
3. Seek assistance in the [Jan Discord](https://discord.gg/mY69SZaMaC).
@@ -1,63 +0,0 @@
---
title: How to Get Error Logs
sidebar_position: 5
description: A step-by-step guide to get the Jan app error logs.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    troubleshooting,
    permission denied,
  ]
---

<head>
  <title>How to Get Error Logs</title>
  <meta name="description" content="A step-by-step guide to get the Jan app error logs. Learn how to access error logs for Jan application, UI, and API server, along with precautions to redact sensitive information when sharing logs."/>
  <meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model, troubleshooting, permission denied"/>
  <meta property="og:title" content="How to Get Error Logs"/>
  <meta property="og:description" content="A step-by-step guide to get the Jan app error logs. Learn how to access error logs for Jan application, UI, and API server, along with precautions to redact sensitive information when sharing logs."/>
  <meta property="og:image" content="https://jan.ai/img/how-to-get-error-logs.png"/>
  <meta property="og:url" content="https://jan.ai/how-to-get-error-logs"/>
  <meta name="twitter:card" content="summary_large_image"/>
  <meta name="twitter:title" content="How to Get Error Logs"/>
  <meta name="twitter:description" content="A step-by-step guide to get the Jan app error logs. Learn how to access error logs for Jan application, UI, and API server, along with precautions to redact sensitive information when sharing logs."/>
  <meta name="twitter:image" content="https://jan.ai/img/how-to-get-error-logs.png"/>
</head>

To get the error logs of your Jan application, follow the steps below:

### Jan Application

1. Navigate to the main dashboard.
2. Click the **gear icon (⚙️)** on the bottom left of your screen.
3. On the **Settings** screen, click **Advanced Settings**.
4. Next to **Jan Data Folder**, click the **folder icon (📂)** to access the data.
5. Click the **logs** folder.

### Jan UI

1. Open your Unix or Linux terminal.
2. Use the following command to get the most recent 50 lines of the log file:

```bash
tail -n 50 ~/jan/logs/app.log
```

### Jan API Server

1. Open your Unix or Linux terminal.
2. Use the following command to get the most recent 50 lines of the log file:

```bash
tail -n 50 ~/jan/logs/server.log
```

:::warning
Be sure to redact any private or sensitive information when sharing logs or error details.
:::
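One lightweight way to follow the warning above is to strip your home directory path from a log before sharing it. This is a sketch; the `redact_log` helper is hypothetical, not part of Jan.

```shell
# Replace the user's home directory with "~" so usernames don't leak into shared logs.
redact_log() {
  sed "s|$HOME|~|g" "$1"
}

# e.g.: redact_log ~/jan/logs/app.log > app-redacted.log
```

Review the redacted file for any remaining sensitive values (API keys, hostnames) before posting it.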

:::note
If you have any questions or are looking for support, please don't hesitate to contact us via our [Discord community](https://discord.gg/Dt7MxDyNNZ) or create a new issue in our [GitHub repository](https://github.com/janhq/jan/issues/new/choose).
:::

@@ -1,45 +0,0 @@
---
title: No Assistant Available
sidebar_position: 7
description: Troubleshooting steps to resolve the "No assistant available" issue.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    troubleshooting,
    no assistant available,
  ]
---

<head>
  <title>No Assistant Available</title>
  <meta name="description" content="Troubleshooting steps to resolve issues when encountering the 'No assistant available' error message in Jan. Learn how to identify and remove unintentional files in the /jan/assistants directory to resolve the issue."/>
  <meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model, troubleshooting, no assistant available"/>
  <meta property="og:title" content="No Assistant Available"/>
  <meta property="og:description" content="Troubleshooting steps to resolve issues when encountering the 'No assistant available' error message in Jan. Learn how to identify and remove unintentional files in the /jan/assistants directory to resolve the issue."/>
  <meta property="og:image" content="https://jan.ai/img/no-assistant-available.png"/>
  <meta property="og:url" content="https://jan.ai/no-assistant-available"/>
  <meta name="twitter:card" content="summary_large_image"/>
  <meta name="twitter:title" content="No Assistant Available"/>
  <meta name="twitter:description" content="Troubleshooting steps to resolve issues when encountering the 'No assistant available' error message in Jan. Learn how to identify and remove unintentional files in the /jan/assistants directory to resolve the issue."/>
  <meta name="twitter:image" content="https://jan.ai/img/no-assistant-available.png"/>
</head>

When you encounter the following error message:
|
||||
```
|
||||
No assistant available.
|
||||
```
|
||||
|
||||
This issue arises when a new, unintentional file appears in `/jan/assistants`.
|
||||
|
||||
It can be resolved through the following steps:
|
||||
|
||||
1. Access the `/jan/assistants` directory using a file manager or terminal.
|
||||
|
||||
2. Within `/jan/assistants`, this directory should only contain a folder named `jan`. Identify any file outside of this folder and remove it.
|
||||
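The steps above can be sketched in the shell. The snippet below is a demonstration against a throwaway directory standing in for `~/jan/assistants`; `demo_dir` and the `.DS_Store` stray are stand-ins, not part of a real Jan installation:

```shell
# Demo: list entries in an assistants directory other than the expected `jan` folder.
# `demo_dir` is a temporary stand-in for ~/jan/assistants.
demo_dir=$(mktemp -d)
mkdir "$demo_dir/jan"          # the one folder that should be present
touch "$demo_dir/.DS_Store"    # example of an unintentional stray file
# Anything listed here is a candidate for removal.
strays=$(find "$demo_dir" -mindepth 1 -maxdepth 1 ! -name jan)
echo "$strays"
rm -rf "$demo_dir"
```

Run against the real directory, the same `find` invocation (with `~/jan/assistants` in place of `$demo_dir`) lists exactly the files step 2 asks you to remove.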
@ -1,53 +0,0 @@
---
title: Permission Denied
sidebar_position: 1
description: A step-by-step guide to fix the issue when access is denied due to insufficient permissions.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    troubleshooting,
    permission denied,
  ]
---

<head>
  <title>Resolving "Permission Denied" Error in Jan AI</title>
  <meta charSet="utf-8" />
  <meta name="description" content="Learn how to resolve the 'Permission Denied' error encountered while running Jan AI by changing ownership of the `~/.npm` directory to the current user." />
  <meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model, troubleshooting, permission denied" />
  <meta name="twitter:card" content="summary" />
  <link rel="canonical" href="https://jan.ai/troubleshooting/permission-denied" />
  <meta property="og:title" content="Resolving 'Permission Denied' Error in Jan AI" />
  <meta property="og:description" content="Learn how to resolve the 'Permission Denied' error encountered while running Jan AI by changing ownership of the `~/.npm` directory to the current user." />
  <meta property="og:url" content="https://jan.ai/troubleshooting/permission-denied" />
  <meta property="og:type" content="article" />
  <meta property="og:image" content="https://jan.ai/img/og-image.png" />
</head>

When you run Jan, you may encounter the following error:

```
Uncaught (in promise) Error: Error invoking layout-480796bff433a3a3.js:538 remote method 'installExtension':
Error Package /Applications/Jan.app/Contents/Resources/app.asar.unpacked/pre-install/janhq-assistant-extension-1.0.0.tgz does not contain a valid manifest:
Error EACCES: permission denied, mkdtemp '/Users/username/.npm/_cacache/tmp/ueCMn4'
```

This error is mainly caused by a permissions problem during installation. To resolve it, follow these steps:

1. Open your terminal.

2. Execute the following command to change ownership of the `~/.npm` directory to the current user:

```sh
sudo chown -R $(whoami) ~/.npm
```

:::note
This command grants the permissions needed for the Jan installation, resolving the error.
:::
@ -1,65 +0,0 @@
---
title: Something's Amiss
sidebar_position: 4
description: A step-by-step guide to resolve an unspecified or general error.
---

<head>
  <title>Something's Amiss</title>
  <meta name="description" content="A step-by-step guide to resolve an unspecified or general error encountered when starting a chat with a model in Jan. Learn how to troubleshoot and resolve common issues, such as ensuring OS updates, selecting appropriate model sizes, installing the latest Nightly release, checking V/RAM accessibility, downloading CUDA for Nvidia GPU users, and verifying port status."/>
  <meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model, troubleshooting, something's amiss"/>
  <meta property="og:title" content="Something's Amiss"/>
  <meta property="og:description" content="A step-by-step guide to resolve an unspecified or general error encountered when starting a chat with a model in Jan. Learn how to troubleshoot and resolve common issues, such as ensuring OS updates, selecting appropriate model sizes, installing the latest Nightly release, checking V/RAM accessibility, downloading CUDA for Nvidia GPU users, and verifying port status."/>
  <meta property="og:url" content="https://jan.ai/somethings-amiss"/>
  <meta name="twitter:card" content="summary"/>
  <meta name="twitter:title" content="Something's Amiss"/>
  <meta name="twitter:description" content="A step-by-step guide to resolve an unspecified or general error encountered when starting a chat with a model in Jan. Learn how to troubleshoot and resolve common issues, such as ensuring OS updates, selecting appropriate model sizes, installing the latest Nightly release, checking V/RAM accessibility, downloading CUDA for Nvidia GPU users, and verifying port status."/>
</head>

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

If you encounter a Something's Amiss error when starting a chat with a model, here's how to resolve it:

1. Ensure your OS is up to date.
2. Choose a model smaller than 80% of your hardware's V/RAM. For example, on an 8GB machine, opt for models smaller than 6GB.
3. Install the latest [Nightly release](https://jan.ai/install/nightly/) or [clear the application cache](https://jan.ai/troubleshooting/stuck-on-broken-build/) when reinstalling Jan.
4. Confirm your V/RAM is accessible, particularly if you are using virtual RAM.
5. Nvidia GPU users should download [CUDA](https://developer.nvidia.com/cuda-downloads).
6. Linux users: ensure your system has gcc 11, g++ 11, cpp 11, or higher. Refer to this [link](https://jan.ai/guides/troubleshooting/gpu-not-used/#specific-requirements-for-linux) for details.
7. If you [check the app logs](https://jan.ai/troubleshooting/how-to-get-error-logs/) and see a `Bind address failed at 127.0.0.1:3928` error, the port may already be in use. To check the port status, use the `netstat` command:

<Tabs>
  <TabItem value="mac" label="MacOS" default>

```sh
netstat -an | grep 3928
```

  </TabItem>
  <TabItem value="windows" label="Windows">

```sh
netstat -ano | find "3928"
tasklist /fi "PID eq 3928"
```

  </TabItem>
  <TabItem value="linux" label="Linux">

```sh
netstat -anpe | grep "3928"
```

  </TabItem>
</Tabs>

:::note

`netstat` displays the contents of various network-related data structures for active connections.

:::

:::tip

Jan uses the following ports:

- Nitro: `3928`
- Jan API Server: `1337`
- Jan Documentation: `3001`

:::
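The per-OS `netstat` invocations above can be wrapped in a small helper that checks all three of Jan's default ports at once. This is a sketch, not part of Jan itself; it assumes `netstat` is on your `PATH`, and the grep pattern may need adjusting for your OS's `netstat` output format:

```shell
# Sketch: report whether Jan's default ports are already bound.
# Matches both `127.0.0.1:3928` (Linux) and `127.0.0.1.3928` (macOS) styles.
port_in_use() { netstat -an 2>/dev/null | grep -Eq "[.:]$1([^0-9]|$)"; }
for p in 3928 1337 3001; do
  if port_in_use "$p"; then echo "port $p is in use"; else echo "port $p looks free"; fi
done
```

If a port is reported in use, stop the process holding it (or pick a different port) before starting Jan.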
@ -1,74 +0,0 @@
---
title: Stuck on Loading Model
sidebar_position: 8
description: Troubleshooting steps to resolve issues with models stuck on loading.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    troubleshooting,
    stuck on loading model,
  ]
---

<head>
  <title>Stuck on Loading Model</title>
  <meta name="description" content="Troubleshooting steps to resolve issues related to the loading model in Jan. Learn how to fix problems such as model loading stuck due to missing Windows Management Instrumentation Command-line (WMIC) path in the system's PATH environment variable or due to using CPUs without Advanced Vector Extensions (AVX) support."/>
  <meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model, troubleshooting, stuck on loading model"/>
  <meta property="og:title" content="Stuck on Loading Model"/>
  <meta property="og:description" content="Troubleshooting steps to resolve issues related to the loading model in Jan. Learn how to fix problems such as model loading stuck due to missing Windows Management Instrumentation Command-line (WMIC) path in the system's PATH environment variable or due to using CPUs without Advanced Vector Extensions (AVX) support."/>
  <meta property="og:url" content="https://jan.ai/stuck-on-loading-model"/>
  <meta name="twitter:card" content="summary"/>
  <meta name="twitter:title" content="Stuck on Loading Model"/>
  <meta name="twitter:description" content="Troubleshooting steps to resolve issues related to the loading model in Jan. Learn how to fix problems such as model loading stuck due to missing Windows Management Instrumentation Command-line (WMIC) path in the system's PATH environment variable or due to using CPUs without Advanced Vector Extensions (AVX) support."/>
</head>

## 1. Issue: Model Loading Stuck Due To Missing Windows Management Instrumentation Command-line (WMIC)

A model can get stuck on loading in Jan when the Windows Management Instrumentation Command-line (WMIC) path is not included in the system's `PATH` environment variable.

Error message:

```
index.js:47 Uncaught (in promise) Error: Error invoking remote method 'invokeExtensionFunc': Error: Command failed: WMIC CPU Get NumberOfCores
```

It can be resolved through the following steps:

1. **Open System Properties:**
   - Press `Windows key + R`.
   - Type `sysdm.cpl` and press `Enter`.

2. **Access Environment Variables:**
   - Go to the "Advanced" tab.
   - Click the "Environment Variables" button.

3. **Edit System PATH:**
   - Under "System Variables" find and select `Path`.
   - Click "Edit."

4. **Add WMIC Path:**
   - Click "New" and enter `C:\Windows\System32\Wbem`.

5. **Save Changes:**
   - Click "OK" to close and save your changes.

6. **Verify Installation:**
   - Restart any command prompts or terminals.
   - Run `where wmic` to verify. Expected output: `C:\Windows\System32\wbem\WMIC.exe`.

## 2. Issue: Model Loading Stuck Due To CPU Without AVX

Models can also get stuck on loading in Jan when an older-generation CPU does not support Advanced Vector Extensions (AVX).

To check if your CPU supports AVX, visit the following link: [CPUs with AVX](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX)

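On Linux, you can also check for AVX support from the shell by inspecting the CPU flags. The helper below is a portable sketch: it tests any flags string you pass in, so the sample string is a stand-in; on a real Linux system you would feed it the `flags` line from `/proc/cpuinfo`:

```shell
# Sketch: check a CPU flags string for AVX support.
# On Linux, try: has_avx "$(grep -m1 '^flags' /proc/cpuinfo)"
has_avx() { case " $1 " in *" avx "*) echo yes ;; *) echo no ;; esac; }
has_avx "fpu vme de pse avx sse4_2"   # prints: yes
```

If the answer is `no`, the workaround below applies.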
:::warning[Please use this with caution]
As a workaround, consider using an [emulator](https://www.intel.com/content/www/us/en/developer/articles/tool/software-development-emulator.html) to simulate AVX support.
:::
@ -1,38 +0,0 @@
---
title: Thread Disappearance
sidebar_position: 6
description: Troubleshooting steps to resolve threads suddenly disappearing.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    troubleshooting,
    thread disappearance,
  ]
---

<head>
  <title>Thread Disappearance</title>
  <meta name="description" content="Troubleshooting steps to resolve issues with threads suddenly disappearing in Jan. Learn how to fix problems when old threads vanish due to the creation of new, unintentional files in the /jan/threads directory."/>
  <meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model, troubleshooting, thread disappearance"/>
  <meta property="og:title" content="Thread Disappearance"/>
  <meta property="og:description" content="Troubleshooting steps to resolve issues with threads suddenly disappearing in Jan. Learn how to fix problems when old threads vanish due to the creation of new, unintentional files in the /jan/threads directory."/>
  <meta property="og:url" content="https://jan.ai/thread-disappearance"/>
  <meta name="twitter:card" content="summary"/>
  <meta name="twitter:title" content="Thread Disappearance"/>
  <meta name="twitter:description" content="Troubleshooting steps to resolve issues with threads suddenly disappearing in Jan. Learn how to fix problems when old threads vanish due to the creation of new, unintentional files in the /jan/threads directory."/>
</head>

Old threads can suddenly disappear when a new, unintentional file is created in `/jan/threads`.

It can be resolved through the following steps:

1. Go to `/jan/threads`.

2. The `/jan/threads` directory contains folders named with the prefix `jan_` followed by an ID (e.g., `jan_123`). Look for any file not conforming to this naming pattern and remove it.
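The steps above can be sketched in the shell. The snippet below runs against a throwaway directory standing in for `~/jan/threads`; `demo_dir` and the `notes.txt` stray are stand-ins, not part of a real Jan installation:

```shell
# Demo: list entries in a threads directory that don't match the jan_* pattern.
# `demo_dir` is a temporary stand-in for ~/jan/threads.
demo_dir=$(mktemp -d)
mkdir "$demo_dir/jan_123" "$demo_dir/jan_456"   # normal thread folders
touch "$demo_dir/notes.txt"                      # example of an unintentional stray file
# Anything listed here is a candidate for removal.
strays=$(find "$demo_dir" -mindepth 1 -maxdepth 1 ! -name 'jan_*')
echo "$strays"
rm -rf "$demo_dir"
```

Run against the real directory, the same `find` invocation (with `~/jan/threads` in place of `$demo_dir`) lists exactly the files step 2 asks you to remove.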
@ -1,38 +0,0 @@
---
title: Undefined Issue
sidebar_position: 3
description: A step-by-step guide to resolve errors when a variable or object is not defined.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    troubleshooting,
    undefined issue,
  ]
---

<head>
  <title>Undefined Issue</title>
  <meta name="description" content="A step-by-step guide to resolve errors when a variable or object is not defined in Jan. Learn how to troubleshoot and fix issues related to undefined variables or objects caused by Nitro tool or other internal processes."/>
  <meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model, troubleshooting, undefined issue"/>
  <meta property="og:title" content="Undefined Issue"/>
  <meta property="og:description" content="A step-by-step guide to resolve errors when a variable or object is not defined in Jan. Learn how to troubleshoot and fix issues related to undefined variables or objects caused by Nitro tool or other internal processes."/>
  <meta property="og:url" content="https://jan.ai/undefined-issue"/>
  <meta name="twitter:card" content="summary"/>
  <meta name="twitter:title" content="Undefined Issue"/>
  <meta name="twitter:description" content="A step-by-step guide to resolve errors when a variable or object is not defined in Jan. Learn how to troubleshoot and fix issues related to undefined variables or objects caused by Nitro tool or other internal processes."/>
</head>

An `undefined issue` in Jan is caused by errors related to the Nitro tool or other internal processes. It can be resolved through the following steps:

1. Clear the Jan folder, then reopen the application to determine whether the problem persists.
2. Manually run the Nitro tool located at `~/jan/extensions/@janhq/inference-nitro-extensions/dist/bin/(your-os)/nitro` to check for error messages.
3. Address any Nitro error messages that are identified and reassess whether the issue persists.
4. Reopen Jan to determine whether the problem has been resolved.
5. If the issue persists, please share the [app logs](https://jan.ai/troubleshooting/how-to-get-error-logs/) via [Jan Discord](https://discord.gg/mY69SZaMaC) for further assistance and troubleshooting.
@ -1,36 +0,0 @@
---
title: Unexpected Token
sidebar_position: 2
description: A step-by-step guide to correct syntax errors caused by invalid JSON in the code.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    troubleshooting,
    unexpected token,
  ]
---

<head>
  <title>Unexpected Token</title>
  <meta name="description" content="A step-by-step guide to correct syntax errors caused by invalid JSON in the code. Learn how to troubleshoot and fix issues related to unexpected token errors when initiating a chat with OpenAI models."/>
  <meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model, troubleshooting, unexpected token"/>
  <meta property="og:title" content="Unexpected Token"/>
  <meta property="og:description" content="A step-by-step guide to correct syntax errors caused by invalid JSON in the code. Learn how to troubleshoot and fix issues related to unexpected token errors when initiating a chat with OpenAI models."/>
  <meta property="og:url" content="https://jan.ai/unexpected-token"/>
  <meta name="twitter:card" content="summary"/>
  <meta name="twitter:title" content="Unexpected Token"/>
  <meta name="twitter:description" content="A step-by-step guide to correct syntax errors caused by invalid JSON in the code. Learn how to troubleshoot and fix issues related to unexpected token errors when initiating a chat with OpenAI models."/>
</head>

The `Unexpected token` error when initiating a chat with OpenAI models is mainly caused by either your OpenAI key or the region you access OpenAI from. It can be solved through the following steps:

1. Obtain an OpenAI API key from [OpenAI's developer platform](https://platform.openai.com/) and integrate it into your application.

2. Connecting through a VPN can resolve the issue, especially if it is related to region locking for accessing OpenAI services. A VPN may let you bypass such restrictions and successfully initiate chats with OpenAI models.
@ -1,22 +0,0 @@
---
title: Extensions
slug: /guides/extensions/
sidebar_position: 5
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    build extension,
  ]
---

import DocCardList from "@theme/DocCardList";

<DocCardList />
Before Width: | Height: | Size: 83 KiB |
Before Width: | Height: | Size: 88 KiB |
@ -1,7 +1,8 @@
---
title: Extension Setup
title: What are Jan Extensions?
slug: /extensions
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
sidebar_position: 1
description: Dive into the available extensions and configure them.
keywords:
  [
    Jan AI,
@ -12,24 +13,11 @@ keywords:
    conversational AI,
    no-subscription fee,
    large language model,
    extension settings,
    Jan Extensions,
    Extensions,
  ]
---

<head>
  <title>Configuring Extension Settings in Jan AI - User Guide</title>
  <meta charSet="utf-8" />
  <meta name="description" content="Learn how to configure settings for default extensions in Jan AI, including how to enable/disable extensions and modify their configurations." />
  <meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model, extension settings" />
  <meta name="twitter:card" content="summary" />
  <link rel="canonical" href="https://jan.ai/guides/using-extensions/extension-settings/" />
  <meta property="og:title" content="Configuring Extension Settings in Jan AI - User Guide" />
  <meta property="og:description" content="Learn how to configure settings for default extensions in Jan AI, including how to enable/disable extensions and modify their configurations." />
  <meta property="og:url" content="https://jan.ai/guides/using-extensions/extension-settings/" />
  <meta property="og:type" content="article" />
  <meta property="og:image" content="https://jan.ai/img/og-image.png" />
</head>

The current Jan Desktop Client ships with several default extensions built on top of this framework to enhance the user experience. This guide lists the default extensions and shows how to configure their settings.

## Default Extensions
@ -149,6 +137,24 @@ To configure extension settings:
}
```

## Import Custom Extension

:::note
Currently, Jan only supports official extensions, which can be downloaded directly in Extension Settings. We plan to support third-party extensions in the future.
:::

For now, you can import a third-party extension at your own risk by following the steps below:

1. Navigate to **Settings** > **Extensions** > Click **Select** under **Manual Installation**.
2. The `~/jan/extensions/extensions.json` file will then be updated automatically.

:::caution

You need to prepare the extension file in `.tgz` format to install a **non-default** extension.

:::
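A `.tgz` archive of this kind can be produced with `tar`. The example below is hypothetical: `my-extension` and its `package.json` are placeholders standing in for a real extension directory, not an actual Jan extension:

```shell
# Hypothetical example: package an extension directory as a .tgz for manual installation.
work=$(mktemp -d)
mkdir -p "$work/my-extension"
printf '{ "name": "my-extension", "version": "0.0.1" }\n' > "$work/my-extension/package.json"
tar -czf my-extension.tgz -C "$work" my-extension
ls my-extension.tgz
```

The resulting `my-extension.tgz` is the kind of file the Manual Installation dialog expects you to select.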

:::info[Assistance and Support]

If you have questions, please join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.
@ -1,36 +0,0 @@
---
title: Import Extensions
sidebar_position: 2
description: A step-by-step guide on how to import extensions.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    import extensions,
  ]
---

Besides the default extensions, you can import extensions into Jan by following the steps below:

1. Navigate to **Settings** > **Extensions** > Click **Select** under **Manual Installation**.
2. The `~/jan/extensions/extensions.json` file will then be updated automatically.

:::caution

You need to prepare the extension file in `.tgz` format to install a **non-default** extension.

:::

:::info[Assistance and Support]

If you have questions, please join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.

:::
@ -1,111 +0,0 @@
---
title: FAQs
slug: /guides/faqs
sidebar_position: 12
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    acknowledgements,
    third-party libraries,
  ]
---

<head>
  <title>FAQs - Jan Guides</title>
  <meta name="description" content="Frequently asked questions (FAQs) about Jan AI, covering general issues, download and installation issues, technical issues and solutions, compatibility and support, development and features, and troubleshooting."/>
  <meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model, FAQs, troubleshooting, technical issues, compatibility, development"/>
  <meta property="og:title" content="FAQs - Jan Guides"/>
  <meta property="og:description" content="Frequently asked questions (FAQs) about Jan AI, covering general issues, download and installation issues, technical issues and solutions, compatibility and support, development and features, and troubleshooting."/>
  <meta property="og:url" content="https://jan.ai/guides/faqs"/>
  <meta name="twitter:card" content="summary"/>
  <meta name="twitter:title" content="FAQs - Jan Guides"/>
  <meta name="twitter:description" content="Frequently asked questions (FAQs) about Jan AI, covering general issues, download and installation issues, technical issues and solutions, compatibility and support, development and features, and troubleshooting."/>
</head>

## General Issues

- **Why can't I download models like Pandora 11B Q4 and Solar Instruct 10.7B Q4?**
  - These models might have been removed or taken down. Please check the [Pre-configured Models](models-list.mdx) for the latest updates on model availability.

- **Why does Jan display "Apologies, something's amiss" when I try to run it?**
  - This issue may arise if you're using an older Intel chip that does not fully support the AVX instructions required for running AI models. Upgrading your hardware may resolve this issue.

- **How can I use Jan in Russia?**
  - To use Jan in Russia, a VPN or [HTTPS - Proxy](./advanced-settings/http-proxy.mdx) is recommended to bypass any regional restrictions that might be in place.

- **I'm experiencing an error on startup from Nitro. What should I do?**
  - If you encounter errors with Nitro, try switching the path to the version 12-0 Nitro executable. This adjustment can help resolve path-related issues.

## Download and Installation Issues

- **What does "Error occurred: Unexpected token" mean?**
  - This error usually indicates a problem with your internet connection or that your access to certain resources is being blocked. Using a VPN or [HTTPS - Proxy](./advanced-settings/http-proxy.mdx) can help avoid these issues by providing a secure and unrestricted internet connection.

- **Why aren't my downloads working?**
  - If you're having trouble downloading directly through Jan, you might want to download the model separately and then import it into Jan. Detailed instructions are available [here](install.mdx).

- **Jan AI doesn't open on my Mac with an Intel processor. What can I do?**
  - Granting the user permission to the `.npm` folder can resolve permission-related issues on macOS, especially for users with Intel processors.

- **What should I do if the model download freezes?**
  - If a model download freezes, consider importing the models manually. You can find more detailed guidance in the [Manual Import](./models/import-models.mdx) article.

- **I received a message that the model GPT4 does not exist or I do not have access. What should I do?**
  - This message typically means you need to top up your credit with OpenAI or check your access permissions for the model.

- **I can't download models from "Explore the Hub." What's the solution?**
  - Uninstalling Jan, clearing the cache, and reinstalling it following the guide provided [here](install.mdx) may help. Also, consider downloading the `.gguf` model via a browser as an alternative approach.

## Technical Issues and Solutions

- **How can I download models with a socks5 proxy or import a local model file?**
  - Nightly builds of Jan offer support for downloading models with socks5 proxies or importing local model files.

- **My device shows no GPU usage and lacks a Settings folder. What should I do?**
  - Using the nightly builds of Jan can address issues related to GPU usage and the absence of a Settings folder, as these builds contain the latest fixes and features.

- **Why does Jan display a toast message saying a model is loaded when it is not actually loaded?**
  - This issue can be resolved by downloading the `.gguf` file from Hugging Face and replacing it in the model folder. This ensures the correct model is loaded.

- **How to enable CORS when running Nitro?**
  - By default, CORS (Cross-Origin Resource Sharing) is disabled when running Nitro. Enabling CORS can be necessary for certain operations and integrations. Check the official documentation for instructions on how to enable CORS if your workflow requires it.

## Compatibility and Support

- **How to use an AMD GPU with Jan?**
  - Jan now supports AMD GPUs through Vulkan. This enhancement allows users with AMD graphics cards to leverage GPU acceleration, improving performance for AI model computations.

- **Is Jan available for Android or iOS?**
  - Jan is primarily focused on the Desktop app and does not currently offer mobile apps for Android or iOS. The development team is concentrating on enhancing the desktop experience.

## Development and Features

- **Does Jan support Safetensors?**
  - At the moment, Jan only supports GGUF. However, there are plans to support `.safetensor` files in the future.

- **I hope to customize the installation path of each model. Is that possible?**
  - Yes, you can customize the installation path. Please see [here](https://jan.ai/guides/advanced-settings/#access-the-jan-data-folder) for more information.

## Troubleshooting

- **What should I do if there's high CPU usage while Jan is idle?**
  - If you notice high CPU usage while Jan is idle, consider using the nightly builds of Jan.

- **What does the error "Failed to fetch" mean, and how can I fix it?**
  - The "Failed to fetch" error typically occurs due to network issues or restrictions. Using the nightly builds of Jan may help overcome these issues by providing updated fixes and features.

- **What should I do if "Failed to fetch" occurs on a MacBook Pro with Intel HD Graphics 4000 1536 MB?**
  - Ensure that the model size is less than 90% of your available VRAM and that the VRAM is accessible to the app. Managing the resources effectively can help mitigate this issue.

:::info[Assistance and Support]

If you have questions, please join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.

:::
BIN
docs/docs/guides/get-started/asset/download.gif
Normal file
After Width: | Height: | Size: 20 MiB |
BIN
docs/docs/guides/get-started/asset/gpt.gif
Normal file
After Width: | Height: | Size: 4.3 MiB |
BIN
docs/docs/guides/get-started/asset/model.gif
Normal file
After Width: | Height: | Size: 5.4 MiB |
24
docs/docs/guides/get-started/hardware-setup.mdx
Normal file
@ -0,0 +1,24 @@
---
title: Hardware Setup
slug: /guides/hardware
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
sidebar_position: 3
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    hardware requirements,
    Nvidia,
    AMD,
    CPU,
    GPU
  ]
---

Coming Soon
19
docs/docs/guides/get-started/overview.mdx
Normal file
@ -0,0 +1,19 @@
|
||||
---
|
||||
title: Overview
|
||||
slug: /guides
|
||||
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
|
||||
sidebar_position: 1
|
||||
keywords:
|
||||
[
|
||||
Jan AI,
|
||||
Jan,
|
||||
ChatGPT alternative,
|
||||
local AI,
|
||||
private AI,
|
||||
conversational AI,
|
||||
no-subscription fee,
|
||||
large language model,
|
||||
]
|
||||
---
|
||||
|
||||
Coming Soon
|
||||
259
docs/docs/guides/get-started/quickstart.mdx
Normal file
@ -0,0 +1,259 @@
|
||||
---
|
||||
title: Quickstart
|
||||
slug: /guides/quickstart
|
||||
description: Get started quickly with Jan, a ChatGPT-alternative that runs on your own computer, with a local API server. Learn how to install Jan and select an AI model to start chatting.
|
||||
sidebar_position: 2
|
||||
keywords:
|
||||
[
|
||||
Jan AI,
|
||||
Jan,
|
||||
ChatGPT alternative,
|
||||
local AI,
|
||||
private AI,
|
||||
conversational AI,
|
||||
no-subscription fee,
|
||||
large language model,
|
||||
quickstart,
|
||||
getting started,
|
||||
using AI model,
|
||||
installation
|
||||
]
|
||||
---
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
import download from './asset/download.gif';
|
||||
import gpt from './asset/gpt.gif';
|
||||
import model from './asset/model.gif';
|
||||
|
||||
To get started quickly with Jan, follow the steps below:
|
||||
## Step 1: Get Jan Desktop
|
||||
<Tabs>
|
||||
<TabItem value="mac" label = "Mac" default>
|
||||
|
||||
#### Pre-requisites
|
||||
Before installing Jan, ensure that:
|
||||
- You have a Mac with an Apple Silicon Processor.
|
||||
- Homebrew and its dependencies are installed (if you plan to install Jan with the Homebrew package).
|
||||
- Your macOS version is 10.15 or higher.
|
||||
|
||||
#### Stable Releases
|
||||
|
||||
To download stable releases, go to [Jan.ai](https://jan.ai/) > select **Download for Mac**.
|
||||
|
||||
The download should be available as a `.dmg`.
|
||||
|
||||
#### Nightly Releases
|
||||
|
||||
We provide the Nightly Release so that you can test new features and see what might be coming in a future stable release. Please be aware that there might be bugs!
|
||||
|
||||
You can download it from [Jan's Discord](https://discord.gg/FTk2MvZwJH) in the [`#nightly-builds`](https://discord.gg/q8szebnxZ7) channel.
|
||||
|
||||
#### Experimental Mode
|
||||
|
||||
To enable experimental mode, go to **Settings** > **Advanced Settings** and toggle **Experimental Mode**.
|
||||
|
||||
#### Install with Homebrew
|
||||
Install Jan with the following Homebrew command:
|
||||
|
||||
```sh
|
||||
brew install --cask jan
|
||||
```
|
||||
|
||||
:::warning
|
||||
|
||||
Homebrew package installation is currently limited to **Apple Silicon Macs**, with upcoming support for Windows and Linux.
|
||||
|
||||
:::
|
||||
|
||||
</TabItem>
|
||||
<TabItem value = "windows" label = "Windows">
|
||||
|
||||
#### Pre-requisites
|
||||
Ensure that your system meets the following requirements:
|
||||
- Windows 10 or higher is required to run Jan.
|
||||
|
||||
To enable GPU support, you will need:
|
||||
- NVIDIA GPU with CUDA Toolkit 11.7 or higher
|
||||
- NVIDIA driver 470.63.01 or higher
|
||||
|
||||
#### Stable Releases
|
||||
|
||||
To download stable releases, go to [Jan.ai](https://jan.ai/) > select **Download for Windows**.
|
||||
|
||||
The download should be available as a `.exe` file.
|
||||
|
||||
#### Nightly Releases
|
||||
|
||||
We provide the Nightly Release so that you can test new features and see what might be coming in a future stable release. Please be aware that there might be bugs!
|
||||
|
||||
You can download it from [Jan's Discord](https://discord.gg/FTk2MvZwJH) in the [`#nightly-builds`](https://discord.gg/q8szebnxZ7) channel.
|
||||
|
||||
#### Experimental Mode
|
||||
|
||||
To enable experimental mode, go to **Settings** > **Advanced Settings** and toggle **Experimental Mode**.
|
||||
|
||||
#### Default Installation Directory
|
||||
|
||||
By default, Jan is installed in the following directory:
|
||||
|
||||
```sh
|
||||
# Default installation directory
|
||||
C:\Users\{username}\AppData\Local\Programs\Jan
|
||||
```
|
||||
|
||||
:::warning
|
||||
|
||||
If you are stuck in a broken build, go to the [Broken Build](/guides/common-error/broken-build) section of Common Errors.
|
||||
|
||||
:::
|
||||
|
||||
</TabItem>
|
||||
<TabItem value = "linux" label = "Linux">
|
||||
|
||||
#### Pre-requisites
|
||||
Ensure that your system meets the following requirements:
|
||||
- glibc 2.27 or higher (check with `ldd --version`)
|
||||
- gcc 11, g++ 11, and cpp 11 or higher
|
||||
|
||||
To enable GPU support, you will need:
|
||||
- NVIDIA GPU with CUDA Toolkit 11.7 or higher
|
||||
- NVIDIA driver 470.63.01 or higher
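The version checks above (for example `ldd --version` for glibc) can be automated with a small comparison helper (a sketch for illustration, not part of Jan):

```python
def parse_version(text: str) -> tuple:
    """Extract the trailing dotted version from output like 'ldd (GNU libc) 2.35'."""
    token = text.strip().split()[-1]
    return tuple(int(part) for part in token.split("."))

def meets_minimum(found: str, required: str) -> bool:
    """Compare dotted version strings numerically, not lexically."""
    return parse_version(found) >= parse_version(required)

print(meets_minimum("ldd (GNU libc) 2.35", "2.27"))  # True
```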
|
||||
|
||||
#### Stable Releases
|
||||
|
||||
To download stable releases, go to [Jan.ai](https://jan.ai/) > select **Download for Linux**.
|
||||
|
||||
The download should be available as a `.AppImage` file or a `.deb` file.
|
||||
|
||||
#### Nightly Releases
|
||||
|
||||
We provide the Nightly Release so that you can test new features and see what might be coming in a future stable release. Please be aware that there might be bugs!
|
||||
|
||||
You can download it from [Jan's Discord](https://discord.gg/FTk2MvZwJH) in the [`#nightly-builds`](https://discord.gg/q8szebnxZ7) channel.
|
||||
|
||||
#### Experimental Mode
|
||||
|
||||
To enable experimental mode, go to **Settings** > **Advanced Settings** and toggle **Experimental Mode**.
|
||||
|
||||
<Tabs groupId = "linux_type">
|
||||
<TabItem value="linux_main" label = "Linux">
|
||||
|
||||
To install Jan, use your package manager's install command or `dpkg`.
|
||||
|
||||
</TabItem>
|
||||
<TabItem value = "deb_ub" label = "Debian / Ubuntu">
|
||||
|
||||
To install Jan, run the following command:
|
||||
|
||||
```sh
|
||||
# Install Jan using dpkg
|
||||
sudo dpkg -i jan-linux-amd64-{version}.deb
|
||||
|
||||
# Install Jan using apt-get
|
||||
sudo apt-get install ./jan-linux-amd64-{version}.deb
|
||||
# where jan-linux-amd64-{version}.deb is the path to the Jan package
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value = "other" label = "Others">
|
||||
|
||||
To install Jan, run the following commands:
|
||||
|
||||
```sh
|
||||
# Install Jan using AppImage
|
||||
chmod +x jan-linux-x86_64-{version}.AppImage
|
||||
./jan-linux-x86_64-{version}.AppImage
|
||||
# where jan-linux-x86_64-{version}.AppImage is the path to the Jan package
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
:::warning
|
||||
|
||||
If you are stuck in a broken build, go to the [Broken Build](/guides/common-error/broken-build) section of Common Errors.
|
||||
|
||||
:::
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
## Step 2: Download a Model
|
||||
Jan provides a variety of local AI models tailored to different needs, ready for download. These models are installed and run directly on the user's device.
|
||||
|
||||
1. Go to the **Hub**.
|
||||
2. Select the model that you would like to install. To see a model's details, click the dropdown button.
|
||||
3. Click the **Download** button.
|
||||
|
||||
<br/>
|
||||
|
||||
<div style={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<img src={download} alt="Download a Model" />
|
||||
</div>
|
||||
|
||||
<br/>
|
||||
|
||||
:::note
|
||||
|
||||
Ensure you select the appropriate model size by balancing performance, cost, and resource considerations in line with your task's specific requirements and hardware specifications.
|
||||
:::
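As a rough rule of thumb for the sizing trade-off above (an assumption for illustration, not an official Jan formula), a quantized model needs about `parameters × bits / 8` bytes of memory plus some runtime overhead:

```python
def estimated_ram_gb(params_billions: float, quant_bits: int, overhead: float = 1.2) -> float:
    """Rule-of-thumb RAM/VRAM estimate for a quantized model, with ~20% overhead."""
    bytes_total = params_billions * 1e9 * quant_bits / 8 * overhead
    return bytes_total / 1e9

# A 7B-parameter model at 4-bit quantization needs roughly 4.2 GB:
print(round(estimated_ram_gb(7, 4), 1))  # 4.2
```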
|
||||
|
||||
## Step 3: Connect to ChatGPT (Optional)
|
||||
Jan also provides access to remote models hosted on external servers, requiring an API key for connectivity. For example, to use the ChatGPT model with Jan, you must input your API key by following these steps:
|
||||
|
||||
1. Go to the **Thread** tab.
|
||||
2. Under the Model dropdown menu, select the ChatGPT model.
|
||||
3. Fill in your OpenAI API key, which you can get from the [OpenAI platform](https://platform.openai.com/account/api-keys).
|
||||
|
||||
<br/>
|
||||
|
||||
<div style={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<img src={gpt} alt="Connect to ChatGPT" />
|
||||
</div>
|
||||
|
||||
<br/>
|
||||
|
||||
## Step 4: Chat with Models
|
||||
After downloading and configuring your model, you can immediately use it in the **Thread** tab.
|
||||
|
||||
<br/>
|
||||
|
||||
<div style={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<img src={model} alt="Chat with a model" />
|
||||
</div>
|
||||
|
||||
<br/>
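Beyond the chat UI, Jan also ships a local API server (see this guide's description). The sketch below only builds an OpenAI-compatible request body; the server URL and model id are assumptions, so check your own Jan settings before sending anything:

```python
import json

# Hypothetical default address of Jan's local, OpenAI-compatible API server.
JAN_SERVER_URL = "http://localhost:1337/v1/chat/completions"

def build_chat_request(model_id: str, prompt: str) -> str:
    """Return the JSON body for an OpenAI-compatible chat completion call."""
    payload = {
        "model": model_id,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)

# "mistral-ins-7b-q4" is an example model id, not a guaranteed name:
print(build_chat_request("mistral-ins-7b-q4", "Hello, Jan!"))
```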
|
||||
|
||||
## Best Practices
|
||||
This section outlines best practices for developers, analysts, and AI enthusiasts to enhance their experience with Jan when running AI locally on their computers. Implementing these practices will optimize the performance of AI models.
|
||||
|
||||
### Follow the Quickstart Guide
|
||||
The quickstart guide above is designed to facilitate a quick setup process. It provides clear instructions and simple steps to get you up and running with Jan quickly, even if you are new to AI.
|
||||
|
||||
### Select the Right Models
|
||||
Jan offers a range of pre-configured AI models suited to different purposes. Identify the one that aligns with your objectives, considering factors such as:
|
||||
- Capabilities
|
||||
- Accuracy
|
||||
- Processing Speed
|
||||
|
||||
:::note
|
||||
- Some of these factors also depend on your hardware; please see the [Hardware Setup](/guides/hardware) guide.
|
||||
- Choosing the right model is important for achieving the best performance.
|
||||
:::
|
||||
|
||||
### Setting up Jan
|
||||
Ensure that you familiarize yourself with the Jan application. Jan offers advanced settings that you can adjust. These settings may influence how your AI behaves locally. Please see the [Advanced Settings](./guides/advanced) article for a complete list of Jan's configurations and instructions on how to configure them.
|
||||
|
||||
### Integrations
|
||||
Jan can work with many different systems and tools. Whether you are incorporating Jan.ai with any open-source LLM provider or other tools, it is important to understand the integration capabilities and limitations.
|
||||
|
||||
### Mastering the Prompt Engineering
|
||||
Prompt engineering is an important skill for getting the desired outputs from AI models. Mastering it can significantly enhance the performance and responses of the AI. Below are some prompt engineering tips:
|
||||
- Ask the model to adopt a persona
|
||||
- Be specific; detailed prompts get more specific answers
|
||||
- Provide examples, reference text, or context at the beginning
|
||||
- Use clear and concise language
|
||||
- Use certain keywords and phrases
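Several of these tips map directly onto the common chat-message format; the sketch below is illustrative and assumes the OpenAI-style schema rather than any Jan-specific API:

```python
def build_messages(persona: str, context: str, question: str) -> list:
    """Assemble a chat prompt that applies the persona and context tips above."""
    return [
        # Ask the model to adopt a persona:
        {"role": "system", "content": f"You are {persona}."},
        # Provide reference text or context at the beginning:
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]

msgs = build_messages(
    "a concise technical writer",
    "Jan runs AI models locally on your computer.",
    "Summarize what Jan does in one sentence.",
)
print(len(msgs))  # 2
```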
|
||||
|
||||
## Pre-configured Models
|
||||
To see the full list of Jan's pre-configured models, please see our official GitHub [here](https://github.com/janhq/jan).
|
||||
19
docs/docs/guides/get-started/settingup-gpu.mdx
Normal file
@ -0,0 +1,19 @@
|
||||
---
|
||||
title: Setting Up GPU
|
||||
slug: /guides/hardware/gpu
|
||||
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
|
||||
sidebar_position: 1
|
||||
keywords:
|
||||
[
|
||||
Jan AI,
|
||||
Jan,
|
||||
ChatGPT alternative,
|
||||
local AI,
|
||||
private AI,
|
||||
conversational AI,
|
||||
no-subscription fee,
|
||||
large language model,
|
||||
]
|
||||
---
|
||||
|
||||
Coming Soon
|
||||
19
docs/docs/guides/inference/overview-inference.mdx
Normal file
@ -0,0 +1,19 @@
|
||||
---
|
||||
title: Overview
|
||||
slug: /guides/engines
|
||||
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
|
||||
sidebar_position: 12
|
||||
keywords:
|
||||
[
|
||||
Jan AI,
|
||||
Jan,
|
||||
ChatGPT alternative,
|
||||
local AI,
|
||||
private AI,
|
||||
conversational AI,
|
||||
no-subscription fee,
|
||||
large language model,
|
||||
]
|
||||
---
|
||||
|
||||
Coming Soon
|
||||
@ -1,296 +0,0 @@
|
||||
---
|
||||
title: Installation
|
||||
sidebar_position: 2
|
||||
hide_table_of_contents: true
|
||||
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
|
||||
keywords:
|
||||
[
|
||||
Jan AI,
|
||||
Jan,
|
||||
ChatGPT alternative,
|
||||
local AI,
|
||||
private AI,
|
||||
conversational AI,
|
||||
no-subscription fee,
|
||||
large language model,
|
||||
]
|
||||
---
|
||||
|
||||
<head>
|
||||
<title>Installation - Jan Guides</title>
|
||||
<meta name="description" content="Instructions for installing Jan AI on Mac, Windows, Linux, and Docker, along with prerequisites and considerations for each platform."/>
|
||||
<meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model, installation, macOS, Windows, Linux, Docker"/>
|
||||
<meta property="og:title" content="Installation - Jan Guides"/>
|
||||
<meta property="og:description" content="Instructions for installing Jan AI on Mac, Windows, Linux, and Docker, along with prerequisites and considerations for each platform."/>
|
||||
<meta property="og:url" content="https://jan.ai/guides/installation"/>
|
||||
<meta name="twitter:card" content="summary"/>
|
||||
<meta name="twitter:title" content="Installation - Jan Guides"/>
|
||||
<meta name="twitter:description" content="Instructions for installing Jan AI on Mac, Windows, Linux, and Docker, along with prerequisites and considerations for each platform."/>
|
||||
</head>
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
import installImageURL from './assets/jan-ai-download.png';
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="mac" label = "Mac" default>
|
||||
|
||||
### Pre-requisites
|
||||
Before installing Jan, ensure :
|
||||
- You have a Mac with an Apple Silicon Processor.
|
||||
- Homebrew and its dependencies are installed. (for Installing Jan with Homebrew Package)
|
||||
- Your macOS version is 10.15 or higher.
|
||||
|
||||
### Stable Releases
|
||||
|
||||
To download stable releases, go to [Jan.ai](https://jan.ai/) > select **Download for Mac**.
|
||||
|
||||
The download should be available as a `.dmg`.
|
||||
|
||||
### Nightly Releases
|
||||
|
||||
We provide the Nightly Release so that you can test new features and see what might be coming in a future stable release. Please be aware that there might be bugs!
|
||||
|
||||
You can download it from [Jan's Discord](https://discord.gg/FTk2MvZwJH) in the [`#nightly-builds`](https://discord.gg/q8szebnxZ7) channel.
|
||||
|
||||
### Experimental Model
|
||||
|
||||
To enable the experimental mode, go to **Settings** > **Advanced Settings** and toggle the **Experimental Mode**
|
||||
|
||||
### Install with Homebrew
|
||||
Install Jan with the following Homebrew command:
|
||||
|
||||
```brew
|
||||
brew install --cask jan
|
||||
```
|
||||
|
||||
:::warning
|
||||
|
||||
Homebrew package installation is currently limited to **Apple Silicon Macs**, with upcoming support for Windows and Linux.
|
||||
|
||||
:::
|
||||
|
||||
</TabItem>
|
||||
<TabItem value = "windows" label = "Windows">
|
||||
|
||||
### Pre-requisites
|
||||
Ensure that your system meets the following requirements:
|
||||
- Windows 10 or higher is required to run Jan.
|
||||
|
||||
To enable GPU support, you will need:
|
||||
- NVIDIA GPU with CUDA Toolkit 11.7 or higher
|
||||
- NVIDIA driver 470.63.01 or higher
|
||||
|
||||
### Stable Releases
|
||||
|
||||
To download stable releases, go to [Jan.ai](https://jan.ai/) > select **Download for Windows**.
|
||||
|
||||
The download should be available as a `.exe` file.
|
||||
|
||||
### Nightly Releases
|
||||
|
||||
We provide the Nightly Release so that you can test new features and see what might be coming in a future stable release. Please be aware that there might be bugs!
|
||||
|
||||
You can download it from [Jan's Discord](https://discord.gg/FTk2MvZwJH) in the [`#nightly-builds`](https://discord.gg/q8szebnxZ7) channel.
|
||||
|
||||
### Experimental Model
|
||||
|
||||
To enable the experimental mode, go to **Settings** > **Advanced Settings** and toggle the **Experimental Mode**
|
||||
|
||||
### Default Installation Directory
|
||||
|
||||
By default, Jan is installed in the following directory:
|
||||
|
||||
```sh
|
||||
# Default installation directory
|
||||
C:\Users\{username}\AppData\Local\Programs\Jan
|
||||
```
|
||||
|
||||
:::warning
|
||||
|
||||
If you are stuck in a broken build, go to the [Broken Build](/guides/common-error/broken-build) section of Common Errors.
|
||||
|
||||
:::
|
||||
|
||||
</TabItem>
|
||||
<TabItem value = "linux" label = "Linux">
|
||||
|
||||
### Pre-requisites
|
||||
Ensure that your system meets the following requirements:
|
||||
- glibc 2.27 or higher (check with `ldd --version`)
|
||||
- gcc 11, g++ 11, cpp 11, or higher, refer to this link for more information.
|
||||
|
||||
To enable GPU support, you will need:
|
||||
- NVIDIA GPU with CUDA Toolkit 11.7 or higher
|
||||
- NVIDIA driver 470.63.01 or higher
|
||||
|
||||
### Stable Releases
|
||||
|
||||
To download stable releases, go to [Jan.ai](https://jan.ai/) > select **Download for Linux**.
|
||||
|
||||
The download should be available as a `.AppImage` file or a `.deb` file.
|
||||
|
||||
### Nightly Releases
|
||||
|
||||
We provide the Nightly Release so that you can test new features and see what might be coming in a future stable release. Please be aware that there might be bugs!
|
||||
|
||||
You can download it from [Jan's Discord](https://discord.gg/FTk2MvZwJH) in the [`#nightly-builds`](https://discord.gg/q8szebnxZ7) channel.
|
||||
|
||||
### Experimental Model
|
||||
|
||||
To enable the experimental mode, go to **Settings** > **Advanced Settings** and toggle the **Experimental Mode**
|
||||
|
||||
<Tabs groupId = "linux_type">
|
||||
<TabItem value="linux_main" label = "Linux">
|
||||
|
||||
To install Jan, you should use your package manager's install or `dpkg`.
|
||||
|
||||
</TabItem>
|
||||
<TabItem value = "deb_ub" label = "Debian / Ubuntu">
|
||||
|
||||
To install Jan, run the following command:
|
||||
|
||||
```sh
|
||||
# Install Jan using dpkg
|
||||
sudo dpkg -i jan-linux-amd64-{version}.deb
|
||||
|
||||
# Install Jan using apt-get
|
||||
sudo apt-get install ./jan-linux-amd64-{version}.deb
|
||||
# where jan-linux-amd64-{version}.deb is path to the Jan package
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value = "other" label = "Others">
|
||||
|
||||
To install Jan, run the following commands:
|
||||
|
||||
```sh
|
||||
# Install Jan using AppImage
|
||||
chmod +x jan-linux-x86_64-{version}.AppImage
|
||||
./jan-linux-x86_64-{version}.AppImage
|
||||
# where jan-linux-x86_64-{version}.AppImage is path to the Jan package
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
:::warning
|
||||
|
||||
If you are stuck in a broken build, go to the [Broken Build](/guides/common-error/broken-build) section of Common Errors.
|
||||
|
||||
:::
|
||||
</TabItem>
|
||||
<TabItem value="docker" label = "Docker" default>
|
||||
|
||||
### Pre-requisites
|
||||
Ensure that your system meets the following requirements:
|
||||
- Linux or WSL2 Docker
|
||||
- Latest Docker Engine and Docker Compose
|
||||
|
||||
To enable GPU support, you will need:
|
||||
- `nvidia-driver`
|
||||
- `nvidia-docker2`
|
||||
|
||||
:::note
|
||||
- If you have not installed Docker, follow the instructions [here](https://docs.docker.com/engine/install/ubuntu/).
|
||||
- If you have not installed the required file for GPU support, follow the instructions [here](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
|
||||
:::
|
||||
|
||||
### Run Jan in Docker
|
||||
You can run Jan in Docker with two methods:
|
||||
1. Run Jan in CPU mode
|
||||
2. Run Jan in GPU mode
|
||||
<Tabs groupId = "ldocker_type">
|
||||
<TabItem value="docker_cpu" label = "CPU">
|
||||
|
||||
To run Jan in Docker CPU mode, by using the following code:
|
||||
|
||||
```bash
|
||||
# cpu mode with default file system
|
||||
docker compose --profile cpu-fs up -d
|
||||
|
||||
# cpu mode with S3 file system
|
||||
docker compose --profile cpu-s3fs up -d
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="docker_gpu" label = "GPU">
|
||||
|
||||
To run Jan in Docker CPU mode, follow the steps below:
|
||||
1. Check CUDA compatibility with your NVIDIA driver by running nvidia-smi and check the CUDA version in the output as shown below:
|
||||
```sh
|
||||
nvidia-smi
|
||||
|
||||
# Output
|
||||
+---------------------------------------------------------------------------------------+
|
||||
| NVIDIA-SMI 531.18 Driver Version: 531.18 CUDA Version: 12.1 |
|
||||
|-----------------------------------------+----------------------+----------------------+
|
||||
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
|
||||
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|
||||
| | | MIG M. |
|
||||
|=========================================+======================+======================|
|
||||
| 0 NVIDIA GeForce RTX 4070 Ti WDDM | 00000000:01:00.0 On | N/A |
|
||||
| 0% 44C P8 16W / 285W| 1481MiB / 12282MiB | 2% Default |
|
||||
| | | N/A |
|
||||
+-----------------------------------------+----------------------+----------------------+
|
||||
| 1 NVIDIA GeForce GTX 1660 Ti WDDM | 00000000:02:00.0 Off | N/A |
|
||||
| 0% 49C P8 14W / 120W| 0MiB / 6144MiB | 0% Default |
|
||||
| | | N/A |
|
||||
+-----------------------------------------+----------------------+----------------------+
|
||||
| 2 NVIDIA GeForce GTX 1660 Ti WDDM | 00000000:05:00.0 Off | N/A |
|
||||
| 29% 38C P8 11W / 120W| 0MiB / 6144MiB | 0% Default |
|
||||
| | | N/A |
|
||||
+-----------------------------------------+----------------------+----------------------+
|
||||
|
||||
+---------------------------------------------------------------------------------------+
|
||||
| Processes: |
|
||||
| GPU GI CI PID Type Process name GPU Memory |
|
||||
| ID ID Usage |
|
||||
|=======================================================================================|
|
||||
```
|
||||
2. Visit [NVIDIA NGC Catalog](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/cuda/tags) and find the smallest minor version of image tag that matches your CUDA version (e.g., 12.1 -> 12.1.0)
|
||||
3. Update the `Dockerfile.gpu` line number 5 with the latest minor version of the image tag from step 2 (e.g. change `FROM nvidia/cuda:12.2.0-runtime-ubuntu22.04 AS base` to `FROM nvidia/cuda:12.1.0-runtime-ubuntu22.04 AS base`)
|
||||
4. Run Jan in GPU mode by using the following command:
|
||||
|
||||
```bash
|
||||
# GPU mode with default file system
|
||||
docker compose --profile gpu-fs up -d
|
||||
|
||||
# GPU mode with S3 file system
|
||||
docker compose --profile gpu-s3fs up -d
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
### Docker Compose Profile and Environment
|
||||
The available Docker Compose profile and the environment variables listed below:
|
||||
|
||||
#### Docker Compose Profile
|
||||
|
||||
| Profile | Description |
|
||||
|-----------|-------------------------------------------|
|
||||
| cpu-fs | Run Jan in CPU mode with default file system |
|
||||
| cpu-s3fs | Run Jan in CPU mode with S3 file system |
|
||||
| gpu-fs | Run Jan in GPU mode with default file system |
|
||||
| gpu-s3fs | Run Jan in GPU mode with S3 file system |
|
||||
|
||||
#### Environment Variables
|
||||
|
||||
| Environment Variable | Description |
|
||||
|--------------------------|------------------------------------------------------------|
|
||||
| S3_BUCKET_NAME | S3 bucket name - leave blank for default file system |
|
||||
| AWS_ACCESS_KEY_ID | AWS access key ID - leave blank for default file system |
|
||||
| AWS_SECRET_ACCESS_KEY | AWS secret access key - leave blank for default file system|
|
||||
| AWS_ENDPOINT | AWS endpoint URL - leave blank for default file system |
|
||||
| AWS_REGION | AWS region - leave blank for default file system |
|
||||
| API_BASE_URL | Jan Server URL, please modify it as your public ip address or domain name default http://localhost:1377 |
|
||||
|
||||
|
||||
:::warning
|
||||
|
||||
If you are stuck in a broken build, go to the [Broken Build](/guides/common-error/broken-build/) section of Common Errors.
|
||||
|
||||
:::
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
95
docs/docs/guides/installation/README.mdx
Normal file
@ -0,0 +1,95 @@
|
||||
---
|
||||
title: Installation
|
||||
sidebar_position: 4
|
||||
slug: /guides/install/
|
||||
hide_table_of_contents: true
|
||||
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
|
||||
keywords:
|
||||
[
|
||||
Jan AI,
|
||||
Jan,
|
||||
ChatGPT alternative,
|
||||
local AI,
|
||||
private AI,
|
||||
conversational AI,
|
||||
no-subscription fee,
|
||||
large language model,
|
||||
]
|
||||
---
|
||||
|
||||
## Device Compatibility
|
||||
Jan is compatible with macOS, Windows, and Linux, making it accessible for a wide range of users. This compatibility allows users to leverage Jan's AI tools effectively, regardless of their device or operating system.
|
||||
|
||||
:::note
|
||||
For detailed system requirements and setup instructions, refer to our [Hardware Setup](/guides/hardware/) guide.
|
||||
:::
|
||||
|
||||
import DocCardList from "@theme/DocCardList";
|
||||
|
||||
<DocCardList />
|
||||
|
||||
|
||||
## Install Server-Side
|
||||
To install Jan from source, follow the steps below:
|
||||
|
||||
### Pre-requisites
|
||||
Before proceeding with the installation of Jan from source, ensure that the following software versions are installed on your system:
|
||||
|
||||
- Node.js version 20.0.0 or higher
|
||||
- Yarn version 1.22.0 or higher
|
||||
|
||||
### Install Jan Development Build
|
||||
1. Clone the Jan repository from GitHub by using the following command:
|
||||
```bash
|
||||
git clone https://github.com/janhq/jan
cd jan
git checkout DESIRED_BRANCH
|
||||
```
|
||||
|
||||
2. Install the required dependencies by using the following Yarn command:
|
||||
```bash
|
||||
yarn install
|
||||
|
||||
# Build core module
|
||||
yarn build:core
|
||||
|
||||
# Packing base plugins
|
||||
yarn build:plugins
|
||||
|
||||
# Packing uikit
|
||||
yarn build:uikit
|
||||
```
|
||||
|
||||
3. Run the development server.
|
||||
```bash
|
||||
yarn dev
|
||||
```
|
||||
This will start the development server and open the desktop app. During this step, you may encounter notifications about installing base plugins. Simply click **OK** and **Next** to continue.
|
||||
|
||||
### Install Jan Production Build
|
||||
1. Clone the Jan repository from GitHub by using the following command:
|
||||
```bash
|
||||
git clone https://github.com/janhq/jan
|
||||
cd jan
|
||||
```
|
||||
|
||||
2. Install the required dependencies by using the following Yarn command:
|
||||
```bash
|
||||
yarn install
|
||||
|
||||
# Build core module
|
||||
yarn build:core
|
||||
|
||||
# Packing base plugins
|
||||
yarn build:plugins
|
||||
|
||||
# Packing uikit
|
||||
yarn build:uikit
|
||||
```
|
||||
|
||||
3. Build the production app.
```bash
yarn build
|
||||
```
|
||||
|
||||
This completes the installation process for Jan from source. The production-ready app for macOS can be found in the `dist` folder.
|
||||
133
docs/docs/guides/installation/docker.mdx
Normal file
@ -0,0 +1,133 @@
|
||||
---
|
||||
title: Install with Docker
|
||||
sidebar_position: 4
|
||||
slug: /guides/install/server
|
||||
hide_table_of_contents: true
|
||||
description: A step-by-step guide to install Jan using Docker.
|
||||
keywords:
|
||||
[
|
||||
Jan AI,
|
||||
Jan,
|
||||
ChatGPT alternative,
|
||||
local AI,
|
||||
private AI,
|
||||
conversational AI,
|
||||
no-subscription fee,
|
||||
large language model,
|
||||
Install on Docker,
|
||||
Docker,
|
||||
Helm,
|
||||
]
|
||||
---
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
### Pre-requisites
|
||||
Ensure that your system meets the following requirements:
|
||||
- Docker on Linux or WSL2
|
||||
- Latest Docker Engine and Docker Compose
|
||||
|
||||
To enable GPU support, you will need:
|
||||
- `nvidia-driver`
|
||||
- `nvidia-docker2`
|
||||
|
||||
:::note
|
||||
- If you have not installed Docker, follow the instructions [here](https://docs.docker.com/engine/install/ubuntu/).
|
||||
- If you have not installed the required file for GPU support, follow the instructions [here](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
|
||||
:::
|
||||
|
||||
### Run Jan in Docker
|
||||
You can run Jan in Docker with two methods:
|
||||
1. Run Jan in CPU mode
|
||||
2. Run Jan in GPU mode
|
||||
<Tabs groupId = "ldocker_type">
|
||||
<TabItem value="docker_cpu" label = "CPU">
|
||||
|
||||
To run Jan in Docker CPU mode, use the following commands:
|
||||
|
||||
```bash
|
||||
# cpu mode with default file system
|
||||
docker compose --profile cpu-fs up -d
|
||||
|
||||
# cpu mode with S3 file system
|
||||
docker compose --profile cpu-s3fs up -d
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="docker_gpu" label = "GPU">
|
||||
|
||||
To run Jan in Docker GPU mode, follow the steps below:
|
||||
1. Check CUDA compatibility with your NVIDIA driver by running `nvidia-smi` and checking the CUDA version in the output, as shown below:
|
||||
```sh
|
||||
nvidia-smi
|
||||
|
||||
# Output
|
||||
+---------------------------------------------------------------------------------------+
|
||||
| NVIDIA-SMI 531.18 Driver Version: 531.18 CUDA Version: 12.1 |
|
||||
|-----------------------------------------+----------------------+----------------------+
|
||||
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
|
||||
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|
||||
| | | MIG M. |
|
||||
|=========================================+======================+======================|
|
||||
| 0 NVIDIA GeForce RTX 4070 Ti WDDM | 00000000:01:00.0 On | N/A |
|
||||
| 0% 44C P8 16W / 285W| 1481MiB / 12282MiB | 2% Default |
|
||||
| | | N/A |
|
||||
+-----------------------------------------+----------------------+----------------------+
|
||||
| 1 NVIDIA GeForce GTX 1660 Ti WDDM | 00000000:02:00.0 Off | N/A |
|
||||
| 0% 49C P8 14W / 120W| 0MiB / 6144MiB | 0% Default |
|
||||
| | | N/A |
|
||||
+-----------------------------------------+----------------------+----------------------+
|
||||
| 2 NVIDIA GeForce GTX 1660 Ti WDDM | 00000000:05:00.0 Off | N/A |
|
||||
| 29% 38C P8 11W / 120W| 0MiB / 6144MiB | 0% Default |
|
||||
| | | N/A |
|
||||
+-----------------------------------------+----------------------+----------------------+
|
||||
|
||||
+---------------------------------------------------------------------------------------+
|
||||
| Processes: |
|
||||
| GPU GI CI PID Type Process name GPU Memory |
|
||||
| ID ID Usage |
|
||||
|=======================================================================================|
|
||||
```
|
||||
2. Visit the [NVIDIA NGC Catalog](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/cuda/tags) and find the image tag with the smallest minor version that matches your CUDA version (e.g., 12.1 -> 12.1.0)
|
||||
3. Update the `Dockerfile.gpu` line number 5 with the latest minor version of the image tag from step 2 (e.g. change `FROM nvidia/cuda:12.2.0-runtime-ubuntu22.04 AS base` to `FROM nvidia/cuda:12.1.0-runtime-ubuntu22.04 AS base`)
|
||||
4. Run Jan in GPU mode by using the following command:
|
||||
|
||||
```bash
|
||||
# GPU mode with default file system
|
||||
docker compose --profile gpu-fs up -d
|
||||
|
||||
# GPU mode with S3 file system
|
||||
docker compose --profile gpu-s3fs up -d
|
||||
```
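Picking the right image tag in steps 2-3 can also be scripted. The sketch below is an assumption, not part of the official setup: it parses the CUDA version out of an `nvidia-smi` header line and prints the matching `nvidia/cuda` tag. In practice you would pipe the real `nvidia-smi` output instead of the sample line:

```shell
# Sample nvidia-smi header line; in practice use: nvidia-smi | grep 'CUDA Version'
line='| NVIDIA-SMI 531.18    Driver Version: 531.18    CUDA Version: 12.1     |'

# Extract "12.1" and append ".0" for the smallest matching minor tag
cuda_version=$(printf '%s\n' "$line" | sed -n 's/.*CUDA Version: \([0-9][0-9.]*\).*/\1/p')
image_tag="nvidia/cuda:${cuda_version}.0-runtime-ubuntu22.04"
echo "$image_tag"
```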
</TabItem>
</Tabs>

### Docker Compose Profile and Environment
The available Docker Compose profiles and environment variables are listed below.

#### Docker Compose Profile

| Profile  | Description                                  |
|----------|----------------------------------------------|
| cpu-fs   | Run Jan in CPU mode with default file system |
| cpu-s3fs | Run Jan in CPU mode with S3 file system      |
| gpu-fs   | Run Jan in GPU mode with default file system |
| gpu-s3fs | Run Jan in GPU mode with S3 file system      |
#### Environment Variables

| Environment Variable  | Description                                                 |
|-----------------------|-------------------------------------------------------------|
| S3_BUCKET_NAME        | S3 bucket name - leave blank for default file system        |
| AWS_ACCESS_KEY_ID     | AWS access key ID - leave blank for default file system     |
| AWS_SECRET_ACCESS_KEY | AWS secret access key - leave blank for default file system |
| AWS_ENDPOINT          | AWS endpoint URL - leave blank for default file system      |
| AWS_REGION            | AWS region - leave blank for default file system            |
| API_BASE_URL          | Jan Server URL; set it to your public IP address or domain name (default: `http://localhost:1377`) |
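For example, to point the `gpu-s3fs` profile at an S3 bucket, the variables can be exported before invoking Compose. All values below are placeholders, not real credentials; leaving them unset falls back to the default local file system:

```shell
# Placeholder values - substitute your own credentials
export S3_BUCKET_NAME=my-jan-bucket
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxx
export AWS_ENDPOINT=https://s3.amazonaws.com
export AWS_REGION=us-east-1
export API_BASE_URL=http://localhost:1377

# Then bring the stack up:
# docker compose --profile gpu-s3fs up -d
```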
:::warning

If you are stuck in a broken build, go to the [Broken Build](/troubleshooting/#broken-build) section of Common Errors.

:::
22
docs/docs/guides/installation/linux.mdx
Normal file
@ -0,0 +1,22 @@
---
title: Install on Linux
sidebar_position: 3
slug: /guides/install/linux
hide_table_of_contents: true
description: A step-by-step guide to install Jan on Linux.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    Install on Linux,
    Linux,
  ]
---

Coming soon
23
docs/docs/guides/installation/mac.mdx
Normal file
@ -0,0 +1,23 @@
---
title: Install on Mac
sidebar_position: 1
slug: /guides/install/mac
hide_table_of_contents: true
description: A step-by-step guide to install Jan on your Mac.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    macOS,
    Install on Mac,
    Apple devices,
  ]
---

Coming soon
24
docs/docs/guides/installation/windows.mdx
Normal file
@ -0,0 +1,24 @@
---
title: Install on Windows
sidebar_position: 2
slug: /guides/install/windows
hide_table_of_contents: true
description: A step-by-step guide to install Jan on Windows.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    Windows 10,
    Windows 11,
    Install on Windows,
    Microsoft devices,
  ]
---

Coming soon
@ -1,7 +1,7 @@
---
title: Integrations
slug: /guides/integrations/
sidebar_position: 6
slug: /integrations/
sidebar_position: 1
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
22
docs/docs/guides/integrations/crewai.mdx
Normal file
@ -0,0 +1,22 @@
---
title: CrewAI
sidebar_position: 19
slug: /integrations/crewai
description: A step-by-step guide on how to integrate Jan with CrewAI.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    CrewAI integration,
    CrewAI,
  ]
---

Coming Soon
@ -1,22 +1,25 @@
---
title: Discord
slug: /integrations/discord
sidebar_position: 5
description: A step-by-step guide on how to integrate Jan with a Discord bot.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    Discord integration,
    Discord,
    bot,
  ]
---

<head>
  <title>Discord</title>
  <meta name="description" content="A step-by-step guide on how to integrate Jan with a Discord bot. Learn how to clone the repository, install required libraries, set up the environment, insert the bot into Discord server, and run the bot."/>
  <meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model, integration, Discord, Discord bot"/>
  <meta property="og:title" content="Discord"/>
  <meta property="og:description" content="A step-by-step guide on how to integrate Jan with a Discord bot. Learn how to clone the repository, install required libraries, set up the environment, insert the bot into Discord server, and run the bot."/>
  <meta property="og:url" content="https://jan.ai/discord"/>
  <meta name="twitter:card" content="summary"/>
  <meta name="twitter:title" content="Discord"/>
  <meta name="twitter:description" content="A step-by-step guide on how to integrate Jan with a Discord bot. Learn how to clone the repository, install required libraries, set up the environment, insert the bot into Discord server, and run the bot."/>
</head>

## How to Integrate Discord Bot with Jan
## Integrate Discord Bot with Jan

A Discord bot can enhance your Discord server interactions. By integrating Jan with it, you can significantly boost responsiveness and user engagement in your Discord server.
@ -1,7 +1,21 @@
---
title: Open Interpreter
slug: /integrations/interpreter
sidebar_position: 6
description: A step-by-step guide on how to integrate Jan with Open Interpreter.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    Open Interpreter integration,
    Open Interpreter,
  ]
---

<head>
@ -16,7 +30,7 @@ description: A step-by-step guide on how to integrate Jan with Open Interpreter.
  <meta name="twitter:description" content="A step-by-step guide on how to integrate Jan with Open Interpreter. Learn how to install Open Interpreter, configure Jan's local API server, and set up the Open Interpreter environment for seamless interaction with Jan."/>
</head>

## How to Integrate Open Interpreter with Jan
## Integrate Open Interpreter with Jan

[Open Interpreter](https://github.com/KillianLucas/open-interpreter/) lets LLMs run code (Python, JavaScript, Shell, and more) locally. You can chat with Open Interpreter through a ChatGPT-like interface in your terminal by running `interpreter` after installing. To integrate Open Interpreter with Jan, follow the steps below:
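For orientation, the steps that follow typically end with a single command pointing Open Interpreter at Jan's OpenAI-compatible server. The port, flag name, and model ID below are assumptions for illustration, not part of the official steps - check `interpreter --help` and your installed model list:

```sh
# Assumes Jan's local API server is running on its default port 1337
interpreter --api_base http://localhost:1337/v1 --model mistral-ins-7b-q4
```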
19
docs/docs/guides/integrations/overview-integration.mdx
Normal file
@ -0,0 +1,19 @@
---
title: Overview
slug: /integrationss
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
sidebar_position: 1
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
  ]
---

Coming Soon
@ -1,6 +1,20 @@
---
title: Raycast
sidebar_position: 4
slug: /integrations/raycast
sidebar_position: 17
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    raycast integration,
    Raycast,
  ]
description: A step-by-step guide on how to integrate Jan with Raycast.
---

@ -16,7 +30,7 @@ description: A step-by-step guide on how to integrate Jan with Raycast.
  <meta name="twitter:description" content="A step-by-step guide on how to integrate Jan with Raycast. Learn how to download the TinyLlama model, clone and run the program, and use Jan models in Raycast for enhanced productivity."/>
</head>

## How to Integrate Raycast
## Integrate Raycast with Jan

[Raycast](https://www.raycast.com/) is a productivity tool designed for macOS that enhances workflow efficiency by providing quick access to various tasks and functionalities through a keyboard-driven interface. To integrate Raycast with Jan, follow the steps below:

### Step 1: Download the TinyLlama Model
@ -1,7 +1,21 @@
---
title: OpenRouter
slug: /integrations/openrouter
sidebar_position: 2
description: A step-by-step guide on how to integrate Jan with OpenRouter.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    OpenRouter integration,
    OpenRouter,
  ]
---

<head>
@ -16,7 +30,7 @@ description: A step-by-step guide on how to integrate Jan with OpenRouter.
  <meta name="twitter:description" content="A step-by-step guide on how to integrate Jan with OpenRouter. Learn how to configure the OpenRouter API key, set up model configuration, and start using remote Large Language Models (LLMs) through OpenRouter with Jan."/>
</head>

## How to Integrate OpenRouter with Jan
## Integrate OpenRouter with Jan

[OpenRouter](https://openrouter.ai/docs#quick-start) is a tool that aggregates AI models. Developers can use its API to work with diverse large language models, generative image models, and generative 3D object models.

@ -61,7 +75,7 @@ To connect Jan with OpenRouter for accessing remote Large Language Models (LLMs)
```

:::note
For more details regarding the `model.json` settings and parameters fields, please see [here](../models/integrate-remote.mdx#modeljson).
For more details regarding the `model.json` settings and parameters fields, please see [here](/guides/engines/remote-server/#modeljson).
:::

### Step 3: Start the Model
21
docs/docs/guides/integrations/unsloth.mdx
Normal file
@ -0,0 +1,21 @@
---
title: Unsloth
sidebar_position: 20
slug: /integrations/unsloth
description: A step-by-step guide on how to integrate Jan with Unsloth.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    Unsloth integration,
  ]
---

Coming Soon
@ -1,6 +1,7 @@
---
title: Continue
sidebar_position: 1
title: Continue Integration
sidebar_position: 18
slug: /integrations/continue
description: A step-by-step guide on how to integrate Jan with Continue and VS Code.
keywords:
  [
@ -17,22 +18,11 @@ keywords:
  ]
---

<head>
  <title>Continue</title>
  <meta name="description" content="A step-by-step guide on how to integrate Jan with Continue and Visual Studio Code. Learn how to install the Continue extension on Visual Studio Code, enable Jan's API server, configure Continue to use Jan's local server, and try out Jan integration with Continue in Visual Studio Code for enhanced coding experience."/>
  <meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model, Continue integration, VSCode integration"/>
  <meta property="og:title" content="Continue"/>
  <meta property="og:description" content="A step-by-step guide on how to integrate Jan with Continue and Visual Studio Code. Learn how to install the Continue extension on Visual Studio Code, enable Jan's API server, configure Continue to use Jan's local server, and try out Jan integration with Continue in Visual Studio Code for enhanced coding experience."/>
  <meta property="og:url" content="https://jan.ai/guides/integration/continue"/>
  <meta name="twitter:card" content="summary"/>
  <meta name="twitter:title" content="Continue"/>
  <meta name="twitter:description" content="A step-by-step guide on how to integrate Jan with Continue and Visual Studio Code. Learn how to install the Continue extension on Visual Studio Code, enable Jan's API server, configure Continue to use Jan's local server, and try out Jan integration with Continue in Visual Studio Code for enhanced coding experience."/>
</head>

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

## How to Integrate with Continue VS Code
## Integrate with Continue VS Code

[Continue](https://continue.dev/docs/intro) is an open-source autopilot compatible with Visual Studio Code and JetBrains, offering the simplest method to code with any LLM (large language model).
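As a sketch of where the integration lands: Continue is usually pointed at Jan's OpenAI-compatible server through its config file. The schema below is illustrative only - Continue's config format varies between versions, and the model ID is an example:

```json
{
  "models": [
    {
      "title": "Jan",
      "provider": "openai",
      "model": "mistral-ins-7b-q4",
      "apiBase": "http://localhost:1337/v1"
    }
  ]
}
```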
@ -1,7 +1,7 @@
---
title: Common Error
slug: /guides/common-error/
sidebar_position: 8
title: Local Engines
slug: /guides/engines/local
sidebar_position: 13
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
BIN
docs/docs/guides/local-providers/assets/image.png
Normal file
After Width: | Height: | Size: 27 KiB
@ -1,7 +1,8 @@
---
title: Customize Engine Settings
title: LlamaCPP Extension
slug: /guides/engines/llamacpp
sidebar_position: 1
description: A step-by-step guide to change your engine's settings.
description: A step-by-step guide on how to customize the LlamaCPP extension.
keywords:
  [
    Jan AI,
@ -12,26 +13,22 @@ keywords:
    conversational AI,
    no-subscription fee,
    large language model,
    import-models-manually,
    customize-engine-settings,
    Llama CPP integration,
    LlamaCPP Extension,
  ]
---

<head>
  <title>Customize Engine Settings</title>
  <meta name="description" content="A step-by-step guide to change your engine's settings. Learn how to modify the nitro.json file to customize parameters such as ctx_len, ngl, cpu_threads, cont_batching, and embedding to optimize the performance of your Jan AI."/>
  <meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model, import-models-manually, customize-engine-settings"/>
  <meta property="og:title" content="Customize Engine Settings"/>
  <meta property="og:description" content="A step-by-step guide to change your engine's settings. Learn how to modify the nitro.json file to customize parameters such as ctx_len, ngl, cpu_threads, cont_batching, and embedding to optimize the performance of your Jan AI."/>
  <meta property="og:url" content="https://jan.ai/guides/customize-engine-settings"/>
  <meta name="twitter:card" content="summary"/>
  <meta name="twitter:title" content="Customize Engine Settings"/>
  <meta name="twitter:description" content="A step-by-step guide to change your engine's settings. Learn how to modify the nitro.json file to customize parameters such as ctx_len, ngl, cpu_threads, cont_batching, and embedding to optimize the performance of your Jan AI."/>
</head>

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

## Overview

[Nitro](https://github.com/janhq/nitro) is an inference server on top of [llama.cpp](https://github.com/ggerganov/llama.cpp). It provides an OpenAI-compatible API, queue, & scaling.

## LlamaCPP Extension

:::note
Nitro is the default AI engine downloaded with Jan. There is no additional setup needed.
:::

In this guide, we'll walk you through the process of customizing your engine settings by configuring the `nitro.json` file.
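For reference, the page's own description names the tunable parameters (`ctx_len`, `ngl`, `cpu_threads`, `cont_batching`, and `embedding`). A `nitro.json` along these lines - with illustrative values, not recommendations - would look like:

```json
{
  "ctx_len": 2048,
  "ngl": 100,
  "cpu_threads": 4,
  "cont_batching": false,
  "embedding": false
}
```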
@ -1,5 +1,6 @@
---
title: LM Studio
slug: /guides/engines/lmstudio
sidebar_position: 8
description: A step-by-step guide on how to integrate Jan with LM Studio.
keywords:
@ -16,19 +17,7 @@ keywords:
  ]
---

<head>
  <title>LM Studio</title>
  <meta name="description" content="A step-by-step guide on how to integrate Jan with LM Studio. Learn how to connect Jan to LM Studio, set up LM Studio server, configure Jan settings, enable LM Studio integration, migrate models, and troubleshoot integration issues."/>
  <meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model, LM Studio integration"/>
  <meta property="og:title" content="LM Studio"/>
  <meta property="og:description" content="A step-by-step guide on how to integrate Jan with LM Studio. Learn how to connect Jan to LM Studio, set up LM Studio server, configure Jan settings, enable LM Studio integration, migrate models, and troubleshoot integration issues."/>
  <meta property="og:url" content="https://jan.ai/guides/integration/lm-studio"/>
  <meta name="twitter:card" content="summary"/>
  <meta name="twitter:title" content="LM Studio"/>
  <meta name="twitter:description" content="A step-by-step guide on how to integrate Jan with LM Studio. Learn how to connect Jan to LM Studio, set up LM Studio server, configure Jan settings, enable LM Studio integration, migrate models, and troubleshoot integration issues."/>
</head>

## How to Integrate LM Studio with Jan
## Integrate LM Studio with Jan

[LM Studio](https://lmstudio.ai/) enables you to explore, download, and run local Large Language Models (LLMs). You can integrate Jan with LM Studio using two methods:
1. Integrate the LM Studio server with Jan UI
@ -93,7 +82,7 @@ Replace `(port)` with your chosen port number. The default is 1234.
}
```
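Once the LM Studio server is running, its OpenAI-compatible endpoint can be sanity-checked directly with `curl`. This is a generic illustration, not part of the official steps; it assumes the default port 1234 and that a model is already loaded in LM Studio:

```sh
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{ "role": "user", "content": "Hello!" }],
    "temperature": 0.7
  }'
```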
:::note
For more details regarding the `model.json` settings and parameters fields, please see [here](../models/integrate-remote.mdx#modeljson).
For more details regarding the `model.json` settings and parameters fields, please see [here](/guides/engines/remote-server/#modeljson).
:::
@ -1,6 +1,7 @@
---
title: Ollama
sidebar_position: 9
slug: /guides/engines/ollama
sidebar_position: 4
description: A step-by-step guide on how to integrate Jan with Ollama.
keywords:
  [
@ -16,19 +17,7 @@ keywords:
  ]
---

<head>
  <title>Ollama</title>
  <meta name="description" content="A step-by-step guide on how to integrate Jan with Ollama. Learn how to start the Ollama server, configure model settings, and start the model in Jan for enhanced functionality."/>
  <meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model, Ollama integration"/>
  <meta property="og:title" content="Ollama"/>
  <meta property="og:description" content="A step-by-step guide on how to integrate Jan with Ollama. Learn how to start the Ollama server, configure model settings, and start the model in Jan for enhanced functionality."/>
  <meta property="og:url" content="https://jan.ai/guides/integration/ollama"/>
  <meta name="twitter:card" content="summary"/>
  <meta name="twitter:title" content="Ollama"/>
  <meta name="twitter:description" content="A step-by-step guide on how to integrate Jan with Ollama. Learn how to start the Ollama server, configure model settings, and start the model in Jan for enhanced functionality."/>
</head>

## How to Integrate Ollama with Jan
## Integrate Ollama with Jan

Ollama provides large language models that you can run locally. There are two methods to integrate Ollama with Jan:
1. Integrate the Ollama server with Jan.
@ -92,7 +81,7 @@ ollama run <model-name>
}
```
:::note
For more details regarding the `model.json` settings and parameters fields, please see [here](../models/integrate-remote.mdx#modeljson).
For more details regarding the `model.json` settings and parameters fields, please see [here](/guides/engines/remote-server/#modeljson).
:::
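Before starting the model in Jan, you can confirm the Ollama server itself is reachable. This check is an aside, not one of the official steps; it assumes Ollama's default port 11434 and reuses the `<model-name>` placeholder from above:

```sh
# Sanity-check the Ollama server via its native API
curl http://localhost:11434/api/generate -d '{
  "model": "<model-name>",
  "prompt": "Hello!",
  "stream": false
}'
```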
### Step 3: Start the Model

109
docs/docs/guides/local-providers/tensorrt.mdx
Normal file
@ -0,0 +1,109 @@
---
title: TensorRT-LLM Extension
slug: /guides/engines/tensorrt-llm
sidebar_position: 2
description: A step-by-step guide on how to customize the TensorRT-LLM extension.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    TensorRT-LLM Extension,
    TensorRT,
    tensorRT,
    extension,
  ]
---
## Overview

Users with Nvidia GPUs can get **20-40% faster token speeds** compared to using the LlamaCPP engine on their laptops or desktops by using [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM). An added benefit is that you run FP16 models, which are also more accurate than quantized models.

## TensorRT-LLM Extension

This guide walks you through how to install Jan's official [TensorRT-LLM Extension](https://github.com/janhq/nitro-tensorrt-llm). This extension uses [Nitro-TensorRT-LLM](https://github.com/janhq/nitro-tensorrt-llm) as the AI engine, instead of the default [Nitro-Llama-CPP](https://github.com/janhq/nitro). It includes an efficient C++ server to natively execute the [TRT-LLM C++ runtime](https://nvidia.github.io/TensorRT-LLM/gpt_runtime.html). It also comes with additional features and performance improvements like OpenAI compatibility, tokenizer improvements, and queues.

:::warning
- This feature is only available for Windows users. Linux is coming soon.

- Additionally, we have only prebuilt a few demo models. You can always build your desired models directly on your machine. For more information, please see [here](#build-your-own-tensorrt-models).

:::

### Pre-requisites

- A Windows PC
- Nvidia GPU(s): Ada or Ampere series (i.e. RTX 4000s & 3000s). More will be supported soon.
- 3GB+ of disk space to download TRT-LLM artifacts and a Nitro binary
- Jan v0.4.9+ or Jan v0.4.8-321+ (nightly)
- Nvidia Driver v535+ (For installation guide, please see [here](/troubleshooting/#1-ensure-gpu-mode-requirements))
- CUDA Toolkit v12.2+ (For installation guide, please see [here](/troubleshooting/#1-ensure-gpu-mode-requirements))

### Step 1: Install TensorRT-Extension

1. Go to **Settings** > **Extensions**.
2. Click **Install** next to the TensorRT-LLM Extension.
3. Check that files are correctly downloaded.

```sh
ls ~\jan\extensions\@janhq\tensorrt-llm-extension\dist\bin
# Your Extension Folder should now include `nitro.exe`, among other artifacts needed to run TRT-LLM
```
### Step 2: Download a Compatible Model

TensorRT-LLM can only run models in `TensorRT` format. These models, aka "TensorRT Engines", are prebuilt specifically for each target OS+GPU architecture.

We offer a handful of precompiled models for Ampere and Ada cards that you can immediately download and play with:

1. Restart the application and go to the Hub.
2. Look for models with the `TensorRT-LLM` label in the recommended models list > Click **Download**.

:::note
This step might take some time. 🙏
:::

![image](assets/image.png)

3. Click **Use** and start chatting!
4. You may need to allow Nitro through your network firewall.

![image](assets/image-2.png)

:::warning
If you are using our nightly builds, you may have to reinstall the TensorRT-LLM extension each time you update the app. We're working on better extension lifecycles - stay tuned.
:::

### Step 3: Configure Settings

You can customize the default parameters for how Jan runs TensorRT-LLM.

:::info
coming soon
:::
## Troubleshooting

### Incompatible Extension vs Engine versions

For now, the model versions are pinned to the extension versions.

### Uninstall Extension

To uninstall the extension, follow the steps below:

1. Quit the app.
2. Go to **Settings** > **Extensions**.
3. Delete the entire Extensions folder.
4. Reopen the app; only the default extensions should be restored.

### Install Nitro-TensorRT-LLM manually

To manually build the artifacts needed to run the server and TensorRT-LLM, you can reference the source code. [Read here](https://github.com/janhq/nitro-tensorrt-llm?tab=readme-ov-file#quickstart).

### Build your own TensorRT models

:::info
coming soon
:::
@ -1,82 +0,0 @@
|
||||
---
|
||||
title: Pre-configured Models
|
||||
sidebar_position: 3
|
||||
---
|
||||
|
||||
<head>
|
||||
<title>Pre-configured Models - Jan Guides</title>
|
||||
<meta name="description" content="Explore the various pre-configured AI models available in Jan, along with their descriptions, authors, model IDs, formats, and sizes."/>
|
||||
<meta name="keywords" content="Jan AI, Jan, pre-configured models, Mistral Instruct, OpenHermes Neural, Stealth, Trinity, Openchat, Wizard Coder Python, OpenAI GPT, TinyLlama Chat, Deepseek Coder, Phi-2, Llama 2 Chat, CodeNinja, Noromaid, Starling, Yarn Mistral, LlaVa, BakLlava"/>
|
||||
<meta property="og:title" content="Pre-configured Models - Jan Guides"/>
|
||||
<meta property="og:description" content="Explore the various pre-configured AI models available in Jan, along with their descriptions, authors, model IDs, formats, and sizes."/>
|
||||
<meta property="og:url" content="https://jan.ai/guides/pre-configured-models"/>
|
||||
<meta name="twitter:card" content="summary"/>
|
||||
<meta name="twitter:title" content="Pre-configured Models - Jan Guides"/>
|
||||
<meta name="twitter:description" content="Explore the various pre-configured AI models available in Jan, along with their descriptions, authors, model IDs, formats, and sizes."/>
|
||||
</head>
|
||||
|
||||
## Overview
|
||||
|
||||
Jan provides various pre-configured AI models with different capabilities. Please see the following list for details.
|
||||
|
||||
| Model | Description |
|
||||
| ----- | ----------- |
|
||||
| Mistral Instruct 7B Q4 | A model designed for a comprehensive understanding through training on extensive internet data |
|
||||
| OpenHermes Neural 7B Q4 | A merged model using the TIES method. It performs well in various benchmarks |
|
||||
| Stealth 7B Q4 | A new experimental family designed to enhance mathematical and logical abilities |
| Trinity-v1.2 7B Q4 | An experimental model merge using the Slerp method |
| Openchat-3.5 7B Q4 | An open-source model whose performance surpasses ChatGPT-3.5 and Grok-1 across various benchmarks |
| Wizard Coder Python 13B Q5 | A Python coding model with high proficiency in specific domains such as coding and mathematics |
| OpenAI GPT 3.5 Turbo | The latest GPT-3.5 Turbo model, with higher accuracy at responding in requested formats and a fix for a bug that caused a text-encoding issue for non-English function calls |
| OpenAI GPT 3.5 Turbo 16k 0613 | A snapshot of gpt-3.5-turbo-16k from June 13th, 2023 |
| OpenAI GPT 4 | The latest GPT-4 model, intended to reduce cases of "laziness" where the model doesn't complete a task |
| TinyLlama Chat 1.1B Q4 | A tiny 1.1B-parameter model, a good fit for less powerful computers |
| Deepseek Coder 1.3B Q8 | A model that excels in project-level code completion, with advanced capabilities across multiple programming languages |
| Phi-2 3B Q8 | A 2.7B model that excels in common-sense and logical-reasoning benchmarks, trained on synthetic texts and filtered websites |
| Llama 2 Chat 7B Q4 | A model designed for comprehensive understanding through training on extensive internet data |
| CodeNinja 7B Q4 | A model well suited to coding tasks that can handle various languages, including Python, C, C++, Rust, Java, JavaScript, and more |
| Noromaid 7B Q5 | A model designed for role-playing with human-like behavior |
| Starling alpha 7B Q4 | An upgrade of Openchat 3.5 trained with RLAIF; performs well on various benchmarks, especially with GPT-4 judging its performance |
| Yarn Mistral 7B Q4 | A language model for long context, supporting a 128k-token context window |
| LlaVa 1.5 7B Q5 K | A model that brings vision understanding to Jan |
| BakLlava 1 | A model that brings vision understanding to Jan |
| Solar Slerp 10.7B Q4 | A model merged from SOLAR Instruct and Pandora-v1 using the Slerp method |
| LlaVa 1.5 13B Q5 K | A model that brings vision understanding to Jan |
| Deepseek Coder 33B Q5 | A model that excels in project-level code completion, with advanced capabilities across multiple programming languages |
| Phind 34B Q5 | A multilingual model fine-tuned on 1.5B tokens of high-quality programming data; it excels in various programming languages and is designed to be steerable and user-friendly |
| Yi 34B Q5 | A specialized chat model known for its diverse and creative responses, excelling across various NLP tasks and benchmarks |
| Capybara 200k 34B Q5 | A long-context model that supports 200K tokens |
| Dolphin 8x7B Q4 | An uncensored model built on Mixtral-8x7b that is good at programming tasks |
| Mixtral 8x7B Instruct Q4 | A pre-trained generative Sparse Mixture of Experts that outperforms 70B models on most benchmarks |
| Tulu 2 70B Q4 | A strong alternative to Llama 2 70B Chat for use as a helpful assistant |
| Llama 2 Chat 70B Q4 | A model designed for comprehensive understanding through training on extensive internet data |
:::note
Using OpenAI GPT models requires an OpenAI API subscription. To learn more, [click here](https://openai.com/pricing).
:::

## Model details

| Model | Author | Model ID | Format | Size |
| ----- | ------ | -------- | ------ | ---- |
| Mistral Instruct 7B Q4 | MistralAI, The Bloke | `mistral-ins-7b-q4` | **GGUF** | 4.07GB |
| OpenHermes Neural 7B Q4 | Intel, Jan | `openhermes-neural-7b` | **GGUF** | 4.07GB |
| Stealth 7B Q4 | Jan | `stealth-v1.2-7b` | **GGUF** | 4.07GB |
| Trinity-v1.2 7B Q4 | Jan | `trinity-v1.2-7b` | **GGUF** | 4.07GB |
| Openchat-3.5 7B Q4 | Openchat | `openchat-3.5-7b` | **GGUF** | 4.07GB |
| Wizard Coder Python 13B Q5 | WizardLM, The Bloke | `wizardcoder-13b` | **GGUF** | 7.33GB |
| OpenAI GPT 3.5 Turbo | OpenAI | `gpt-3.5-turbo` | **API** | - |
| OpenAI GPT 3.5 Turbo 16k 0613 | OpenAI | `gpt-3.5-turbo-16k-0613` | **API** | - |
| OpenAI GPT 4 | OpenAI | `gpt-4` | **API** | - |
| TinyLlama Chat 1.1B Q4 | TinyLlama | `tinyllama-1.1b` | **GGUF** | 638.01MB |
| Deepseek Coder 1.3B Q8 | Deepseek, The Bloke | `deepseek-coder-1.3b` | **GGUF** | 1.33GB |
| Phi-2 3B Q8 | Microsoft | `phi-2-3b` | **GGUF** | 2.76GB |
| Llama 2 Chat 7B Q4 | MetaAI, The Bloke | `llama2-chat-7b-q4` | **GGUF** | 3.80GB |
| CodeNinja 7B Q4 | Beowolx | `codeninja-1.0-7b` | **GGUF** | 4.07GB |
| Noromaid 7B Q5 | NeverSleep | `noromaid-7b` | **GGUF** | 4.07GB |
| Starling alpha 7B Q4 | Berkeley-nest, The Bloke | `starling-7b` | **GGUF** | 4.07GB |
| Yarn Mistral 7B Q4 | NousResearch, The Bloke | `yarn-mistral-7b` | **GGUF** | 4.07GB |
| LlaVa 1.5 7B Q5 K | Mys | `llava-1.5-7b-q5` | **GGUF** | 5.03GB |
| BakLlava 1 | Mys | `bakllava-1` | **GGUF** | 5.36GB |
@ -1,22 +0,0 @@
|
||||
---
|
||||
title: Models Setup
|
||||
slug: /guides/models-setup/
|
||||
sidebar_position: 5
|
||||
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
|
||||
keywords:
|
||||
[
|
||||
Jan AI,
|
||||
Jan,
|
||||
ChatGPT alternative,
|
||||
local AI,
|
||||
private AI,
|
||||
conversational AI,
|
||||
no-subscription fee,
|
||||
large language model,
|
||||
build extension,
|
||||
]
|
||||
---
|
||||
|
||||
import DocCardList from "@theme/DocCardList";
|
||||
|
||||
<DocCardList />
|
||||
|
@ -1,269 +0,0 @@
|
||||
---
|
||||
title: Manual Import
|
||||
sidebar_position: 3
|
||||
description: A step-by-step guide on how to perform manual import feature.
|
||||
keywords:
|
||||
[
|
||||
Jan AI,
|
||||
Jan,
|
||||
ChatGPT alternative,
|
||||
local AI,
|
||||
private AI,
|
||||
conversational AI,
|
||||
no-subscription fee,
|
||||
large language model,
|
||||
import-models-manually,
|
||||
absolute-filepath,
|
||||
]
|
||||
---
|
||||
|
||||
<head>
|
||||
<title>Manual Import</title>
|
||||
<meta name="description" content="A step-by-step guide on how to perform manual import feature. Learn how to import models into Jan using both drag-and-drop method and absolute file path, along with creating model folders and configuring the model JSON file."/>
|
||||
<meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model, import-models-manually, absolute-filepath"/>
|
||||
<meta property="og:title" content="Manual Import"/>
|
||||
<meta property="og:description" content="A step-by-step guide on how to perform manual import feature. Learn how to import models into Jan using both drag-and-drop method and absolute file path, along with creating model folders and configuring the model JSON file."/>
|
||||
<meta property="og:url" content="https://jan.ai/guides/manual-import"/>
|
||||
<meta name="twitter:card" content="summary"/>
|
||||
<meta name="twitter:title" content="Manual Import"/>
|
||||
<meta name="twitter:description" content="A step-by-step guide on how to perform manual import feature. Learn how to import models into Jan using both drag-and-drop method and absolute file path, along with creating model folders and configuring the model JSON file."/>
|
||||
</head>
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
import janModel from './assets/jan-model-hub.png';
|
||||
|
||||
|
||||
This guide shows you how to import models manually. As an example, we use a GGUF model from [HuggingFace](https://huggingface.co/) and our latest model, [Trinity](https://huggingface.co/janhq/trinity-v1-GGUF).

## Newer versions - nightly versions and v0.4.8+

Starting with version 0.4.8, Jan supports importing models through a UI drag-and-drop: you can drag the `.GGUF` file from your file manager directly into the Jan application.

### 1. Get the Model

Download the model from HuggingFace in the `.GGUF` format.

### 2. Import the Model

1. Open your Jan application.
2. Click the **Import Model** button.
3. Open your downloaded model.
4. Drag the `.GGUF` file from your directory into the Jan **Import Model** window.

### 3. Done!

If your model doesn't show up in the **Model Selector** in conversations, **restart the app** or contact us via our [Discord community](https://discord.gg/Dt7MxDyNNZ).

## Newer versions - nightly versions and v0.4.7+

Starting with version 0.4.7, Jan can import models using an absolute file path, so you can import models from any directory on your computer.

### 1. Get the Absolute Filepath of the Model

After downloading the model from HuggingFace, get the absolute filepath of the model.
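On macOS or Linux, one way to print a file's absolute path is `realpath` (the filename below is only an example — substitute your actual download):

```shell
# Print the absolute path of a downloaded model file.
# The filename is hypothetical; replace it with your own download.
FILE="$HOME/Downloads/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf"
realpath "$FILE" 2>/dev/null || echo "not found: $FILE"
```

On Windows, you can copy the path from File Explorer's address bar instead.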

### 2. Configure the Model JSON

1. Navigate to the `~/jan/models` folder.
2. Create a folder named `<modelname>`, for example, `tinyllama`.
3. Create a `model.json` file inside the folder, including the following configurations:

- Ensure the `id` property matches the folder name you created.
- Ensure the `url` property is either a direct binary download link ending in `.gguf` or the absolute filepath of the model file.
- Ensure the `engine` property is set to `nitro`.
```json
{
  "sources": [
    {
      "filename": "tinyllama.gguf",
      // highlight-next-line
      "url": "<absolute-filepath-of-the-model-file>"
    }
  ],
  "id": "tinyllama-1.1b",
  "object": "model",
  "name": "(Absolute Path) TinyLlama Chat 1.1B Q4",
  "version": "1.0",
  "description": "TinyLlama is a tiny model with only 1.1B parameters. It's a good model for less powerful computers.",
  "format": "gguf",
  "settings": {
    "ctx_len": 4096,
    "prompt_template": "<|system|>\n{system_message}<|user|>\n{prompt}<|assistant|>",
    "llama_model_path": "tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf"
  },
  "parameters": {
    "temperature": 0.7,
    "top_p": 0.95,
    "stream": true,
    "max_tokens": 2048,
    "stop": [],
    "frequency_penalty": 0,
    "presence_penalty": 0
  },
  "metadata": {
    "author": "TinyLlama",
    "tags": ["Tiny", "Foundation Model"],
    "size": 669000000
  },
  "engine": "nitro"
}
```
:::warning

If you are using Windows, you need to use double backslashes in the `url` property, for example: `C:\\Users\\username\\filename.gguf`.

:::
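
Putting the warning above together with the `sources` block, a Windows absolute path would look like this (the username and filename are placeholders):

```json
{
  "sources": [
    {
      "filename": "tinyllama.gguf",
      "url": "C:\\Users\\username\\Downloads\\tinyllama.gguf"
    }
  ]
}
```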

### 3. Done!

If your model doesn't show up in the **Model Selector** in conversations, **restart the app** or contact us via our [Discord community](https://discord.gg/Dt7MxDyNNZ).

## Newer versions - nightly versions and v0.4.4+

### 1. Create a Model Folder

1. Navigate to the `App Settings` > `Advanced` > `Open App Directory` > `~/jan/models` folder.

<Tabs groupId="operating-systems">
<TabItem value="mac" label="MacOS" default>

```sh
cd ~/jan/models
```

</TabItem>
<TabItem value="windows" label="Windows" default>

```sh
cd C:/Users/<your_user_name>/jan/models
```

</TabItem>
<TabItem value="linux" label="Linux" default>

```sh
cd ~/jan/models
```

</TabItem>
</Tabs>

2. In the `models` folder, create a folder with the name of the model.

```sh
mkdir trinity-v1-7b
```

### 2. Drag & Drop the Model

Drag and drop your model binary into this folder, ensuring that the `.gguf` filename matches the folder name, e.g. `models/trinity-v1-7b/trinity-v1-7b.gguf`.

### 3. Done!

If your model doesn't show up in the **Model Selector** in conversations, **restart the app** or contact us via our [Discord community](https://discord.gg/Dt7MxDyNNZ).
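The resulting layout can be checked from a terminal (the folder and file names below follow the earlier `trinity-v1-7b` example and are placeholders for your own model):

```shell
# The folder name and the GGUF base name must match.
mkdir -p "$HOME/jan/models/trinity-v1-7b"   # created in step 1
# After dropping the file in, verify the contents:
ls "$HOME/jan/models/trinity-v1-7b"
```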
## Older versions - before v0.4.4

### 1. Create a Model Folder

1. Navigate to the `App Settings` > `Advanced` > `Open App Directory` > `~/jan/models` folder.

<Tabs groupId="operating-systems">
<TabItem value="mac" label="MacOS" default>

```sh
cd ~/jan/models
```

</TabItem>
<TabItem value="windows" label="Windows" default>

```sh
cd C:/Users/<your_user_name>/jan/models
```

</TabItem>
<TabItem value="linux" label="Linux" default>

```sh
cd ~/jan/models
```

</TabItem>
</Tabs>

2. In the `models` folder, create a folder with the name of the model.

```sh
mkdir trinity-v1-7b
```

### 2. Create a Model JSON

Jan follows a folder-based, [standard model template](https://jan.ai/docs/engineering/models/) called a `model.json` to persist the model configurations on your local filesystem.

This means that you can easily reconfigure your models, export them, and share your preferences transparently.

<Tabs groupId="operating-systems">
<TabItem value="mac" label="MacOS" default>

```sh
cd trinity-v1-7b
touch model.json
```

</TabItem>
<TabItem value="windows" label="Windows" default>

```sh
cd trinity-v1-7b
echo {} > model.json
```

</TabItem>
<TabItem value="linux" label="Linux" default>

```sh
cd trinity-v1-7b
touch model.json
```

</TabItem>
</Tabs>

To update `model.json`:

- Match `id` with the folder name.
- Ensure the GGUF filename matches the `id`.
- Set `source.url` to a direct download link ending in `.gguf`. On HuggingFace, you can find the direct links in the `Files and versions` tab.
- Verify that you are using the correct `prompt_template`. This is usually provided on the HuggingFace model's description page.
```json title="model.json"
{
  "sources": [
    {
      "filename": "trinity-v1.Q4_K_M.gguf",
      "url": "https://huggingface.co/janhq/trinity-v1-GGUF/resolve/main/trinity-v1.Q4_K_M.gguf"
    }
  ],
  "id": "trinity-v1-7b",
  "object": "model",
  "name": "Trinity-v1 7B Q4",
  "version": "1.0",
  "description": "Trinity is an experimental model merge of GreenNodeLM & LeoScorpius using the Slerp method. Recommended for daily assistance purposes.",
  "format": "gguf",
  "settings": {
    "ctx_len": 4096,
    "prompt_template": "{system_message}\n### Instruction:\n{prompt}\n### Response:",
    "llama_model_path": "trinity-v1.Q4_K_M.gguf"
  },
  "parameters": {
    "max_tokens": 4096
  },
  "metadata": {
    "author": "Jan",
    "tags": ["7B", "Merged"],
    "size": 4370000000
  },
  "engine": "nitro"
}
```

:::note
For more details regarding the `model.json` settings and parameters fields, please see [here](/docs/guides/models/integrate-remote.mdx#modeljson).
:::

### 3. Download the Model

1. Restart Jan and navigate to the Hub.
2. Locate your model.
3. Click the **Download** button to download the model binary.

:::info[Assistance and Support]

If you have questions, please join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.

:::
@ -1,8 +0,0 @@
|
||||
---
|
||||
title: Inference Providers
|
||||
slug: /guides/providers
|
||||
---
|
||||
|
||||
import DocCardList from "@theme/DocCardList";
|
||||
|
||||
<DocCardList />
|
||||
@ -1,22 +0,0 @@
|
||||
---
|
||||
title: llama.cpp
|
||||
slug: /guides/providers/llama-cpp
|
||||
---
|
||||
|
||||
<head>
|
||||
<title>llama.cpp - Jan Guides</title>
|
||||
<meta name="description" content="Learn about llama.cpp, the inference server used by Nitro, the default AI engine downloaded with Jan. Understand how Nitro provides an OpenAI-compatible API, queue, & scaling."/>
|
||||
<meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, llama.cpp, Nitro, inference server, OpenAI-compatible API, queue, scaling"/>
|
||||
<meta property="og:title" content="llama.cpp - Jan Guides"/>
|
||||
<meta property="og:description" content="Learn about llama.cpp, the inference server used by Nitro, the default AI engine downloaded with Jan. Understand how Nitro provides an OpenAI-compatible API, queue, & scaling."/>
|
||||
<meta property="og:url" content="https://jan.ai/guides/providers/llama-cpp"/>
|
||||
<meta name="twitter:card" content="summary"/>
|
||||
<meta name="twitter:title" content="llama.cpp - Jan Guides"/>
|
||||
<meta name="twitter:description" content="Learn about llama.cpp, the inference server used by Nitro, the default AI engine downloaded with Jan. Understand how Nitro provides an OpenAI-compatible API, queue, & scaling."/>
|
||||
</head>
|
||||
|
||||
## Overview
|
||||
|
||||
[Nitro](https://github.com/janhq/nitro) is an inference server on top of [llama.cpp](https://github.com/ggerganov/llama.cpp). It provides an OpenAI-compatible API, queue, & scaling.
|
||||
|
||||
Nitro is the default AI engine downloaded with Jan. There is no additional setup needed.
|
||||
@ -1,80 +0,0 @@
|
||||
---
|
||||
title: Quickstart
|
||||
slug: /guides
|
||||
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
|
||||
sidebar_position: 1
|
||||
keywords:
|
||||
[
|
||||
Jan AI,
|
||||
Jan,
|
||||
ChatGPT alternative,
|
||||
local AI,
|
||||
private AI,
|
||||
conversational AI,
|
||||
no-subscription fee,
|
||||
large language model,
|
||||
]
|
||||
---
|
||||
|
||||
<head>
|
||||
<title>Quickstart - Jan Docs</title>
|
||||
<meta name="description" content="Get started quickly with Jan, a ChatGPT-alternative that runs on your own computer, with a local API server. Learn how to install Jan and select an AI model to start chatting."/>
|
||||
<meta name="keywords" content="Jan AI, Jan, Quickstart, installation, select AI model, using AI model, getting started"/>
|
||||
<meta property="og:title" content="Quickstart - Jan Docs"/>
|
||||
<meta property="og:description" content="Get started quickly with Jan, a ChatGPT-alternative that runs on your own computer, with a local API server. Learn how to install Jan and select an AI model to start chatting."/>
|
||||
<meta property="og:url" content="https://jan.ai/guides"/>
|
||||
<meta name="twitter:card" content="summary"/>
|
||||
<meta name="twitter:title" content="Quickstart - Jan Docs"/>
|
||||
<meta name="twitter:description" content="Get started quickly with Jan, a ChatGPT-alternative that runs on your own computer, with a local API server. Learn how to install Jan and select an AI model to start chatting."/>
|
||||
</head>
|
||||
|
||||
import installImageURL from './assets/jan-ai-quickstart.png';
|
||||
import flow from './assets/quick.png';
|
||||
|
||||
# Quickstart
|
||||
|
||||
{/* After finishing the installation, here are the steps for using Jan:

## Run Jan

<Tabs>
<TabItem value="mac" label="MacOS" default>
1. Search for Jan in the Dock and run the program.
</TabItem>
<TabItem value="windows" label="Windows" default>
1. Search for Jan in the Start menu and run the program.
</TabItem>
<TabItem value="linux" label="Linux" default>
1. Go to the Jan directory and run the program.
</TabItem>
</Tabs>

2. After you run Jan, the program takes you to the Threads window, which lists your threads; each thread is a chat between you and the AI model.

3. Go to the **Hub** under the **Thread** section and select the AI model that you want to use. For more info, go to the [Using Models](category/using-models) section.

4. A new thread will be added. You can use Jan in the thread with the AI model that you selected before. */}

To get started quickly with Jan, follow the steps below:

### Step 1: Install Jan

Go to [Jan.ai](https://jan.ai/) > Select your operating system > Install the program.

:::note
To learn more about the system requirements for your operating system, go to the [Installation guide](/guides/install).
:::

### Step 2: Select an AI Model

Before using Jan, you need to select an AI model based on your hardware capabilities and specifications. Each model has its own purpose, capabilities, and requirements. To select an AI model:

Go to the **Hub** > select the models that you would like to install.

:::note
For more info, go to the [list of supported models](/guides/models-list/).
:::

### Step 3: Use the AI Model

After you install the AI model, you can use it immediately under the **Thread** tab.
@ -1,7 +1,7 @@
---
title: Error Codes
slug: /guides/error-codes/
sidebar_position: 7
title: Remote Engines
slug: /guides/engines/remote
sidebar_position: 14
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
BIN docs/docs/guides/remote-providers/assets/azure.png (new file, 111 KiB)
BIN docs/docs/guides/remote-providers/assets/cont.png (new file, 145 KiB)
BIN docs/docs/guides/remote-providers/assets/discordflow.png (new file, 155 KiB)
BIN docs/docs/guides/remote-providers/assets/interpreter.png (new file, 109 KiB)
BIN docs/docs/guides/remote-providers/assets/jan-ai-continue-ask.png (new file, 258 KiB)
BIN (binary file, 7.3 MiB)
BIN docs/docs/guides/remote-providers/assets/jan-ai-discord-repo.png (new file, 163 KiB)
BIN docs/docs/guides/remote-providers/assets/jan-ai-openrouter.gif (new file, 12 MiB)
BIN docs/docs/guides/remote-providers/assets/lmstudio.png (new file, 106 KiB)
BIN docs/docs/guides/remote-providers/assets/mistral.png (new file, 105 KiB)
BIN docs/docs/guides/remote-providers/assets/ollama.png (new file, 115 KiB)
BIN docs/docs/guides/remote-providers/assets/openrouter.png (new file, 111 KiB)
BIN docs/docs/guides/remote-providers/assets/raycast-image.png (new file, 644 KiB)
BIN docs/docs/guides/remote-providers/assets/raycast.png (new file, 128 KiB)
BIN docs/docs/guides/remote-providers/assets/vscode.png (new file, 144 KiB)
21
docs/docs/guides/remote-providers/claude.mdx
Normal file
@ -0,0 +1,21 @@
---
title: Claude
sidebar_position: 6
slug: /guides/engines/claude
description: A step-by-step guide on how to integrate Jan with Claude.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    Claude integration,
    claude,
  ]
---

Coming Soon
@ -1,7 +1,7 @@
---
title: Groq
sidebar_position: 10
slug: /guides/integration/groq
sidebar_position: 5
slug: /guides/engines/groq
description: Learn how to integrate Groq API with Jan for enhanced functionality.
keywords:
  [
@ -1,6 +1,7 @@
---
title: Mistral AI
sidebar_position: 7
sidebar_position: 4
slug: /guides/engines/mistral
description: A step-by-step guide on how to integrate Jan with Mistral AI.
keywords:
  [
@ -88,7 +89,7 @@ This tutorial demonstrates integrating Mistral AI with Jan using the API.
```

:::note
- For more details regarding the `model.json` settings and parameters fields, please see [here](../models/integrate-remote.mdx#modeljson).
- For more details regarding the `model.json` settings and parameters fields, please see [here](/guides/engines/remote-server/#modeljson).
- Mistral AI offers various endpoints. Refer to their [endpoint documentation](https://docs.mistral.ai/platform/endpoints/) to select the one that fits your requirements. Here, we use the `mistral-tiny` model as an example.
:::
@ -1,6 +1,7 @@
---
title: Azure OpenAI
sidebar_position: 3
sidebar_position: 2
slug: /guides/engines/openai
description: A step-by-step guide on how to integrate Jan with Azure OpenAI.
keywords:
  [
@ -17,19 +18,7 @@ keywords:
  ]
---

<head>
  <title>Azure OpenAI</title>
  <meta name="description" content="A step-by-step guide on how to integrate Jan with Azure OpenAI. Learn how to configure Azure OpenAI Service API key, set up model configuration, and start the model in Jan."/>
  <meta name="keywords" content="Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model, integration, Azure OpenAI Service"/>
  <meta property="og:title" content="Azure OpenAI"/>
  <meta property="og:description" content="A step-by-step guide on how to integrate Jan with Azure OpenAI. Learn how to configure Azure OpenAI Service API key, set up model configuration, and start the model in Jan."/>
  <meta property="og:url" content="https://jan.ai/azure-openai"/>
  <meta name="twitter:card" content="summary"/>
  <meta name="twitter:title" content="Azure OpenAI"/>
  <meta name="twitter:description" content="A step-by-step guide on how to integrate Jan with Azure OpenAI. Learn how to configure Azure OpenAI Service API key, set up model configuration, and start the model in Jan."/>
</head>

## How to Integrate Azure OpenAI with Jan
## Integrate Azure OpenAI with Jan

The [Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview?source=docs) offers robust APIs, making it simple for you to incorporate OpenAI's language models into your applications. You can integrate Azure OpenAI with Jan by following the steps below:

@ -83,7 +72,7 @@ The [Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/ai-services/o
```

:::note
For more details regarding the `model.json` settings and parameters fields, please see [here](../models/integrate-remote.mdx#modeljson).
For more details regarding the `model.json` settings and parameters fields, please see [here](/guides/engines/remote-server/#modeljson).
:::

### Step 3: Start the Model
@ -1,6 +1,7 @@
---
title: Remote Server Integration
sidebar_position: 2
sidebar_position: 1
slug: /guides/engines/remote-server
description: A step-by-step guide on how to set up Jan to connect with any remote or local API server.
keywords:
  [

434
docs/docs/guides/troubleshooting.mdx
Normal file
@ -0,0 +1,434 @@
---
title: Troubleshooting
slug: /troubleshooting
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
sidebar_position: 21
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    troubleshooting,
    error codes,
    broken build,
    something amiss,
    unexpected token,
    undefined issue,
    permission denied,
  ]
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

## Broken Build

Follow these steps if Jan is stuck in a broken build after installation.

<Tabs>
<TabItem value="mac" label="Mac" default>

#### 1. Uninstall Jan

Delete Jan from your `/Applications` folder.

#### 2. Delete Application Data, Cache, and User Data

```zsh
# Step 1: Delete the application data
## Newer versions
rm -rf ~/Library/Application\ Support/jan
## Versions 0.2.0 and older
rm -rf ~/Library/Application\ Support/jan-electron

# Step 2: Clear the application cache
rm -rf ~/Library/Caches/jan*

# Step 3: Remove all user data
rm -rf ~/jan
```

#### 3. Additional Step for Versions Before 0.4.2

If you are using a version before `0.4.2`, you also need to run the following commands:

```zsh
# List processes named `nitro` or `nitro_arm_64`, then kill each one by its process ID
ps aux | grep nitro
kill -9 <PID>
```

#### 4. Download the Latest Version

Download the latest version of Jan from our [homepage](https://jan.ai/).

</TabItem>
<TabItem value="windows" label="Windows">

#### 1. Uninstall Jan

To uninstall Jan on Windows, use the [Windows Control Panel](https://support.microsoft.com/en-us/windows/uninstall-or-remove-apps-and-programs-in-windows-4b55f974-2cc6-2d2b-d092-5905080eaf98).

#### 2. Delete Application Data, Cache, and User Data

```sh
# Delete the `Jan` directory in Windows's AppData directory (%APPDATA%\Jan)
cd C:\Users\%USERNAME%\AppData\Roaming
rmdir /S jan
```

#### 3. Additional Step for Versions Before 0.4.2

If you are using a version before `0.4.2`, you also need to run the following commands:

```sh
# Find the process ID (PID) of the nitro process by filtering the process list by name
tasklist | findstr "nitro"
# Once you have the PID of the process you want to terminate, run `taskkill`
taskkill /F /PID <PID>
```

#### 4. Download the Latest Version

Download the latest version of Jan from our [homepage](https://jan.ai/).

</TabItem>
<TabItem value="linux" label="Linux">

#### 1. Uninstall Jan

<Tabs groupId="linux_type">
<TabItem value="linux_main" label="Linux">

To uninstall Jan, use your package manager's uninstall or remove option.

This will return your system to its state before the installation of Jan.

This method can also reset all settings if you are experiencing any issues with Jan.

</TabItem>
<TabItem value="deb_ub" label="Debian / Ubuntu">

To uninstall Jan, run the following command:

```sh
sudo apt-get remove jan
# `jan` is the name of the Jan package
```

This will return your system to its state before the installation of Jan.

This method can also be used to reset all settings if you are experiencing any issues with Jan.

</TabItem>
<TabItem value="other" label="Others">

To uninstall Jan, delete the `.AppImage` file.

If you wish to completely remove all user data associated with Jan after uninstallation, delete the user data at `~/jan`.

This method can also reset all settings if you are experiencing any issues with Jan.

</TabItem>
</Tabs>

#### 2. Delete Application Data, Cache, and User Data

```sh
# Delete the user data folder located at `~/jan`
rm -rf ~/jan
```

#### 3. Additional Step for Versions Before 0.4.2

If you are using a version before `0.4.2`, you also need to run the following commands:

```zsh
# List processes named `nitro` or `nitro_arm_64`, then kill each one by its process ID
ps aux | grep nitro
kill -9 <PID>
```

#### 4. Download the Latest Version

Download the latest version of Jan from our [homepage](https://jan.ai/).

</TabItem>
</Tabs>

By following these steps, you can cleanly uninstall and reinstall Jan, ensuring a smooth and error-free experience with the latest version.

:::note

Before reinstalling Jan, ensure it's completely removed from all shared spaces if it's installed on multiple user accounts on your device.

:::
## Troubleshooting NVIDIA GPU
|
||||
To resolve issues when the Jan app does not utilize the NVIDIA GPU on Windows and Linux systems.

#### 1. Ensure GPU Mode Requirements

<Tabs>
<TabItem value="windows" label="Windows">

##### NVIDIA Driver

- Install an [NVIDIA Driver](https://www.nvidia.com/Download/index.aspx) supporting CUDA 11.7 or higher.
- Use the following command to verify the installation:

```sh
nvidia-smi
```

##### CUDA Toolkit

- Install a [CUDA toolkit](https://developer.nvidia.com/cuda-downloads) compatible with your NVIDIA driver.
- Use the following command to verify the installation:

```sh
nvcc --version
```

</TabItem>

<TabItem value="linux" label="Linux">

##### NVIDIA Driver

- Install an [NVIDIA Driver](https://www.nvidia.com/Download/index.aspx) supporting CUDA 11.7 or higher.
- Use the following command to verify the installation:

```sh
nvidia-smi
```

##### CUDA Toolkit

- Install a [CUDA toolkit](https://developer.nvidia.com/cuda-downloads) compatible with your NVIDIA driver.
- Use the following command to verify the installation:

```sh
nvcc --version
```

##### Linux Specifics

- Ensure that `gcc-11`, `g++-11`, `cpp-11`, or higher is installed.
  - See [instructions](https://gcc.gnu.org/projects/cxx-status.html#cxx17) for Ubuntu installation.
- **Post-Installation Actions**: Add CUDA libraries to `LD_LIBRARY_PATH`.
  - Follow the [Post-installation Actions](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions) instructions.
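
As a concrete sketch of that post-installation step, assuming CUDA is installed under `/usr/local/cuda-12.3` (adjust the path to the version shown by `nvcc --version`), you can add the libraries to `LD_LIBRARY_PATH` like this:

```bash
# Hypothetical CUDA install location - check /usr/local for your actual version
export CUDA_HOME=/usr/local/cuda-12.3
# Prepend CUDA's shared libraries, preserving any existing LD_LIBRARY_PATH
export LD_LIBRARY_PATH="$CUDA_HOME/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```

Add these lines to `~/.bashrc` (or your shell's profile) so they persist across sessions.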

</TabItem>
</Tabs>

#### 2. Switch to GPU Mode

Jan defaults to CPU mode but automatically switches to GPU mode if your system supports it, selecting the GPU with the highest VRAM. Check this setting in `Settings` > `Advanced Settings`.

##### Troubleshooting Tips

If GPU mode isn't enabled by default:

1. Confirm that you have installed an NVIDIA driver supporting CUDA 11.7 or higher. Refer to [CUDA compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/index.html#binary-compatibility__table-toolkit-driver).
2. Ensure the CUDA toolkit is compatible with your NVIDIA driver. Refer to [CUDA compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/index.html#binary-compatibility__table-toolkit-driver).
3. For Linux, add CUDA's `.so` libraries to `LD_LIBRARY_PATH`. For Windows, ensure that CUDA's `.dll` libraries are in the `PATH`. Refer to [Windows setup](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html#environment-setup).

#### 3. Check GPU Settings

1. Navigate to `Settings` > `Advanced Settings` > `Jan Data Folder` to access GPU settings.
2. Open the `settings.json` file in the `settings` folder. Here's an example:

```json title="~/jan/settings/settings.json"
{
  "notify": true,
  "run_mode": "gpu",
  "nvidia_driver": {
    "exist": true,
    "version": "531.18"
  },
  "cuda": {
    "exist": true,
    "version": "12"
  },
  "gpus": [
    {
      "id": "0",
      "vram": "12282"
    },
    {
      "id": "1",
      "vram": "6144"
    },
    {
      "id": "2",
      "vram": "6144"
    }
  ],
  "gpu_highest_vram": "0"
}
```

#### 4. Restart Jan

Restart the Jan application to confirm that it now runs in GPU mode.

##### Troubleshooting Tips

- Ensure the `nvidia_driver` and `cuda` fields show that the software is installed.
- If the `gpus` field is empty or does not list your GPU, check your NVIDIA driver and CUDA toolkit installations.
- For further assistance, share the `settings.json` file.
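
If you prefer to check these fields from a terminal, here is a small sketch, assuming the default `~/jan` data folder location:

```bash
JAN_SETTINGS="$HOME/jan/settings/settings.json"   # default Jan data folder location
if [ -f "$JAN_SETTINGS" ]; then
  # Show the GPU-related fields discussed above
  grep -E '"run_mode"|"exist"|"version"|"vram"' "$JAN_SETTINGS"
else
  echo "settings.json not found - launch Jan at least once to generate it"
fi
```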

#### Tested Configurations

- **Windows 11 Pro 64-bit:**
  - GPU: NVIDIA GeForce RTX 4070 Ti
  - CUDA: 12.2
  - NVIDIA driver: 531.18 (bare metal)

- **Ubuntu 22.04 LTS:**
  - GPU: NVIDIA GeForce RTX 4070 Ti
  - CUDA: 12.2
  - NVIDIA driver: 545 (bare metal)

- **Ubuntu 20.04 LTS:**
  - GPU: NVIDIA GeForce GTX 1660 Ti
  - CUDA: 12.1
  - NVIDIA driver: 535 (Proxmox VM with GPU passthrough)

- **Ubuntu 18.04 LTS:**
  - GPU: NVIDIA GeForce GTX 1660 Ti
  - CUDA: 12.1
  - NVIDIA driver: 535 (Proxmox VM with GPU passthrough)

#### Common Issues and Solutions

1. If the issue persists, try installing the [Nightly version](/guides/quickstart/#nightly-releases).
2. Ensure your (V)RAM is accessible; some users with virtual RAM may require additional configuration.
3. Seek assistance in the [Jan Discord](https://discord.gg/mY69SZaMaC).

## How to Get Error Logs

To get the error logs of your Jan application, follow the steps below:

#### Jan Application

1. Navigate to the main dashboard.
2. Click the **gear icon (⚙️)** on the bottom left of your screen.
3. Under the **Settings screen**, click **Advanced Settings**.
4. On the **Jan Data Folder**, click the **folder icon (📂)** to access the data.
5. Click the **logs** folder.

#### Jan UI

1. Open your Unix or Linux terminal.
2. Use the following command to retrieve the most recent 50 lines of the log file:

```bash
tail -n 50 ~/jan/logs/app.log
```

#### Jan API Server

1. Open your Unix or Linux terminal.
2. Use the following command to retrieve the most recent 50 lines of the log file:

```bash
tail -n 50 ~/jan/logs/server.log
```

:::warning
Be sure to redact any private or sensitive information when sharing logs or error details.
:::

:::note
If you have any questions or are looking for support, please don't hesitate to contact us via our [Discord community](https://discord.gg/Dt7MxDyNNZ) or create a new issue in our [GitHub repository](https://github.com/janhq/jan/issues/new/choose).
:::

## Permission Denied

When running Jan, you might encounter the following error message:

```
Uncaught (in promise) Error: Error invoking layout-480796bff433a3a3.js:538 remote method 'installExtension':
Error Package /Applications/Jan.app/Contents/Resources/app.asar.unpacked/pre-install/janhq-assistant-extension-1.0.0.tgz does not contain a valid manifest:
Error EACCES: permission denied, mkdtemp '/Users/username/.npm/_cacache/tmp/ueCMn4'
```

This error is mainly caused by a permissions problem during installation. To resolve it, follow these steps:

1. Open your terminal.

2. Execute the following command to change ownership of the `~/.npm` directory to the current user:

```sh
sudo chown -R $(whoami) ~/.npm
```

:::note
- This command grants the permissions necessary for Jan installation, resolving the error.
- If you have any questions or are looking for support, please don't hesitate to contact us via our [Discord community](https://discord.gg/Dt7MxDyNNZ) or create a new issue in our [GitHub repository](https://github.com/janhq/jan/issues/new/choose).
:::

## Something's Amiss

When you start a chat with a model and encounter a **Something's Amiss** error, here's how to resolve it:

1. Ensure your OS is up to date.
2. Choose a model smaller than 80% of your hardware's V/RAM. For example, on an 8 GB machine, opt for models smaller than 6 GB.
3. Install the latest [Nightly release](/guides/quickstart/#nightly-releases) or [clear the application cache](/troubleshooting/#broken-build) when reinstalling Jan.
4. Confirm your V/RAM accessibility, particularly if using virtual RAM.
5. NVIDIA GPU users should download [CUDA](https://developer.nvidia.com/cuda-downloads).
6. Linux users: ensure your system has `gcc` 11, `g++` 11, `cpp` 11, or higher. Refer to this [link](/troubleshooting/#troubleshooting-nvidia-gpu) for details.
7. You might be using the wrong port if you [check the app logs](/troubleshooting/#how-to-get-error-logs) and encounter the `Bind address failed at 127.0.0.1:3928` error. To check the port status, use the `netstat` command as follows:

<Tabs>
<TabItem value="mac" label="MacOS" default>

```sh
netstat -an | grep 3928
```

</TabItem>
<TabItem value="windows" label="Windows" default>

```sh
netstat -ano | find "3928"
tasklist /fi "PID eq 3928"
```

</TabItem>
<TabItem value="linux" label="Linux" default>

```sh
netstat -anpe | grep "3928"
```

</TabItem>
</Tabs>
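
If `netstat` is unavailable on macOS or Linux, `lsof` can perform the same check:

```bash
# Alternative to netstat: list any process bound to Nitro's port 3928.
# Prints a fallback message when nothing is listening.
lsof -i :3928 2>/dev/null || echo "port 3928 is free"
```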

:::note

`netstat` displays the contents of various network-related data structures for active connections.

:::

:::tip

Jan uses the following ports:

- Nitro: `3928`
- Jan API Server: `1337`
- Jan Documentation: `3001`

:::

:::note
If you have any questions or are looking for support, please don't hesitate to contact us via our [Discord community](https://discord.gg/Dt7MxDyNNZ) or create a new issue in our [GitHub repository](https://github.com/janhq/jan/issues/new/choose).
:::

## Undefined Issue

An `undefined issue` in Jan is typically caused by errors in the Nitro tool or other internal processes. It can be resolved through the following steps:

1. Clear the Jan folder, then reopen the application to determine whether the problem persists.
2. Manually run the Nitro binary located at `~/jan/extensions/@janhq/inference-nitro-extensions/dist/bin/(your-os)/nitro` to check for error messages.
3. Address any Nitro error messages that are identified, then reassess whether the issue persists.
4. Reopen Jan to determine whether the problem has been resolved after addressing any identified errors.
5. If the issue persists, please share the [app logs](/troubleshooting/#how-to-get-error-logs) via [Jan Discord](https://discord.gg/mY69SZaMaC) for further assistance and troubleshooting.
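
Step 2 above can be sketched in a terminal on Linux or macOS, assuming the default `~/jan` data folder (the `(your-os)` directory name varies by platform):

```bash
# Locate the bundled Nitro binaries; the folder under bin/ differs per OS
NITRO_BIN_DIR="$HOME/jan/extensions/@janhq/inference-nitro-extensions/dist/bin"
if [ -d "$NITRO_BIN_DIR" ]; then
  ls "$NITRO_BIN_DIR"   # pick the folder matching your OS, then run its nitro binary
else
  echo "Nitro not found at $NITRO_BIN_DIR - check your Jan data folder location"
fi
```

Run the `nitro` executable inside the folder matching your OS and watch its output for error messages.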

:::note
If you have any questions or are looking for support, please don't hesitate to contact us via our [Discord community](https://discord.gg/Dt7MxDyNNZ) or create a new issue in our [GitHub repository](https://github.com/janhq/jan/issues/new/choose).
:::

## Unexpected Token

The `Unexpected token` error when initiating a chat with OpenAI models is mainly caused by either your OpenAI API key or the region from which you access OpenAI. It can be solved through the following steps:

1. Obtain an OpenAI API key from [OpenAI's developer platform](https://platform.openai.com/) and integrate it into your application.

2. Trying a VPN could potentially solve the issue, especially if it's related to region locking for accessing OpenAI services. By connecting through a VPN, you may bypass such restrictions and successfully initiate chats with OpenAI models.

:::note
If you have any questions or are looking for support, please don't hesitate to contact us via our [Discord community](https://discord.gg/Dt7MxDyNNZ) or create a new issue in our [GitHub repository](https://github.com/janhq/jan/issues/new/choose).
:::

---
title: Advanced Settings
slug: /guides/advanced
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
sidebar_position: 11
keywords:
  [
    Jan AI,
    conversational AI,
    no-subscription fee,
    large language model,
    advanced-settings,
    Advanced Settings,
    HTTPS Proxy,
    SSL,
    settings,
    Jan settings
  ]
---

| **Experimental Mode** | Enables experimental features that may be unstable. |
| **GPU Acceleration** | Enables the boosting of your model performance by using your GPU devices for acceleration. |
| **Jan Data Folder** | Location for messages, model configurations, and user data. Changeable to a different location. |
| **HTTPS Proxy & Ignore SSL Certificate** | Use a proxy server for internet connections and ignore SSL certificates for self-signed certificates. Please check out the guide on how to set up your own HTTPS proxy server [here](advanced-settings.mdx#https-proxy). |
| **Clear Logs** | Removes all logs from the Jan application. |
| **Reset To Factory Default** | Resets the application to its original state, deleting all data including model customizations and conversation history. |

To enhance your model's performance, follow the steps below:

:::warning
Ensure that you have read the [troubleshooting guide](/troubleshooting/#troubleshooting-nvidia-gpu) for further assistance.
:::

1. Navigate to the main dashboard.
2. Click the **gear icon (⚙️)** on the bottom left of your screen.
@ -127,14 +133,105 @@ To access the folder where messages, model configurations and user data are stor
|
||||
3. Under the **Settings screen**, click the **Advanced Settings**.
|
||||
4. On the **Jan Data Folder** click the **folder icon (📂)** to access the data or the **pencil icon (✏️)** to change the folder where you keep your data.
|
||||
|
||||
## Enable the HTTPS Proxy
|
||||
To enable the HTTPS Proxy feature, follow the steps below:
|
||||
1. Make sure to set up your HTTPS Proxy. Check out this [guide](http-proxy.mdx) for instructions on how to do it.
|
||||
2. Navigate to the main dashboard.
|
||||
3. Click the **gear icon (⚙️)** on the bottom left of your screen.
|
||||
4. Under the **Settings screen**, click the **Advanced Settings**.
|
||||
5. On the **HTTPS Proxy** click the slider to enable.
|
||||
6. Input your domain in the blank field.
|
||||
## HTTPS Proxy

An HTTPS proxy encrypts data between your browser and the internet, making it hard for outsiders to intercept or read. It also helps you maintain your privacy and security while being able to bypass regional restrictions on the internet.

:::note

- When configuring Jan to use an HTTPS proxy, model download speeds may be affected by the encryption and decryption process. Speed also depends on the networking of the cloud service provider.
- The HTTPS proxy does not affect remote model usage.

:::

### Setting Up Your Own HTTPS Proxy Server

This guide provides a simple overview of setting up an HTTPS proxy server using **Squid**, a widely used open-source proxy software.

:::note
Other software options are also available depending on your requirements.
:::

#### Step 1: Choosing a Server

1. First, choose a server to host your proxy.

:::note
We recommend using a well-known cloud provider such as:
- Amazon AWS
- Google Cloud
- Microsoft Azure
- DigitalOcean
:::

2. Ensure that your server has a public IP address and is accessible from the internet.

#### Step 2: Installing Squid

Install **Squid** using the following commands:

```bash
sudo apt-get update
sudo apt-get install squid
```

#### Step 3: Configure Squid for HTTPS

To enable HTTPS, you will need to configure Squid with SSL support.

1. Squid requires an SSL certificate to handle HTTPS traffic. You can generate a self-signed certificate or obtain one from a Certificate Authority (CA). For a self-signed certificate, you can use OpenSSL:

```bash
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout squid-proxy.pem -out squid-proxy.pem
```

2. Edit the Squid configuration file `/etc/squid/squid.conf` to include the path to your SSL certificate and enable the HTTPS port:

```bash
http_port 3128 ssl-bump cert=/path/to/your/squid-proxy.pem
ssl_bump server-first all
ssl_bump bump all
```

3. To intercept HTTPS traffic, Squid uses a process called SSL Bumping, which allows Squid to decrypt and re-encrypt HTTPS traffic. To enable SSL Bumping, ensure the `ssl_bump` directives are configured correctly in your `squid.conf` file.

#### Step 4 (Optional): Configure ACLs and Authentication

1. You can define rules to control who can access your proxy by editing `/etc/squid/squid.conf` and defining ACLs:

```bash
acl allowed_ips src "/etc/squid/allowed_ips.txt"
http_access allow allowed_ips
```
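
The ACL above reads allowed client addresses from a plain-text file, one entry per line. A sketch of its contents, using documentation-range placeholder addresses (written to the current directory here for illustration; on the server the file lives at `/etc/squid/allowed_ips.txt`):

```bash
# Create the ACL source file - replace these placeholders with your clients' IPs
cat > allowed_ips.txt <<'EOF'
203.0.113.7
198.51.100.0/24
EOF
cat allowed_ips.txt
```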

2. If you want to add an authentication layer, Squid supports several authentication schemes. A basic authentication setup might look like this:

```bash
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwords
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
```
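
The `basic_ncsa_auth` helper expects an NCSA-style `user:hash` file at the path given above, typically created with `htpasswd` from `apache2-utils`. A dependency-free sketch using OpenSSL, where the user `jan` and password `change-me` are placeholders (written to the current directory here; copy it to `/etc/squid/passwords` on the server):

```bash
# Generate an NCSA-format password entry (APR1/MD5, which basic_ncsa_auth accepts)
printf 'jan:%s\n' "$(openssl passwd -apr1 'change-me')" > passwords
cat passwords
```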

#### Step 5: Restart and Test Your Proxy

1. After configuring, restart Squid to apply the changes:

```bash
sudo systemctl restart squid
```

2. To test, configure your browser or another client to use the proxy server with its IP address and port (the default is 3128).
3. Check whether you can access the internet through your proxy.
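
You can also test from a terminal with `curl`, where the address below is a placeholder for your server's public IP:

```bash
# 203.0.113.10 is a placeholder - use your proxy server's public IP
PROXY="http://203.0.113.10:3128"
# Fetch only the response headers through the proxy; print a message if unreachable
curl -sS -x "$PROXY" -I --max-time 10 https://example.com || echo "proxy unreachable: $PROXY"
```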

:::tip

Tips for securing your proxy:

- **Firewall rules**: Ensure that only intended users or IP addresses can connect to your proxy server. This can be achieved by setting up appropriate firewall rules.
- **Regular updates**: Keep your server and proxy software updated to protect against known vulnerabilities.
- **Monitoring and logging**: Monitor your proxy server for unusual activity and enable logging to keep track of the traffic passing through it.

:::

### Setting Up Jan to Use Your HTTPS Proxy

Once your HTTPS proxy server is set up, you can configure Jan to use it:

1. Navigate to **Settings** > **Advanced Settings**.
2. On the **HTTPS Proxy**, click the slider to enable it.
3. Input your domain in the blank field.

## Ignore SSL Certificate

To allow self-signed or unverified certificates, follow the steps below: