docs: Refactored into the new page structure

This commit is contained in:
Arista Indrajaya 2024-03-18 15:53:22 +07:00
parent 2f752db798
commit 89e6ca9009
100 changed files with 1621 additions and 1819 deletions

View File

@ -1,8 +0,0 @@
{
"label": "Advanced Settings",
"position": 11,
"link": {
"type": "doc",
"id": "guides/advanced-settings/advanced-settings"
}
}

View File

@ -1,118 +0,0 @@
---
title: HTTPS Proxy
sidebar_position: 2
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
advanced-settings,
https-proxy,
]
---
## Why HTTPS Proxy?
An HTTPS proxy encrypts data between your browser and the internet, making it difficult for outsiders to intercept or read. It also helps you maintain your privacy and security while letting you bypass regional restrictions on the internet.
:::note
- When Jan is configured to use an HTTPS proxy, model download speeds may be slower due to the encryption and decryption overhead. Speed also depends on the cloud service provider's network.
- An HTTPS proxy does not affect remote model usage.
:::
## Setting Up Your Own HTTPS Proxy Server
This guide provides a simple overview of setting up an HTTPS proxy server using **Squid**, a widely used open-source proxy software.
:::note
Other software options are also available depending on your requirements.
:::
### Step 1: Choosing a Server
1. First, choose a server to host your proxy.
:::note
We recommend using a well-known cloud provider service like:
- Amazon AWS
- Google Cloud
- Microsoft Azure
- Digital Ocean
:::
2. Ensure that your server has a public IP address and is accessible from the internet.
### Step 2: Installing Squid
Install **Squid** using the following commands:
```bash
sudo apt-get update
sudo apt-get install squid
```
### Step 3: Configure Squid for HTTPS
To enable HTTPS, you will need to configure Squid with SSL support.
1. Squid requires an SSL certificate to handle HTTPS traffic. You can generate a self-signed certificate or obtain one from a Certificate Authority (CA). For a self-signed certificate, you can use OpenSSL:
```bash
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout squid-proxy.pem -out squid-proxy.pem
```
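To confirm the certificate was created correctly, you can inspect it with `openssl x509`. The sketch below generates a throwaway certificate non-interactively (the `-subj` value and temp path are placeholders) and prints its subject and validity window:

```bash
# Generate a throwaway self-signed cert non-interactively, then inspect it.
CERT="$(mktemp -d)/squid-proxy.pem"
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 \
  -subj "/CN=proxy.example.com" \
  -keyout "$CERT" -out "$CERT" 2>/dev/null
# Print the subject and validity window of the generated certificate
openssl x509 -in "$CERT" -noout -subject -dates
```

The `-subj` flag skips the interactive prompts, which is handy when provisioning a server with a script.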
2. Edit the Squid configuration file `/etc/squid/squid.conf` to include the path to your SSL certificate and enable the HTTPS port:
```bash
http_port 3128 ssl-bump cert=/path/to/your/squid-proxy.pem
ssl_bump server-first all
ssl_bump bump all
```
3. To intercept HTTPS traffic, Squid uses a process called SSL Bumping, which lets it decrypt and re-encrypt HTTPS traffic. To enable SSL Bumping, ensure the `ssl_bump` directives are configured correctly in your `squid.conf` file.
### Step 4 (Optional): Configure ACLs and Authentication
1. You can define rules to control who can access your proxy. This is done by editing the `squid.conf` file and defining ACLs:
```bash
acl allowed_ips src "/etc/squid/allowed_ips.txt"
http_access allow allowed_ips
```
2. If you want to add an authentication layer, Squid supports several authentication schemes. A basic authentication setup might look like this:
```bash
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwords
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
```
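The `/etc/squid/passwords` file referenced above is an htpasswd-style file. One way to create it, assuming `openssl` is available (the `alice`/`secret` credentials and the temp path below are placeholders), is:

```bash
# Create an htpasswd-format credentials file for basic_ncsa_auth.
# On a real server, write to /etc/squid/passwords instead of a temp file.
PASSFILE="$(mktemp)"
printf 'alice:%s\n' "$(openssl passwd -apr1 secret)" > "$PASSFILE"
cat "$PASSFILE"   # one "user:hash" line per user
```

Repeat the `printf` line (with `>>`) for each additional user.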
### Step 5: Restart and Test Your Proxy
1. After configuring, restart Squid to apply the changes:
```bash
sudo systemctl restart squid
```
2. To test, configure your browser or another client to use the proxy server's IP address and port (the default is 3128).
3. Check that you can access the internet through your proxy.
:::tip
Tips for securing your proxy:
- **Firewall rules**: Ensure that only intended users or IP addresses can connect to your proxy server. This can be achieved by setting up appropriate firewall rules.
- **Regular updates**: Keep your server and proxy software updated to ensure that you are protected against known vulnerabilities.
- **Monitoring and logging**: Monitor your proxy server for unusual activity and enable logging to keep track of the traffic passing through your proxy.
:::
## Setting Up Jan to Use Your HTTPS Proxy
Once you have your HTTPS proxy server set up, you can configure Jan to use it.
1. Navigate to `Settings` > `Advanced Settings` and specify the HTTPS proxy (proxy auto-configuration and SOCKS are not supported).
2. If you are using a self-signed certificate, you can turn on **Ignore SSL Certificates**. This setting allows self-signed or unverified certificates.


View File

@ -1,48 +0,0 @@
---
title: Best Practices
sidebar_position: 3
description: Comprehensive set of best practices.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
acknowledgements,
third-party libraries,
]
---
Jan is a versatile platform offering solutions for integrating AI locally across various platforms. This guide outlines best practices for developers, analysts, and AI enthusiasts to enhance their experience with Jan when adding AI locally to their computers. Implementing these practices will optimize the performance of AI models.
## Follow the Quickstart Guide
The [quickstart guide](quickstart.mdx) is designed to facilitate a quick setup process. It provides clear instructions and simple steps to get you up and running with Jan. Even if you are new to AI, the quickstart offers valuable insights and tips to help you get started quickly.
## Setting up the Right Models
Jan offers a range of pre-configured AI models tailored to different tasks and industries. Identify the one that aligns with your objectives. Factors to consider include:
- Capabilities
- Accuracy
- Processing Speed
:::note
- Some of these factors also depend on your hardware; please see the Hardware Requirements.
- Choosing the right model is important to achieve the best performance.
:::
## Setting up Jan
Ensure that you familiarize yourself with the Jan application. Jan offers advanced settings that you can adjust. These settings may influence how your AI behaves locally. Please see the [Advanced Settings](./advanced-settings/advanced-settings.mdx) article for a complete list of Jan's configurations and instructions on how to configure them.
## Integrations
One of Jan's key features is its ability to integrate with many systems. Whether you are incorporating Jan.ai with any open-source LLM provider or other tools, it is important to understand the integration capabilities and limitations.
## Mastering Prompt Engineering
Prompt engineering is an important skill for getting the desired outputs from AI models. Mastering it can significantly improve the performance and quality of the AI's responses. Here are some prompt engineering tips:
- Ask the model to adopt a persona
- Be specific; detailed prompts yield more specific answers
- Provide examples, reference text, or context at the beginning
- Use clear and concise language
- Use relevant keywords and phrases

View File

@ -1,163 +0,0 @@
---
title: Broken Build
sidebar_position: 1
hide_table_of_contents: true
description: A step-by-step guide to fix errors that prevent the project from compiling or running successfully.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
troubleshooting,
]
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
This guide provides steps to troubleshoot and resolve the issue where Jan is stuck in a broken build after installation.
<Tabs>
<TabItem value="mac" label="Mac" default>
### 1. Uninstall Jan
Delete Jan from your `/Applications` folder.
### 2. Delete Application Data, Cache, and User Data
```zsh
# Step 1: Delete the application data
## Newer versions
rm -rf ~/Library/Application\ Support/jan
## Versions 0.2.0 and older
rm -rf ~/Library/Application\ Support/jan-electron
# Step 2: Clear application cache
rm -rf ~/Library/Caches/jan*
# Step 3: Remove all user data
rm -rf ~/jan
```
### 3. Additional Step for Versions Before 0.4.2
If you are using a version before `0.4.2`, you need to run the following commands:
```zsh
ps aux | grep nitro
# Look for processes like `nitro` and `nitro_arm_64`, and kill them one by one by process ID
kill -9 <PID>
```
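If you prefer to script the kill step, the sketch below shows the same `kill -9` flow against a throwaway `sleep` process standing in for `nitro`:

```bash
# Start a stand-in background process (sleep plays the role of nitro here)
sleep 300 &
PID=$!
# Same signal the guide uses for the nitro processes
kill -9 "$PID"
wait "$PID" 2>/dev/null || true
# Confirm the process is gone
ps -p "$PID" > /dev/null || echo "process $PID killed"
```

On a real system, substitute the PIDs reported by `ps aux | grep nitro`.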
### 4. Download the Latest Version
Download the latest version of Jan from our [homepage](https://jan.ai/).
</TabItem>
<TabItem value="windows" label="Windows">
### 1. Uninstall Jan
To uninstall Jan on Windows, use the [Windows Control Panel](https://support.microsoft.com/en-us/windows/uninstall-or-remove-apps-and-programs-in-windows-4b55f974-2cc6-2d2b-d092-5905080eaf98).
### 2. Delete Application Data, Cache, and User Data
```sh
# Delete your own user data
cd ~ # Or where you moved the Jan Data Folder to
rm -r ./jan
# Delete Application Cache
cd C:\Users\YOUR_USERNAME\AppData\Roaming
rm -r ./Jan
```
### 3. Additional Step for Versions Before 0.4.2
If you are using a version before `0.4.2`, you need to run the following commands:
```sh
# Find the process ID (PID) of the nitro process by filtering the list by process name
tasklist | findstr "nitro"
# Once you have the PID of the process you want to terminate, run the `taskkill`
taskkill /F /PID <PID>
```
### 4. Download the Latest Version
Download the latest version of Jan from our [homepage](https://jan.ai/).
</TabItem>
<TabItem value="linux" label="Linux">
### 1. Uninstall Jan
<Tabs groupId = "linux_type">
<TabItem value="linux_main" label = "Linux">
To uninstall Jan, you should use your package manager's uninstall or remove option.
This will return your system to its state before the installation of Jan.
This method can also reset all settings if you are experiencing any issues with Jan.
</TabItem>
<TabItem value = "deb_ub" label = "Debian / Ubuntu">
To uninstall Jan, run the following command:
```sh
sudo apt-get remove jan
# where jan is the name of the Jan package
```
This will return your system to its state before the installation of Jan.
This method can also be used to reset all settings if you are experiencing any issues with Jan.
</TabItem>
<TabItem value = "other" label = "Others">
To uninstall Jan, delete the `.AppImage` file.
If you wish to completely remove all user data associated with Jan after uninstallation, delete the user data at `~/jan`.
This method can also reset all settings if you are experiencing any issues with Jan.
</TabItem>
</Tabs>
### 2. Delete Application Data, Cache, and User Data
```sh
# You can delete the user data folders located at the following `~/jan`
rm -rf ~/jan
```
### 3. Additional Step for Versions Before 0.4.2
If you are using a version before `0.4.2`, you need to run the following commands:
```zsh
ps aux | grep nitro
# Look for processes like `nitro` and `nitro_arm_64`, and kill them one by one by process ID
kill -9 <PID>
```
### 4. Download the Latest Version
Download the latest version of Jan from our [homepage](https://jan.ai/).
</TabItem>
</Tabs>
By following these steps, you can cleanly uninstall and reinstall Jan, ensuring a smooth and error-free experience with the latest version.
:::note
Before reinstalling Jan, ensure it's completely removed from all shared spaces if it's installed on multiple user accounts on your device.
:::

View File

@ -1,161 +0,0 @@
---
title: Troubleshooting NVIDIA GPU
sidebar_position: 2
description: A step-by-step guide to enable Jan to properly leverage NVIDIA GPU resources, avoiding performance issues.
keywords: [
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
troubleshooting,
using GPU,
]
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
This guide provides steps to troubleshoot and resolve issues when the Jan app does not utilize the NVIDIA GPU on Windows and Linux systems.
### 1. Ensure GPU Mode Requirements
<Tabs>
<TabItem value="windows" label="Windows">
#### NVIDIA Driver
- Install an [NVIDIA Driver](https://www.nvidia.com/Download/index.aspx) supporting CUDA 11.7 or higher.
- Use the following command to verify the installation:
```sh
nvidia-smi
```
#### CUDA Toolkit
- Install a [CUDA toolkit](https://developer.nvidia.com/cuda-downloads) compatible with your NVIDIA driver.
- Use the following command to verify the installation:
```sh
nvcc --version
```
</TabItem>
<TabItem value="linux" label="Linux">
#### NVIDIA Driver
- Install an [NVIDIA Driver](https://www.nvidia.com/Download/index.aspx) supporting CUDA 11.7 or higher.
- Use the following command to verify the installation:
```sh
nvidia-smi
```
#### CUDA Toolkit
- Install a [CUDA toolkit](https://developer.nvidia.com/cuda-downloads) compatible with your NVIDIA driver.
- Use the following command to verify the installation:
```sh
nvcc --version
```
#### Linux Specifics
- Ensure that `gcc-11`, `g++-11`, `cpp-11`, or higher is installed.
- See [instructions](https://gcc.gnu.org/projects/cxx-status.html#cxx17) for Ubuntu installation.
- **Post-Installation Actions**: Add CUDA libraries to `LD_LIBRARY_PATH`.
- Follow the [Post-installation Actions](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions) instructions.
</TabItem>
</Tabs>
### 2. Switch to GPU Mode
Jan defaults to CPU mode but automatically switches to GPU mode if your system supports it, selecting the GPU with the highest VRAM. Check this setting in `Settings` > `Advanced Settings`.
#### Troubleshooting Tips
If GPU mode isn't enabled by default:
1. Confirm that you have installed an NVIDIA driver supporting CUDA 11.7 or higher. Refer to [CUDA compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/index.html#binary-compatibility__table-toolkit-driver).
2. Ensure compatibility of the CUDA toolkit with your NVIDIA driver. Refer to [CUDA compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/index.html#binary-compatibility__table-toolkit-driver).
3. For Linux, add CUDA's `.so` libraries to the `LD_LIBRARY_PATH`. For Windows, ensure that CUDA's `.dll` libraries are in the PATH. Refer to [Windows setup](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html#environment-setup).
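On Linux, the `PATH` and `LD_LIBRARY_PATH` additions from NVIDIA's post-installation guide can be sketched as follows (assuming the common `/usr/local/cuda` symlink; adjust to your install):

```bash
# Add CUDA binaries and libraries to the environment (put these lines
# in ~/.bashrc to make them persistent)
export PATH="/usr/local/cuda/bin${PATH:+:${PATH}}"
export LD_LIBRARY_PATH="/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}"
echo "$LD_LIBRARY_PATH"
```

The `${VAR:+:${VAR}}` form appends the existing value only if the variable was already set, avoiding a stray leading colon.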
### 3. Check GPU Settings
1. Navigate to `Settings` > `Advanced Settings` > `Jan Data Folder` to access GPU settings.
2. Open the `settings.json` file in the `settings` folder. Here's an example:
```json title="~/jan/settings/settings.json"
{
"notify": true,
"run_mode": "gpu",
"nvidia_driver": {
"exist": true,
"version": "531.18"
},
"cuda": {
"exist": true,
"version": "12"
},
"gpus": [
{
"id": "0",
"vram": "12282"
},
{
"id": "1",
"vram": "6144"
},
{
"id": "2",
"vram": "6144"
}
],
"gpu_highest_vram": "0"
}
```
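You can verify the relevant fields from the command line. The sketch below runs against a temporary sample of the file; on a real install, point it at `~/jan/settings/settings.json` instead:

```bash
# Write a minimal sample settings.json to a temp file for demonstration
SETTINGS="$(mktemp)"
cat > "$SETTINGS" <<'EOF'
{
  "run_mode": "gpu",
  "nvidia_driver": { "exist": true, "version": "531.18" },
  "cuda": { "exist": true, "version": "12" }
}
EOF
# Check that GPU mode is active
grep -q '"run_mode": "gpu"' "$SETTINGS" && echo "GPU mode enabled"
```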
### 4. Restart Jan
Restart the Jan application to apply the changes.
#### Troubleshooting Tips
- Ensure the `nvidia_driver` and `cuda` fields indicate that the software is installed.
- If the `gpus` field is empty or lacks your GPU, check the NVIDIA driver and CUDA toolkit installations.
- For further assistance, share the `settings.json` file.
### Tested Configurations
- **Windows 11 Pro 64-bit:**
- GPU: NVIDIA GeForce RTX 4070ti
- CUDA: 12.2
- NVIDIA driver: 531.18 (Bare metal)
- **Ubuntu 22.04 LTS:**
- GPU: NVIDIA GeForce RTX 4070ti
- CUDA: 12.2
- NVIDIA driver: 545 (Bare metal)
- **Ubuntu 20.04 LTS:**
- GPU: NVIDIA GeForce GTX 1660ti
- CUDA: 12.1
- NVIDIA driver: 535 (Proxmox VM passthrough GPU)
- **Ubuntu 18.04 LTS:**
- GPU: NVIDIA GeForce GTX 1660ti
- CUDA: 12.1
- NVIDIA driver: 535 (Proxmox VM passthrough GPU)
### Common Issues and Solutions
1. If the issue persists, try installing the [Nightly version](https://jan.ai/install/nightly/).
2. Ensure your (V)RAM is accessible; some users with virtual RAM may require additional configuration.
3. Seek assistance in [Jan Discord](https://discord.gg/mY69SZaMaC).

View File

@ -1,49 +0,0 @@
---
title: How to Get Error Logs
sidebar_position: 5
description: A step-by-step guide to get the Jan app error logs.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
troubleshooting,
permission denied,
]
---
To get the error logs of your Jan application, follow the steps below:
### Jan Application
1. Navigate to the main dashboard.
2. Click the **gear icon (⚙️)** on the bottom left of your screen.
3. Under the **Settings screen**, click the **Advanced Settings**.
4. On the **Jan Data Folder** click the **folder icon (📂)** to access the data.
5. Click the **logs** folder.
### Jan UI
1. Open your Unix or Linux terminal.
2. Use the following command to get the most recent 50 lines of the log file:
```bash
tail -n 50 ~/jan/logs/app.log
```
### Jan API Server
1. Open your Unix or Linux terminal.
2. Use the following command to get the most recent 50 lines of the log file:
```bash
tail -n 50 ~/jan/logs/server.log
```
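To share both logs at once, you can bundle the last 50 lines of each into a single file. The sketch below demonstrates this against a temporary stand-in for `~/jan/logs` (the file names match the commands above):

```bash
# Create a stand-in log directory; use ~/jan/logs on a real install
LOGDIR="$(mktemp -d)"
seq 1 100 | sed 's/^/app line /'    > "$LOGDIR/app.log"
seq 1 100 | sed 's/^/server line /' > "$LOGDIR/server.log"

OUT="$LOGDIR/jan-logs-snippet.txt"
for f in app.log server.log; do
  echo "== $f =="         >> "$OUT"
  tail -n 50 "$LOGDIR/$f" >> "$OUT"
done
wc -l < "$OUT"   # 2 header lines + 50 + 50 = 102
```

Remember to review the bundled file for sensitive information before sharing it.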
:::warning
Ensure to redact any private or sensitive information when sharing logs or error details.
:::
:::note
If you have any questions or are looking for support, please don't hesitate to contact us via our [Discord community](https://discord.gg/Dt7MxDyNNZ) or create a new issue in our [GitHub repository](https://github.com/janhq/jan/issues/new/choose).
:::

View File

@ -1,31 +0,0 @@
---
title: No Assistant Available
sidebar_position: 7
description: Troubleshooting steps to resolve the "No assistant available" issue.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
troubleshooting,
no assistant available,
]
---
When you encounter the following error message:
```
No assistant available.
```
This issue arises when a new, unintentional file appears in `/jan/assistants`.
It can be resolved through the following steps:
1. Access the `/jan/assistants` directory using a file manager or terminal.
2. The `/jan/assistants` directory should contain only a folder named `jan`. Identify any file outside of this folder and remove it.

View File

@ -1,39 +0,0 @@
---
title: Permission Denied
sidebar_position: 1
description: A step-by-step guide to fix the issue when access is denied due to insufficient permissions.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
troubleshooting,
permission denied,
]
---
When running Jan, you might encounter the following error message:
```
Uncaught (in promise) Error: Error invoking layout-480796bff433a3a3.js:538 remote method 'installExtension':
Error Package /Applications/Jan.app/Contents/Resources/app.asar.unpacked/pre-install/janhq-assistant-extension-1.0.0.tgz does not contain a valid manifest:
Error EACCES: permission denied, mkdtemp '/Users/username/.npm/_cacache/tmp/ueCMn4'
```
This error is mainly caused by a permission problem during installation. To resolve this issue, follow these steps:
1. Open your terminal.
2. Execute the following command to change ownership of the `~/.npm` directory to the current user:
```sh
sudo chown -R $(whoami) ~/.npm
```
:::note
This command ensures that the necessary permissions are granted for Jan installation, resolving the encountered error.
:::

View File

@ -1,53 +0,0 @@
---
title: Something's Amiss
sidebar_position: 4
description: A step-by-step guide to resolve an unspecified or general error.
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
When you start a chat with a model and encounter a "Something's Amiss" error, here's how to resolve it:
1. Ensure your OS is up to date.
2. Choose a model smaller than 80% of your hardware's V/RAM. For example, on an 8GB machine, opt for models smaller than 6GB.
3. Install the latest [Nightly release](https://jan.ai/install/nightly/) or [clear the application cache](https://jan.ai/troubleshooting/stuck-on-broken-build/) when reinstalling Jan.
4. Confirm your V/RAM accessibility, particularly if using virtual RAM.
5. Nvidia GPU users should download [CUDA](https://developer.nvidia.com/cuda-downloads).
6. Linux users, ensure your system meets the requirements of gcc 11, g++ 11, cpp 11, or higher. Refer to this [link](https://jan.ai/guides/troubleshooting/gpu-not-used/#specific-requirements-for-linux) for details.
7. You might be using the wrong port if you [check the app logs](https://jan.ai/troubleshooting/how-to-get-error-logs/) and encounter the `Bind address failed at 127.0.0.1:3928` error. To check the port status, try the `netstat` command, as follows:
<Tabs>
<TabItem value="mac" label="MacOS" default>
```sh
netstat -an | grep 3928
```
</TabItem>
<TabItem value="windows" label="Windows" default>
```sh
netstat -ano | find "3928"
tasklist /fi "PID eq 3928"
```
</TabItem>
<TabItem value="linux" label="Linux" default>
```sh
netstat -anpe | grep "3928"
```
</TabItem>
</Tabs>
:::note
`netstat` displays the contents of various network-related data structures for active connections.
:::
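If `netstat` is not installed, Bash can probe a port directly through its built-in `/dev/tcp` pseudo-device; a sketch:

```bash
# Reports "in-use" if something accepts connections on the port,
# "free" otherwise (requires bash's /dev/tcp support)
check_port() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null && echo "in-use" || echo "free"
}
STATUS="$(check_port 3928)"
echo "port 3928: $STATUS"
```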
:::tip
Jan uses the following ports:
- Nitro: `3928`
- Jan API Server: `1337`
- Jan Documentation: `3001`
:::

View File

@ -1,62 +0,0 @@
---
title: Stuck on Loading Model
sidebar_position: 8
description: Troubleshooting steps to resolve issues related to the loading model.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
troubleshooting,
stuck on loading model,
]
---
## 1. Issue: Model Loading Stuck Due To Missing Windows Management Instrumentation Command-line (WMIC)
A model stuck on loading in Jan can be caused by the `Windows Management Instrumentation Command-line (WMIC)` path not being included in the system's `PATH` environment variable.
Error message:
```
index.js:47 Uncaught (in promise) Error: Error invoking remote method 'invokeExtensionFunc': Error: Command failed: WMIC CPU Get NumberOfCores
```
It can be resolved through the following steps:
1. **Open System Properties:**
- Press `Windows key + R`.
- Type `sysdm.cpl` and press `Enter`.
2. **Access Environment Variables:**
- Go to the "Advanced" tab.
- Click the "Environment Variables" button.
3. **Edit System PATH:**
- Under "System Variables" find and select `Path`.
- Click "Edit."
4. **Add WMIC Path:**
- Click "New" and enter `C:\Windows\System32\Wbem`.
5. **Save Changes:**
- Click "OK" to close and save your changes.
6. **Verify Installation:**
- Restart any command prompts or terminals.
- Run `where wmic` to verify. Expected output: `C:\Windows\System32\wbem\WMIC.exe`.
## 2. Issue: Model Loading Stuck Due To CPU Without AVX
Models stuck on loading in Jan can also be caused by older-generation CPUs that do not support Advanced Vector Extensions (AVX).
To check if your CPU supports AVX, visit the following link: [CPUs with AVX](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX)
:::warning [Please use this with caution]
As a workaround, consider using an [emulator](https://www.intel.com/content/www/us/en/developer/articles/tool/software-development-emulator.html) to simulate AVX support.
:::

View File

@ -1,26 +0,0 @@
---
title: Thread Disappearance
sidebar_position: 6
description: Troubleshooting steps to resolve issues where threads suddenly disappear.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
troubleshooting,
thread disappearance,
]
---
Old threads may suddenly disappear when a new, unintentional file is created in `/jan/threads`.
It can be resolved through the following steps:
1. Go to `/jan/threads`.
2. The `/jan/threads` directory contains many folders named with the prefix `jan_` followed by an ID (e.g., `jan_123`). Look for any file not conforming to this naming pattern and remove it.
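Step 2 can be done from the terminal with `find`. The sketch below demonstrates the pattern check against a temporary stand-in for `/jan/threads`:

```bash
# Build a stand-in threads directory; use ~/jan/threads on a real install
THREADS="$(mktemp -d)"
mkdir "$THREADS/jan_123" "$THREADS/jan_456"
touch "$THREADS/stray.json"   # the kind of file that triggers the issue
# List entries that do not follow the jan_<id> naming pattern
find "$THREADS" -mindepth 1 -maxdepth 1 ! -name 'jan_*'
```

Review the listed entries before deleting anything.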

View File

@ -1,26 +0,0 @@
---
title: Undefined Issue
sidebar_position: 3
description: A step-by-step guide to resolve errors when a variable or object is not defined.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
troubleshooting,
undefined issue,
]
---
An `undefined issue` in Jan is usually caused by errors related to the Nitro tool or other internal processes. It can be resolved through the following steps:
1. Clear the Jan folder, then reopen the application to determine if the problem persists.
2. Manually run the nitro tool located at `~/jan/extensions/@janhq/inference-nitro-extensions/dist/bin/(your-os)/nitro` to check for error messages.
3. Address any nitro error messages that are identified and reassess the persistence of the issue.
4. Reopen Jan to determine if the problem has been resolved after addressing any identified errors.
5. If the issue persists, please share the [app logs](https://jan.ai/troubleshooting/how-to-get-error-logs/) via [Jan Discord](https://discord.gg/mY69SZaMaC) for further assistance and troubleshooting.

View File

@ -1,24 +0,0 @@
---
title: Unexpected Token
sidebar_position: 2
description: A step-by-step guide to correct syntax errors caused by invalid JSON in the code.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
troubleshooting,
unexpected token,
]
---
The `Unexpected token` error when initiating a chat with OpenAI models is mainly caused by either your OpenAI key or the location you access OpenAI from. This issue can be solved through the following steps:
1. Obtain an OpenAI API key from [OpenAI's developer platform](https://platform.openai.com/) and integrate it into your application.
2. Try a VPN, which can help if the issue is related to region locking of OpenAI services. Connecting through a VPN may bypass such restrictions and let you initiate chats with OpenAI models.

View File

@ -1,22 +0,0 @@
---
title: Extensions
slug: /guides/extensions/
sidebar_position: 5
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
build extension,
]
---
import DocCardList from "@theme/DocCardList";
<DocCardList />


View File

@ -1,7 +1,8 @@
---
title: Extension Setup
title: What are Jan Extensions?
slug: /extensions
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
sidebar_position: 1
description: Dive into the available extensions and configure them.
keywords:
[
Jan AI,
@ -12,11 +13,11 @@ keywords:
conversational AI,
no-subscription fee,
large language model,
extension settings,
Jan Extensions,
Extensions,
]
---
The current Jan Desktop Client includes several default extensions built on top of this framework to enhance the user experience. This guide lists the default extensions and shows how to configure their settings.
## Default Extensions
@ -136,6 +137,24 @@ To configure extension settings:
}
```
## Import Custom Extension
:::note
Currently, Jan only supports official extensions, which can be directly downloaded in Extension Settings. We plan to support 3rd party Extensions in the future.
:::
For now, you can import a third-party extension at your own risk by following the steps below:
1. Navigate to **Settings** > **Extensions** > Click **Select** under **Manual Installation**.
2. The `~/jan/extensions/extensions.json` file will then be updated automatically.
:::caution
You need to prepare the extension file in `.tgz` format to install a **non-default** extension.
:::
:::info[Assistance and Support]
If you have questions, please join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.
:::

View File

@ -1,36 +0,0 @@
---
title: Import Extensions
sidebar_position: 2
description: A step-by-step guide on how to import extensions.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
import extensions,
]
---
Besides the default extensions, you can import extensions into Jan by following the steps below:
1. Navigate to **Settings** > **Extensions** > Click **Select** under **Manual Installation**.
2. The `~/jan/extensions/extensions.json` file will then be updated automatically.
:::caution
You need to prepare the extension file in `.tgz` format to install a **non-default** extension.
:::
:::info[Assistance and Support]
If you have questions, please join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.
:::

View File

@ -1,99 +0,0 @@
---
title: FAQs
slug: /guides/faqs
sidebar_position: 12
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
acknowledgements,
third-party libraries,
]
---
## General Issues
- **Why can't I download models like Pandora 11B Q4 and Solar Instruct 10.7B Q4?**
- These models might have been removed or taken down. Please check the [Pre-configured Models](models-list.mdx) for the latest updates on model availability.
- **Why does Jan display "Apologies, something's amiss" when I try to run it?**
- This issue may arise if you're using an older Intel chip that does not fully support AVX instructions required for running AI models. Upgrading your hardware may resolve this issue.
- **How can I use Jan in Russia?**
- To use Jan in Russia, a VPN or [HTTPS - Proxy](./advanced-settings/http-proxy.mdx) is recommended to bypass any regional restrictions that might be in place.
- **I'm experiencing an error on startup from Nitro. What should I do?**
- If you encounter errors with Nitro, try switching the path to use the Nitro executable for the version 12-0. This adjustment can help resolve path-related issues.
## Download and Installation Issues
- **What does "Error occurred: Unexpected token" mean?**
- This error usually indicates a problem with your internet connection or that your access to certain resources is being blocked. Using a VPN or [HTTPS - Proxy](./advanced-settings/http-proxy.mdx) can help avoid these issues by providing a secure and unrestricted internet connection.
- **Why aren't my downloads working?**
- If you're having trouble downloading directly through Jan, you might want to download the model separately and then import it into Jan. Detailed instructions are available [here](install.mdx).
- **Jan AI doesn't open on my Mac with an Intel processor. What can I do?**
- Granting the user permission to the `.npm` folder can resolve permission-related issues on macOS, especially for users with Intel processors.
- **What should I do if the model download freezes?**
- If a model download freezes, consider importing the models manually. You can find more detailed guidance in the [Manual Import](./models/import-models.mdx) article.
- **I received a message that the model GPT4 does not exist or I do not have access. What should I do?**
- This message typically means you need to top up your credit with OpenAI or check your access permissions for the model.
- **I can't download models from "Explore the Hub." What's the solution?**
- Uninstalling Jan, clearing the cache, and reinstalling it following the guide provided [here](install.mdx) may help. Also, consider downloading the `.gguf` model via a browser as an alternative approach.
## Technical Issues and Solutions
- **How can I download models with a socks5 proxy or import a local model file?**
- Nightly builds of Jan offer support for downloading models with socks5 proxies or importing local model files.
- **My device shows no GPU usage and lacks a Settings folder. What should I do?**
- Using the nightly builds of Jan can address issues related to GPU usage and the absence of a Settings folder, as these builds contain the latest fixes and features.
- **Why does Jan display a toast message saying a model is loaded when it is not actually loaded?**
- This issue can be resolved by downloading the `.gguf` file from Hugging Face and replacing it in the model folder. This ensures the correct model is loaded.
- **How to enable CORS when running Nitro?**
- By default, CORS (Cross-Origin Resource Sharing) is disabled when running Nitro. Enabling CORS can be necessary for certain operations and integrations. Check the official documentation for instructions on how to enable CORS if your workflow requires it.
## Compatibility and Support
- **How can I use an AMD GPU with Jan?**
- Jan now supports AMD GPUs through Vulkan. This enhancement allows users with AMD graphics cards to leverage GPU acceleration, improving performance for AI model computations.
- **Is Jan available for Android or iOS?**
- Jan is primarily focused on the Desktop app and does not currently offer mobile apps for Android or iOS. The development team is concentrating on enhancing the desktop experience.
## Development and Features
- **Does Jan support Safetensors?**
- At the moment, Jan only supports GGUF. However, there are plans to support `.safetensor` files in the future.
- **I hope to customize the installation path of each model. Is that possible?**
  - Yes, you can customize the installation path. Please see [here](https://jan.ai/guides/advanced-settings/#access-the-jan-data-folder) for more information.
## Troubleshooting
- **What should I do if there's high CPU usage while Jan is idle?**
  - If you notice high CPU usage while Jan is idle, consider using the nightly builds of Jan, which contain the latest fixes.
- **What does the error "Failed to fetch" mean, and how can I fix it?**
- The "Failed to fetch" error typically occurs due to network issues or restrictions. Using the nightly builds of Jan may help overcome these issues by providing updated fixes and features.
- **What should I do if "Failed to fetch" occurs using MacBook Pro with Intel HD Graphics 4000 1536 MB?**
- Ensure that the model size is less than 90% of your available VRAM and that the VRAM is accessible to the app. Managing the resources effectively can help mitigate this issue.
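  - The 90% rule above can be checked with quick shell arithmetic. The sizes below are hypothetical examples; substitute your own model file size and available VRAM:

    ```sh
    # Hypothetical sizes in MB; replace with your own values.
    VRAM_MB=1536        # e.g. Intel HD Graphics 4000 shared memory
    MODEL_MB=1200       # size of the .gguf file you want to load
    LIMIT_MB=$(( VRAM_MB * 90 / 100 ))
    if [ "$MODEL_MB" -le "$LIMIT_MB" ]; then
      echo "Model should fit (${MODEL_MB} MB <= ${LIMIT_MB} MB)"
    else
      echo "Model too large (${MODEL_MB} MB > ${LIMIT_MB} MB); try a smaller quantization"
    fi
    ```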
:::info[Assistance and Support]
If you have questions, please join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.
:::

View File

@ -0,0 +1,24 @@
---
title: Hardware Setup
slug: /guides/hardware
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
sidebar_position: 3
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
hardware requirements,
Nvidia,
AMD,
CPU,
GPU
]
---
Coming Soon

View File

@ -0,0 +1,19 @@
---
title: Overview
slug: /guides
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
sidebar_position: 1
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
]
---
Coming Soon

View File

@ -0,0 +1,228 @@
---
title: Quickstart
slug: /guides/quickstart
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
sidebar_position: 2
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
quickstart,
]
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
To get started quickly with Jan, follow the steps below:
## Step 1: Get Jan Desktop
<Tabs>
<TabItem value="mac" label = "Mac" default>
#### Pre-requisites
Before installing Jan, ensure that:
- You have a Mac with an Apple Silicon processor.
- Homebrew and its dependencies are installed (required only if installing Jan via Homebrew).
- Your macOS version is 10.15 or higher.
#### Stable Releases
To download stable releases, go to [Jan.ai](https://jan.ai/) > select **Download for Mac**.
The download should be available as a `.dmg`.
#### Nightly Releases
We provide the Nightly Release so that you can test new features and see what might be coming in a future stable release. Please be aware that there might be bugs!
You can download it from [Jan's Discord](https://discord.gg/FTk2MvZwJH) in the [`#nightly-builds`](https://discord.gg/q8szebnxZ7) channel.
#### Experimental Mode
To enable the experimental mode, go to **Settings** > **Advanced Settings** and toggle **Experimental Mode**.
#### Install with Homebrew
Install Jan with the following Homebrew command:
```sh
brew install --cask jan
```
:::warning
Homebrew package installation is currently limited to **Apple Silicon Macs**, with upcoming support for Windows and Linux.
:::
</TabItem>
<TabItem value = "windows" label = "Windows">
#### Pre-requisites
Ensure that your system meets the following requirements:
- Windows 10 or higher is required to run Jan.
To enable GPU support, you will need:
- NVIDIA GPU with CUDA Toolkit 11.7 or higher
- NVIDIA driver 470.63.01 or higher
#### Stable Releases
To download stable releases, go to [Jan.ai](https://jan.ai/) > select **Download for Windows**.
The download should be available as a `.exe` file.
#### Nightly Releases
We provide the Nightly Release so that you can test new features and see what might be coming in a future stable release. Please be aware that there might be bugs!
You can download it from [Jan's Discord](https://discord.gg/FTk2MvZwJH) in the [`#nightly-builds`](https://discord.gg/q8szebnxZ7) channel.
#### Experimental Mode
To enable the experimental mode, go to **Settings** > **Advanced Settings** and toggle **Experimental Mode**.
#### Default Installation Directory
By default, Jan is installed in the following directory:
```sh
# Default installation directory
C:\Users\{username}\AppData\Local\Programs\Jan
```
:::warning
If you are stuck in a broken build, go to the [Broken Build](/guides/common-error/broken-build) section of Common Errors.
:::
</TabItem>
<TabItem value = "linux" label = "Linux">
#### Pre-requisites
Ensure that your system meets the following requirements:
- glibc 2.27 or higher (check with `ldd --version`)
- gcc 11, g++ 11, cpp 11, or higher
To enable GPU support, you will need:
- NVIDIA GPU with CUDA Toolkit 11.7 or higher
- NVIDIA driver 470.63.01 or higher
#### Stable Releases
To download stable releases, go to [Jan.ai](https://jan.ai/) > select **Download for Linux**.
The download should be available as a `.AppImage` file or a `.deb` file.
#### Nightly Releases
We provide the Nightly Release so that you can test new features and see what might be coming in a future stable release. Please be aware that there might be bugs!
You can download it from [Jan's Discord](https://discord.gg/FTk2MvZwJH) in the [`#nightly-builds`](https://discord.gg/q8szebnxZ7) channel.
#### Experimental Mode
To enable the experimental mode, go to **Settings** > **Advanced Settings** and toggle **Experimental Mode**.
<Tabs groupId = "linux_type">
<TabItem value="linux_main" label = "Linux">
To install Jan, use your distribution's package manager or `dpkg`.
</TabItem>
<TabItem value = "deb_ub" label = "Debian / Ubuntu">
To install Jan, run the following command:
```sh
# Install Jan using dpkg
sudo dpkg -i jan-linux-amd64-{version}.deb
# Install Jan using apt-get
sudo apt-get install ./jan-linux-amd64-{version}.deb
# where jan-linux-amd64-{version}.deb is the path to the Jan package
```
</TabItem>
<TabItem value = "other" label = "Others">
To install Jan, run the following commands:
```sh
# Install Jan using AppImage
chmod +x jan-linux-x86_64-{version}.AppImage
./jan-linux-x86_64-{version}.AppImage
# where jan-linux-x86_64-{version}.AppImage is the path to the Jan package
```
</TabItem>
</Tabs>
:::warning
If you are stuck in a broken build, go to the [Broken Build](/guides/common-error/broken-build) section of Common Errors.
:::
</TabItem>
</Tabs>
## Step 2: Download a Model
Before using Jan, you must download a pre-configured AI model. Jan offers a selection of local AI models for various purposes and requirements, available for download without needing an API key.
1. Go to the **Hub**.
2. Select the model you would like to install. To see a model's details, click the dropdown button.
:::note
Ensure you select the appropriate model size by balancing performance, cost, and resource considerations in line with your task's specific requirements and hardware specifications.
:::
## Step 3: Connect to ChatGPT (Optional)
Jan also supports remote models that require an API key for access. For instance, to use the ChatGPT model with Jan, you must enter your API key to establish a connection by following the steps below:
1. Go to the **Thread** tab.
2. Under the Model dropdown menu, select the ChatGPT model.
3. Fill in your ChatGPT API Key that you can get in your [OpenAI platform](https://platform.openai.com/account/api-keys).
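If the connection fails, you can verify the key itself outside Jan. A sketch using OpenAI's public models endpoint; it requires your real key and network access:

```sh
# Replace sk-... with your actual OpenAI API key.
export OPENAI_API_KEY="sk-..."
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```

A valid key returns a JSON list of models; an invalid key returns an error object.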
## Step 4: Chat with Models
After downloading and configuring your model, you can immediately use it in the **Thread** tab.
## Best Practices
This section outlines best practices for developers, analysts, and AI enthusiasts to enhance their experience with Jan when running AI locally on their computers. Implementing these practices will optimize the performance of AI models.
### Follow the Quickstart Guide
The [quickstart guide](quickstart.mdx) is designed to facilitate a quick setup process. It provides clear instructions and simple steps to get you up and running with Jan quickly. Even if you are inexperienced with AI, the quickstart offers valuable insights and tips to help you get started.
### Setting up the Right Models
Jan offers a range of pre-configured AI models tailored to different tasks and industries. Identify which one aligns with your objectives, considering factors such as:
- Capabilities
- Accuracy
- Processing Speed
:::note
- Some of these factors also depend on your hardware; please see the Hardware Requirements.
- Choosing the right model is important to achieve the best performance.
:::
### Setting up Jan
Ensure that you familiarize yourself with the Jan application. Jan offers advanced settings that you can adjust. These settings may influence how your AI behaves locally. Please see the [Advanced Settings](/guides/advanced/) article for a complete list of Jan's configurations and instructions on how to configure them.
### Integrations
One of Jan's key features is its ability to integrate with many systems. Whether you are incorporating Jan.ai with any open-source LLM provider or other tools, it is important to understand the integration capabilities and limitations.
### Mastering the Prompt Engineering
Prompt engineering is an important aspect of working with AI models to generate the desired outputs. Mastering this skill can significantly enhance the performance and the responses of the AI. Below are some prompt engineering tips:
- Ask the model to adopt a persona
- Be specific; detailed prompts yield more specific answers
- Provide examples, reference text, or context at the beginning
- Use clear and concise language
- Use certain keywords and phrases
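As an illustration only (the persona, context, and task strings are made up), the tips above can be combined into one structured prompt:

```sh
# Hypothetical prompt: persona first, then context, then a specific task.
PERSONA="You are a senior Python developer."
CONTEXT="Context: our service writes JSON log lines to stdout."
TASK="Task: write a function that counts ERROR entries per hour."
PROMPT="${PERSONA}
${CONTEXT}
${TASK}"
printf '%s\n' "$PROMPT"
```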
## Pre-configured Models
To see the full list of Jan's pre-configured models, please see our official GitHub [here](https://github.com/janhq/jan).
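Once a model is running, Jan's local API server exposes an OpenAI-compatible endpoint. A sketch under stated assumptions: the default port is 1337, the server is enabled, and `tinyllama-1.1b` stands in for whatever model ID you have loaded:

```sh
# Hypothetical request to Jan's local, OpenAI-compatible API server.
curl http://localhost:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "tinyllama-1.1b",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```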

View File

@ -0,0 +1,19 @@
---
title: Overview
slug: /guides/providers
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
sidebar_position: 12
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
]
---
Coming Soon

View File

@ -0,0 +1,91 @@
---
title: Installation
sidebar_position: 4
slug: /guides/install/
hide_table_of_contents: true
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
]
---
## Device Compatibility
Jan supports Mac, Windows, and Linux devices.
import DocCardList from "@theme/DocCardList";
<DocCardList />
## Install Server-Side
To install Jan from source, follow the steps below:
### Pre-requisites
Before proceeding with the installation of Jan from source, ensure that the following software versions are installed on your system:
- Node.js version 20.0.0 or higher
- Yarn version 1.22.0 or higher
### Install Jan Development Build
1. Clone the Jan repository from GitHub by using the following command:
```bash
git clone https://github.com/janhq/jan
cd jan
git checkout DESIRED_BRANCH
```
2. Install the required dependencies by using the following Yarn command:
```bash
yarn install
# Build core module
yarn build:core
# Packing base plugins
yarn build:plugins
# Packing uikit
yarn build:uikit
```
3. Run the development server.
```bash
yarn dev
```
This will start the development server and open the desktop app. During this step, you may encounter notifications about installing base plugins. Simply click **OK** and **Next** to continue.
### Install Jan Production Build
1. Clone the Jan repository from GitHub by using the following command:
```bash
git clone https://github.com/janhq/jan
cd jan
```
2. Install the required dependencies by using the following Yarn command:
```bash
yarn install
# Build core module
yarn build:core
# Packing base plugins
yarn build:plugins
# Packing uikit
yarn build:uikit
```
3. Build the production app.
```bash
yarn build
```
This completes the installation process for Jan from source. The production-ready app for macOS can be found in the dist folder.

View File

@ -1,8 +1,9 @@
---
title: Installation
sidebar_position: 2
title: Install on Docker
sidebar_position: 4
slug: /guides/install/server
hide_table_of_contents: true
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
description: A step-by-step guide to install Jan using Docker.
keywords:
[
Jan AI,
@ -13,164 +14,15 @@ keywords:
conversational AI,
no-subscription fee,
large language model,
Install on Docker,
Docker,
Helm,
]
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import installImageURL from './assets/jan-ai-download.png';
<Tabs>
<TabItem value="mac" label = "Mac" default>
### Pre-requisites
Before installing Jan, ensure :
- You have a Mac with an Apple Silicon Processor.
- Homebrew and its dependencies are installed. (for Installing Jan with Homebrew Package)
- Your macOS version is 10.15 or higher.
### Stable Releases
To download stable releases, go to [Jan.ai](https://jan.ai/) > select **Download for Mac**.
The download should be available as a `.dmg`.
### Nightly Releases
We provide the Nightly Release so that you can test new features and see what might be coming in a future stable release. Please be aware that there might be bugs!
You can download it from [Jan's Discord](https://discord.gg/FTk2MvZwJH) in the [`#nightly-builds`](https://discord.gg/q8szebnxZ7) channel.
### Experimental Model
To enable the experimental mode, go to **Settings** > **Advanced Settings** and toggle the **Experimental Mode**
### Install with Homebrew
Install Jan with the following Homebrew command:
```brew
brew install --cask jan
```
:::warning
Homebrew package installation is currently limited to **Apple Silicon Macs**, with upcoming support for Windows and Linux.
:::
</TabItem>
<TabItem value = "windows" label = "Windows">
### Pre-requisites
Ensure that your system meets the following requirements:
- Windows 10 or higher is required to run Jan.
To enable GPU support, you will need:
- NVIDIA GPU with CUDA Toolkit 11.7 or higher
- NVIDIA driver 470.63.01 or higher
### Stable Releases
To download stable releases, go to [Jan.ai](https://jan.ai/) > select **Download for Windows**.
The download should be available as a `.exe` file.
### Nightly Releases
We provide the Nightly Release so that you can test new features and see what might be coming in a future stable release. Please be aware that there might be bugs!
You can download it from [Jan's Discord](https://discord.gg/FTk2MvZwJH) in the [`#nightly-builds`](https://discord.gg/q8szebnxZ7) channel.
### Experimental Model
To enable the experimental mode, go to **Settings** > **Advanced Settings** and toggle the **Experimental Mode**
### Default Installation Directory
By default, Jan is installed in the following directory:
```sh
# Default installation directory
C:\Users\{username}\AppData\Local\Programs\Jan
```
:::warning
If you are stuck in a broken build, go to the [Broken Build](/guides/common-error/broken-build) section of Common Errors.
:::
</TabItem>
<TabItem value = "linux" label = "Linux">
### Pre-requisites
Ensure that your system meets the following requirements:
- glibc 2.27 or higher (check with `ldd --version`)
- gcc 11, g++ 11, cpp 11, or higher, refer to this link for more information.
To enable GPU support, you will need:
- NVIDIA GPU with CUDA Toolkit 11.7 or higher
- NVIDIA driver 470.63.01 or higher
### Stable Releases
To download stable releases, go to [Jan.ai](https://jan.ai/) > select **Download for Linux**.
The download should be available as a `.AppImage` file or a `.deb` file.
### Nightly Releases
We provide the Nightly Release so that you can test new features and see what might be coming in a future stable release. Please be aware that there might be bugs!
You can download it from [Jan's Discord](https://discord.gg/FTk2MvZwJH) in the [`#nightly-builds`](https://discord.gg/q8szebnxZ7) channel.
### Experimental Model
To enable the experimental mode, go to **Settings** > **Advanced Settings** and toggle the **Experimental Mode**
<Tabs groupId = "linux_type">
<TabItem value="linux_main" label = "Linux">
To install Jan, you should use your package manager's install or `dpkg`.
</TabItem>
<TabItem value = "deb_ub" label = "Debian / Ubuntu">
To install Jan, run the following command:
```sh
# Install Jan using dpkg
sudo dpkg -i jan-linux-amd64-{version}.deb
# Install Jan using apt-get
sudo apt-get install ./jan-linux-amd64-{version}.deb
# where jan-linux-amd64-{version}.deb is path to the Jan package
```
</TabItem>
<TabItem value = "other" label = "Others">
To install Jan, run the following commands:
```sh
# Install Jan using AppImage
chmod +x jan-linux-x86_64-{version}.AppImage
./jan-linux-x86_64-{version}.AppImage
# where jan-linux-x86_64-{version}.AppImage is path to the Jan package
```
</TabItem>
</Tabs>
:::warning
If you are stuck in a broken build, go to the [Broken Build](/guides/common-error/broken-build) section of Common Errors.
:::
</TabItem>
<TabItem value="docker" label = "Docker" default>
### Pre-requisites
Ensure that your system meets the following requirements:
- Linux or WSL2 Docker
- Latest Docker Engine and Docker Compose
@ -276,9 +128,6 @@ If you are stuck in a broken build, go to the [Broken Build](/guides/common-erro
:::warning
If you are stuck in a broken build, go to the [Broken Build](/guides/common-error/broken-build/) section of Common Errors.
If you are stuck in a broken build, go to the [Broken Build](/troubleshooting/#broken-build) section of Common Errors.
:::
</TabItem>
</Tabs>

View File

@ -0,0 +1,22 @@
---
title: Install on Linux
sidebar_position: 3
slug: /guides/install/linux
hide_table_of_contents: true
description: A step-by-step guide to install Jan on your Linux.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
Install on Linux,
Linux,
]
---
Coming soon

View File

@ -0,0 +1,23 @@
---
title: Install on Mac
sidebar_position: 1
slug: /guides/install/mac
hide_table_of_contents: true
description: A step-by-step guide to install Jan on your Mac.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
MacOs,
Install on Mac,
Apple devices,
]
---
Coming soon

View File

@ -0,0 +1,24 @@
---
title: Install on Windows
sidebar_position: 2
slug: /guides/install/windows
hide_table_of_contents: true
description: A step-by-step guide to install Jan on your Windows.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
Windows 10,
Windows 11,
Install on Windows,
Microsoft devices,
]
---
Coming soon

View File

@ -1,22 +0,0 @@
---
title: Integrations
slug: /guides/integrations/
sidebar_position: 6
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
build extension,
]
---
import DocCardList from "@theme/DocCardList";
<DocCardList />

View File

@ -0,0 +1,22 @@
---
title: CrewAI
sidebar_position: 19
slug: /integrations/crewai
description: A step-by-step guide on how to integrate Jan with CrewAI.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
Continue integration,
CrewAI integration,
CrewAI
]
---
Coming Soon

View File

@ -1,10 +1,25 @@
---
title: Discord
slug: /integrations/discord
sidebar_position: 5
description: A step-by-step guide on how to integrate Jan with a Discord bot.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
Discord integration,
Discord,
bot,
]
---
## How to Integrate Discord Bot with Jan
## Integrate Discord Bot with Jan
A Discord bot can enhance your Discord server interactions. By integrating Jan with it, you can significantly boost responsiveness and user engagement in your Discord server.

View File

@ -1,11 +1,25 @@
---
title: Open Interpreter
slug: /integrations/interpreter
sidebar_position: 6
description: A step-by-step guide on how to integrate Jan with Open Interpreter.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
Open Interpreter integration,
Open Interpreter,
]
---
## How to Integrate Open Interpreter with Jan
## Integrate Open Interpreter with Jan
[Open Interpreter](https://github.com/KillianLucas/open-interpreter/) lets LLMs run code (Python, Javascript, Shell, and more) locally. You can chat with Open Interpreter through a ChatGPT-like interface in your terminal by running `interpreter` after installing. To integrate Open Interpreter with Jan, follow the steps below:

View File

@ -0,0 +1,19 @@
---
title: Overview
slug: /integrations
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
sidebar_position: 1
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
]
---
Coming Soon

View File

@ -1,11 +1,25 @@
---
title: Raycast
sidebar_position: 4
slug: /integrations/raycast
sidebar_position: 17
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
raycast integration,
Raycast,
]
description: A step-by-step guide on how to integrate Jan with Raycast.
---
## How to Integrate Raycast
## Integrate Raycast with Jan
[Raycast](https://www.raycast.com/) is a productivity tool designed for macOS that enhances workflow efficiency by providing quick access to various tasks and functionalities through a keyboard-driven interface. To integrate Raycast with Jan, follow the steps below:
### Step 1: Download the TinyLlama Model

View File

@ -1,11 +1,25 @@
---
title: OpenRouter
slug: /integrations/openrouter
sidebar_position: 2
description: A step-by-step guide on how to integrate Jan with OpenRouter.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
OpenRouter integration,
OpenRouter
]
---
## How to Integrate OpenRouter with Jan
## Integrate OpenRouter with Jan
[OpenRouter](https://openrouter.ai/docs#quick-start) is a tool that aggregates AI models. Developers can use its API to engage with diverse large language models, generative image models, and generative 3D object models.
@ -16,7 +30,7 @@ To connect Jan with OpenRouter for accessing remote Large Language Models (LLMs)
1. Find your API keys in the [OpenRouter API Key](https://openrouter.ai/keys).
2. Set the OpenRouter API key in `~/jan/engines/openai.json` file.
### Step 2: MModel Configuration
### Step 2: Model Configuration
1. Go to the directory `~/jan/models`.
2. Make a new folder called `openrouter-(modelname)`, like `openrouter-dolphin-mixtral-8x7b`.
@ -50,7 +64,7 @@ To connect Jan with OpenRouter for accessing remote Large Language Models (LLMs)
```
:::note
For more details regarding the `model.json` settings and parameters fields, please see [here](../models/integrate-remote.mdx#modeljson).
For more details regarding the `model.json` settings and parameters fields, please see [here](/guides/providers/remote-server/#modeljson).
:::
### Step 3 : Start the Model

View File

@ -0,0 +1,21 @@
---
title: Unsloth
sidebar_position: 20
slug: /integrations/unsloth
description: A step-by-step guide on how to integrate Jan with Unsloth.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
Continue integration,
Unsloth integration,
]
---
Coming Soon

View File

@ -1,6 +1,7 @@
---
title: Continue
sidebar_position: 1
title: Continue Integration
sidebar_position: 18
slug: /integrations/continue
description: A step-by-step guide on how to integrate Jan with Continue and VS Code.
keywords:
[
@ -17,10 +18,11 @@ keywords:
]
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
## How to Integrate with Continue VS Code
## Integrate with Continue VS Code
[Continue](https://continue.dev/docs/intro) is an open-source autopilot compatible with Visual Studio Code and JetBrains, offering the simplest method to code with any LLM (Large Language Model).

View File

@ -1,7 +1,7 @@
---
title: Common Error
slug: /guides/common-error/
sidebar_position: 8
title: Local Providers
slug: /guides/providers/local
sidebar_position: 13
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
[

View File

@ -1,7 +1,8 @@
---
title: Customize Engine Settings
title: LlamaCPP Extension
slug: /guides/providers/llamacpp
sidebar_position: 1
description: A step-by-step guide to change your engine's settings.
description: A step-by-step guide on how to customize the LlamaCPP extension.
keywords:
[
Jan AI,
@ -12,14 +13,19 @@ keywords:
conversational AI,
no-subscription fee,
large language model,
import-models-manually,
customize-engine-settings,
Llama CPP integration,
LlamaCPP Extension,
]
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
## Overview
[Nitro](https://github.com/janhq/nitro) is an inference server on top of [llama.cpp](https://github.com/ggerganov/llama.cpp). It provides an OpenAI-compatible API, queue, & scaling.
Nitro is the default AI engine downloaded with Jan. There is no additional setup needed.
In this guide, we'll walk you through the process of customizing your engine settings by configuring the `nitro.json` file.
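As a starting point, a `nitro.json` might look like the following. The field names and values here are assumptions for illustration, so check your own file for the exact keys your Jan version uses:

```json
{
  "ctx_len": 2048,
  "ngl": 100,
  "cpu_threads": 1,
  "cont_batching": false,
  "embedding": false
}
```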

View File

@ -1,5 +1,6 @@
---
title: LM Studio
slug: /guides/providers/lmstudio
sidebar_position: 8
description: A step-by-step guide on how to integrate Jan with LM Studio.
keywords:
@ -16,7 +17,7 @@ keywords:
]
---
## How to Integrate LM Studio with Jan
## Integrate LM Studio with Jan
[LM Studio](https://lmstudio.ai/) enables you to explore, download, and run local Large Language Models (LLMs). You can integrate Jan with LM Studio using two methods:
1. Integrate the LM Studio server with Jan UI
@ -81,7 +82,7 @@ Replace `(port)` with your chosen port number. The default is 1234.
}
```
:::note
For more details regarding the `model.json` settings and parameters fields, please see [here](../models/integrate-remote.mdx#modeljson).
For more details regarding the `model.json` settings and parameters fields, please see [here](/guides/providers/remote-server/#modeljson).
:::

View File

@ -1,6 +1,7 @@
---
title: Ollama
sidebar_position: 9
slug: /guides/providers/ollama
sidebar_position: 4
description: A step-by-step guide on how to integrate Jan with Ollama.
keywords:
[
@ -16,7 +17,7 @@ keywords:
]
---
## How to Integrate Ollama with Jan
## Integrate Ollama with Jan
Ollama provides large language models that you can run locally. There are two methods to integrate Ollama with Jan:
1. Integrate Ollama server with Jan.
@ -80,7 +81,7 @@ ollama run <model-name>
}
```
:::note
For more details regarding the `model.json` settings and parameters fields, please see [here](../models/integrate-remote.mdx#modeljson).
For more details regarding the `model.json` settings and parameters fields, please see [here](/guides/providers/remote-server/#modeljson).
:::
### Step 3: Start the Model

View File

@ -1,87 +1,104 @@
---
title: TensorRT-LLM
slug: /guides/providers/tensorrt-llm
---
Users with Nvidia GPUs can get **20-40% faster\* token speeds** on their laptops or desktops by using [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM). A further benefit is that you run FP16 models, which are also more accurate than quantized models.
This guide walks you through how to install Jan's official [TensorRT-LLM Extension](https://github.com/janhq/nitro-tensorrt-llm). This extension uses [Nitro-TensorRT-LLM](https://github.com/janhq/nitro-tensorrt-llm) as the AI engine, instead of the default [Nitro-Llama-CPP](https://github.com/janhq/nitro). It includes an efficient C++ server to natively execute the [TRT-LLM C++ runtime](https://nvidia.github.io/TensorRT-LLM/gpt_runtime.html). It also comes with additional features and performance improvements like OpenAI compatibility, tokenizer improvements, and queues.
*Compared to using LlamaCPP engine.
:::warning
This feature is only available for Windows users. Linux is coming soon.
Additionally, we have only prebuilt a few demo models. You can always build your desired models directly on your machine. [Read here](#build-your-own-tensorrt-models).
:::
## Requirements
- A Windows PC
- Nvidia GPU(s): Ada or Ampere series (i.e. RTX 4000s & 3000s). More will be supported soon.
- 3GB+ of disk space to download TRT-LLM artifacts and a Nitro binary
- Jan v0.4.9+ or Jan v0.4.8-321+ (nightly)
- Nvidia Driver v535+ ([installation guide](https://jan.ai/guides/common-error/not-using-gpu/#1-ensure-gpu-mode-requirements))
- CUDA Toolkit v12.2+ ([installation guide](https://jan.ai/guides/common-error/not-using-gpu/#1-ensure-gpu-mode-requirements))
## Install TensorRT-Extension
1. Go to Settings > Extensions
2. Click install next to the TensorRT-LLM Extension
3. Check that files are correctly downloaded
```sh
ls ~\jan\extensions\@janhq\tensorrt-llm-extension\dist\bin
# Your Extension Folder should now include `nitro.exe`, among other artifacts needed to run TRT-LLM
```
## Download a Compatible Model
TensorRT-LLM can only run models in `TensorRT` format. These models, aka "TensorRT Engines", are prebuilt specifically for each target OS+GPU architecture.
We offer a handful of precompiled models for Ampere and Ada cards that you can immediately download and play with:
1. Restart the application and go to the Hub
2. Look for models with the `TensorRT-LLM` label in the recommended models list. Click download. This step might take some time. 🙏
![image](https://hackmd.io/_uploads/rJewrEgRp.png)
3. Click use and start chatting!
4. You may need to allow Nitro in your network
![alt text](image.png)
:::warning
If you are using our nightly builds, you may have to reinstall the TensorRT-LLM extension each time you update the app. We're working on better extension lifecycles - stay tuned.
:::
## Configure Settings
You can customize the default parameters for how Jan runs TensorRT-LLM.
:::info
coming soon
:::
## Troubleshooting
### Incompatible Extension vs Engine versions
For now, the model versions are pinned to the extension versions.
### Uninstall Extension
1. Quit the app
2. Go to Settings > Extensions
3. Delete the entire Extensions folder.
4. Reopen the app, only the default extensions should be restored.
### Install Nitro-TensorRT-LLM manually
To manually build the artifacts needed to run the server and TensorRT-LLM, you can reference the source code. [Read here](https://github.com/janhq/nitro-tensorrt-llm?tab=readme-ov-file#quickstart).
### Build your own TensorRT models
:::info
coming soon
:::
---
title: TensorRT-LLM Extension
slug: /guides/providers/tensorrt-llm
sidebar_position: 2
description: A step-by-step guide on how to customize the TensorRT-LLM extension.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
TensorRT-LLM Extension,
TensorRT,
tensorRT,
extension,
]
---
## Overview
Users with Nvidia GPUs can get **20-40% faster token speeds** compared to using LlamaCPP engine on their laptop or desktops by using [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM). The greater implication is that you are running FP16, which is also more accurate than quantized models.
## TensorRT-LLM Extension
This guide walks you through how to install Jan's official [TensorRT-LLM Extension](https://github.com/janhq/nitro-tensorrt-llm). This extension uses [Nitro-TensorRT-LLM](https://github.com/janhq/nitro-tensorrt-llm) as the AI engine, instead of the default [Nitro-Llama-CPP](https://github.com/janhq/nitro). It includes an efficient C++ server to natively execute the [TRT-LLM C++ runtime](https://nvidia.github.io/TensorRT-LLM/gpt_runtime.html). It also comes with additional features and performance improvements such as OpenAI compatibility, tokenizer improvements, and queues.
:::warning
This feature is only available for Windows users. Linux is coming soon.
Additionally, we have only prebuilt a few demo models. You can always build your desired models directly on your machine. [Read here](#build-your-own-tensorrt-models).
:::
### Pre-requisites
- A Windows PC
- Nvidia GPU(s): Ada or Ampere series (i.e. RTX 4000s & 3000s). More will be supported soon.
- 3GB+ of disk space to download TRT-LLM artifacts and a Nitro binary
- Jan v0.4.9+ or Jan v0.4.8-321+ (nightly)
- Nvidia Driver v535+ (For installation guide please see [here](/troubleshooting/#1-ensure-gpu-mode-requirements))
- CUDA Toolkit v12.2+ (For installation guide please see [here](/troubleshooting/#1-ensure-gpu-mode-requirements))
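You can quickly verify the driver and toolkit prerequisites from a terminal. A minimal sketch, assuming `nvidia-smi` and `nvcc` are on your `PATH` once installed:

```sh
# Check the NVIDIA driver version (should be 535 or newer).
# The fallback message is printed if the command is not on your PATH.
command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi --query-gpu=driver_version --format=csv,noheader || echo "nvidia-smi not found"
# Check the CUDA toolkit version (should report release 12.2 or newer).
command -v nvcc >/dev/null 2>&1 && nvcc --version | grep -i release || echo "nvcc not found"
```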
### Step 1: Install TensorRT-Extension
1. Go to Settings > Extensions
2. Click install next to the TensorRT-LLM Extension
3. Check that files are correctly downloaded
```sh
ls ~\jan\extensions\@janhq\tensorrt-llm-extension\dist\bin
# Your Extension Folder should now include `nitro.exe`, among other artifacts needed to run TRT-LLM
```
### Step 2: Download a Compatible Model
TensorRT-LLM can only run models in `TensorRT` format. These models, aka "TensorRT Engines", are prebuilt specifically for each target OS+GPU architecture.
We offer a handful of precompiled models for Ampere and Ada cards that you can immediately download and play with:
1. Restart the application and go to the Hub
2. Look for models with the `TensorRT-LLM` label in the recommended models list. Click download. This step might take some time. 🙏
![image](https://hackmd.io/_uploads/rJewrEgRp.png)
3. Click use and start chatting!
4. You may need to allow Nitro in your network
![alt text](./assets/image.png)
:::warning
If you are using our nightly builds, you may have to reinstall the TensorRT-LLM extension each time you update the app. We're working on better extension lifecycles - stay tuned.
:::
### Step 3: Configure Settings
You can customize the default parameters for how Jan runs TensorRT-LLM.
:::info
coming soon
:::
## Troubleshooting
### Incompatible Extension vs Engine versions
For now, the model versions are pinned to the extension versions.
### Uninstall Extension
1. Quit the app
2. Go to Settings > Extensions
3. Delete the entire Extensions folder.
4. Reopen the app, only the default extensions should be restored.
### Install Nitro-TensorRT-LLM manually
To manually build the artifacts needed to run the server and TensorRT-LLM, you can reference the source code. [Read here](https://github.com/janhq/nitro-tensorrt-llm?tab=readme-ov-file#quickstart).
### Build your own TensorRT models
:::info
coming soon
:::

View File

@ -1,70 +0,0 @@
---
title: Pre-configured Models
sidebar_position: 3
---
## Overview
Jan provides various pre-configured AI models with different capabilities. Please see the following list for details.
| Model | Description |
| ----- | ----------- |
| Mistral Instruct 7B Q4 | A model designed for a comprehensive understanding through training on extensive internet data |
| OpenHermes Neural 7B Q4 | A merged model using the TIES method. It performs well in various benchmarks |
| Stealth 7B Q4 | This is a new experimental family designed to enhance Mathematical and Logical abilities |
| Trinity-v1.2 7B Q4 | An experimental model merge using the Slerp method |
| Openchat-3.5 7B Q4 | An open-source model whose performance surpasses that of ChatGPT-3.5 and Grok-1 across various benchmarks |
| Wizard Coder Python 13B Q5 | A Python coding model that demonstrates high proficiency in specific domains like coding and mathematics |
| OpenAI GPT 3.5 Turbo | The latest GPT-3.5 Turbo model with higher accuracy at responding in requested formats and a fix for a bug that caused a text encoding issue for non-English language function calls |
| OpenAI GPT 3.5 Turbo 16k 0613 | A Snapshot model of gpt-3.5-16k-turbo from June 13th 2023 |
| OpenAI GPT 4 | The latest GPT-4 model intended to reduce cases of “laziness” where the model doesn't complete a task |
| TinyLlama Chat 1.1B Q4 | A tiny model with only 1.1B parameters. It's a good model for less powerful computers |
| Deepseek Coder 1.3B Q8 | A model that excels in project-level code completion with advanced capabilities across multiple programming languages |
| Phi-2 3B Q8 | A 2.7B model, excelling in common sense and logical reasoning benchmarks, trained with synthetic texts and filtered websites |
| Llama 2 Chat 7B Q4 | A model that is specifically designed for a comprehensive understanding through training on extensive internet data |
| CodeNinja 7B Q4 | A model that is good for coding tasks and can handle various languages, including Python, C, C++, Rust, Java, JavaScript, and more |
| Noromaid 7B Q5 | A model designed for role-playing with human-like behavior. |
| Starling alpha 7B Q4 | An upgrade of Openchat 3.5 using RLAIF, is good at various benchmarks, especially with GPT-4 judging its performance |
| Yarn Mistral 7B Q4 | A long-context language model that supports a 128k token context window |
| LlaVa 1.5 7B Q5 K | A model that brings vision understanding to Jan |
| BakLlava 1 | A model that brings vision understanding to Jan |
| Solar Slerp 10.7B Q4 | A model that uses the Slerp merge method from SOLAR Instruct and Pandora-v1 |
| LlaVa 1.5 13B Q5 K | A model that brings vision understanding to Jan |
| Deepseek Coder 33B Q5 | A model that excels in project-level code completion with advanced capabilities across multiple programming languages |
| Phind 34B Q5 | A multi-lingual model that is fine-tuned on 1.5B tokens of high-quality programming data, excels in various programming languages, and is designed to be steerable and user-friendly |
| Yi 34B Q5 | A specialized chat model known for its diverse and creative responses that excels across various NLP tasks and benchmarks |
| Capybara 200k 34B Q5 | A long context length model that supports 200K tokens |
| Dolphin 8x7B Q4 | An uncensored model built on Mixtral-8x7b and it is good at programming tasks |
| Mixtral 8x7B Instruct Q4 | A pre-trained generative Sparse Mixture of Experts, which outperforms 70B models on most benchmarks |
| Tulu 2 70B Q4 | A strong model alternative to Llama 2 70b Chat to act as helpful assistants |
| Llama 2 Chat 70B Q4 | A model that is specifically designed for a comprehensive understanding through training on extensive internet data |
:::note
OpenAI GPT models require a subscription to use them further. To learn more, [click here](https://openai.com/pricing).
:::
## Model details
| Model | Author | Model ID | Format | Size |
| ----- | ------ | -------- | ------ | ---- |
| Mistral Instruct 7B Q4 | MistralAI, The Bloke | `mistral-ins-7b-q4` | **GGUF** | 4.07GB |
| OpenHermes Neural 7B Q4 | Intel, Jan | `openhermes-neural-7b` | **GGUF** | 4.07GB |
| Stealth 7B Q4 | Jan | `stealth-v1.2-7b` | **GGUF** | 4.07GB |
| Trinity-v1.2 7B Q4 | Jan | `trinity-v1.2-7b` | **GGUF** | 4.07GB |
| Openchat-3.5 7B Q4 | Openchat | `openchat-3.5-7b` | **GGUF** | 4.07GB |
| Wizard Coder Python 13B Q5 | WizardLM, The Bloke | `wizardcoder-13b` | **GGUF** | 7.33GB |
| OpenAI GPT 3.5 Turbo | OpenAI | `gpt-3.5-turbo` | **GGUF** | - |
| OpenAI GPT 3.5 Turbo 16k 0613 | OpenAI | `gpt-3.5-turbo-16k-0613` | **GGUF** | - |
| OpenAI GPT 4 | OpenAI | `gpt-4` | **GGUF** | - |
| TinyLlama Chat 1.1B Q4 | TinyLlama | `tinyllama-1.1b` | **GGUF** | 638.01MB |
| Deepseek Coder 1.3B Q8 | Deepseek, The Bloke | `deepseek-coder-1.3b` | **GGUF** | 1.33GB |
| Phi-2 3B Q8 | Microsoft | `phi-2-3b` | **GGUF** | 2.76GB |
| Llama 2 Chat 7B Q4 | MetaAI, The Bloke | `llama2-chat-7b-q4` | **GGUF** | 3.80GB |
| CodeNinja 7B Q4 | Beowolx | `codeninja-1.0-7b` | **GGUF** | 4.07GB |
| Noromaid 7B Q5 | NeverSleep | `noromaid-7b` | **GGUF** | 4.07GB |
| Starling alpha 7B Q4 | Berkeley-nest, The Bloke | `starling-7b` | **GGUF** | 4.07GB |
| Yarn Mistral 7B Q4 | NousResearch, The Bloke | `yarn-mistral-7b` | **GGUF** | 4.07GB |
| LlaVa 1.5 7B Q5 K | Mys | `llava-1.5-7b-q5` | **GGUF** | 5.03GB |
| BakLlava 1 | Mys | `bakllava-1` | **GGUF** | 5.36GB |

View File

@ -1,22 +0,0 @@
---
title: Models Setup
slug: /guides/models-setup/
sidebar_position: 5
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
build extension,
]
---
import DocCardList from "@theme/DocCardList";
<DocCardList />

Binary file not shown.

Before

Width:  |  Height:  |  Size: 1.3 MiB

View File

@ -1,257 +0,0 @@
---
title: Manual Import
sidebar_position: 3
description: A step-by-step guide on how to perform manual import feature.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
import-models-manually,
absolute-filepath,
]
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import janModel from './assets/jan-model-hub.png';
This guide shows you how to import models manually. As an example, we use a GGUF model from [HuggingFace](https://huggingface.co/), our latest model [Trinity](https://huggingface.co/janhq/trinity-v1-GGUF).
## Newer versions - nightly versions and v0.4.8+
Starting with version 0.4.8, Jan has introduced the capability to import models using a UI drag-and-drop method. This allows you to import models directly into the Jan application UI by dragging the `.GGUF` file from your directory into the Jan application.
### 1. Get the Model
Download the model from HuggingFace in the `.GGUF` format.
### 2. Import the Model
1. Open your Jan application.
2. Click the **Import Model** button.
3. Open your downloaded model.
4. Drag the `.GGUF` file from your directory into the Jan **Import Model** window.
### 3. Done!
If your model doesn't show up in the **Model Selector** in conversations, **restart the app** or contact us via our [Discord community](https://discord.gg/Dt7MxDyNNZ).
## Newer versions - nightly versions and v0.4.7+
Starting from version 0.4.7, Jan has introduced the capability to import models using an absolute file path. It allows you to import models from any directory on your computer.
### 1. Get the Absolute Filepath of the Model
After downloading the model from HuggingFace, get the absolute filepath of the model.
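On macOS or Linux, one way to get the absolute filepath is the `realpath` utility (the download location below is hypothetical; on Windows you can copy the path from File Explorer instead):

```sh
# Hypothetical example: print the absolute path of a downloaded model file.
# Replace the path with wherever your .gguf file actually lives.
MODEL=~/Downloads/tinyllama.gguf
realpath -m "$MODEL"  # -m (GNU coreutils) resolves the path even if the file does not exist yet
```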
### 2. Configure the Model JSON
1. Navigate to the `~/jan/models` folder.
2. Create a folder named `<modelname>`, for example, `tinyllama`.
3. Create a `model.json` file inside the folder, including the following configurations:
- Ensure the `id` property matches the folder name you created.
- Ensure the `url` property is either the direct binary download link ending in `.gguf` or, as of v0.4.7, the absolute filepath of the model file.
- Ensure the `engine` property is set to `nitro`.
```json
{
"sources": [
{
"filename": "tinyllama.gguf",
// highlight-next-line
"url": "<absolute-filepath-of-the-model-file>"
}
],
"id": "tinyllama-1.1b",
"object": "model",
"name": "(Absolute Path) TinyLlama Chat 1.1B Q4",
"version": "1.0",
"description": "TinyLlama is a tiny model with only 1.1B. It's a good model for less powerful computers.",
"format": "gguf",
"settings": {
"ctx_len": 4096,
"prompt_template": "<|system|>\n{system_message}<|user|>\n{prompt}<|assistant|>",
"llama_model_path": "tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf"
},
"parameters": {
"temperature": 0.7,
"top_p": 0.95,
"stream": true,
"max_tokens": 2048,
"stop": [],
"frequency_penalty": 0,
"presence_penalty": 0
},
"metadata": {
"author": "TinyLlama",
"tags": ["Tiny", "Foundation Model"],
"size": 669000000
},
"engine": "nitro"
}
```
:::warning
- If you are using Windows, you need to use double backslashes in the `url` property, for example: `C:\\Users\\username\\filename.gguf`.
:::
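For example, a `sources` entry on Windows would look like this (the path is hypothetical; substitute your own):

```json
"sources": [
  {
    "filename": "tinyllama.gguf",
    "url": "C:\\Users\\username\\Downloads\\tinyllama.gguf"
  }
]
```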
### 3. Done!
If your model doesn't show up in the **Model Selector** in conversations, **restart the app** or contact us via our [Discord community](https://discord.gg/Dt7MxDyNNZ).
## Newer versions - nightly versions and v0.4.4+
### 1. Create a Model Folder
1. Navigate to the `App Settings` > `Advanced` > `Open App Directory` > `~/jan/models` folder.
<Tabs groupId = "operating-systems" >
<TabItem value="mac" label = "MacOS" default>
```sh
cd ~/jan/models
```
</TabItem>
<TabItem value = "windows" label = "Windows" default>
```sh
C:/Users/<your_user_name>/jan/models
```
</TabItem>
<TabItem value = "linux" label = "Linux" default>
```sh
cd ~/jan/models
```
</TabItem>
</Tabs>
2. In the `models` folder, create a folder with the name of the model.
```sh
mkdir trinity-v1-7b
```
### 2. Drag & Drop the Model
Drag and drop your model binary into this folder, ensuring that the file name (e.g. `modelname.gguf`) matches the folder name (e.g. `models/modelname`).
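The expected layout from the two steps above can be sketched as follows; `JAN_MODELS` and the `trinity-v1-7b` name are placeholders for your own data folder and model:

```sh
# Create the model folder inside Jan's models directory.
# JAN_MODELS defaults to ~/jan/models; override it if your data folder differs.
JAN_MODELS="${JAN_MODELS:-$HOME/jan/models}"
mkdir -p "$JAN_MODELS/trinity-v1-7b"
# The model binary then goes at:
#   $JAN_MODELS/trinity-v1-7b/trinity-v1-7b.gguf
ls "$JAN_MODELS"
```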
### 3. Done!
If your model doesn't show up in the **Model Selector** in conversations, **restart the app** or contact us via our [Discord community](https://discord.gg/Dt7MxDyNNZ).
## Older versions - before v0.4.4
### 1. Create a Model Folder
1. Navigate to the `App Settings` > `Advanced` > `Open App Directory` > `~/jan/models` folder.
<Tabs groupId = "operating-systems" >
<TabItem value="mac" label = "MacOS" default>
```sh
cd ~/jan/models
```
</TabItem>
<TabItem value = "windows" label = "Windows" default>
```sh
C:/Users/<your_user_name>/jan/models
```
</TabItem>
<TabItem value = "linux" label = "Linux" default>
```sh
cd ~/jan/models
```
</TabItem>
</Tabs>
2. In the `models` folder, create a folder with the name of the model.
```sh
mkdir trinity-v1-7b
```
### 2. Create a Model JSON
Jan follows a folder-based, [standard model template](https://jan.ai/docs/engineering/models/) called a `model.json` to persist the model configurations on your local filesystem.
This means that you can easily reconfigure your models, export them, and share your preferences transparently.
<Tabs groupId = "operating-systems" >
<TabItem value="mac" label = "MacOS" default>
```sh
cd trinity-v1-7b
touch model.json
```
</TabItem>
<TabItem value = "windows" label = "Windows" default>
```sh
cd trinity-v1-7b
echo {} > model.json
```
</TabItem>
<TabItem value = "linux" label = "Linux" default>
```sh
cd trinity-v1-7b
touch model.json
```
</TabItem>
</Tabs>
To update `model.json`:
- Match `id` with folder name.
- Ensure GGUF filename matches `id`.
- Set `source.url` to direct download link ending in `.gguf`. In HuggingFace, you can find the direct links in the `Files and versions` tab.
- Verify that you are using the correct `prompt_template`. This is usually provided in the HuggingFace model's description page.
```json title="model.json"
{
"sources": [
{
"filename": "trinity-v1.Q4_K_M.gguf",
"url": "https://huggingface.co/janhq/trinity-v1-GGUF/resolve/main/trinity-v1.Q4_K_M.gguf"
}
],
"id": "trinity-v1-7b",
"object": "model",
"name": "Trinity-v1 7B Q4",
"version": "1.0",
"description": "Trinity is an experimental model merge of GreenNodeLM & LeoScorpius using the Slerp method. Recommended for daily assistance purposes.",
"format": "gguf",
"settings": {
"ctx_len": 4096,
"prompt_template": "{system_message}\n### Instruction:\n{prompt}\n### Response:",
"llama_model_path": "trinity-v1.Q4_K_M.gguf"
},
"parameters": {
"max_tokens": 4096
},
"metadata": {
"author": "Jan",
"tags": ["7B", "Merged"],
"size": 4370000000
},
"engine": "nitro"
}
```
:::note
For more details regarding the `model.json` settings and parameters fields, please see [here](/docs/guides/models/integrate-remote.mdx#modeljson).
:::
### 3. Download the Model
1. Restart Jan and navigate to the Hub.
2. Locate your model.
3. Click **Download** button to download the model binary.
:::info[Assistance and Support]
If you have questions, please join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.
:::

View File

@ -1,8 +0,0 @@
---
title: Inference Providers
slug: /guides/providers
---
import DocCardList from "@theme/DocCardList";
<DocCardList />

View File

@ -1,10 +0,0 @@
---
title: llama.cpp
slug: /guides/providers/llama-cpp
---
## Overview
[Nitro](https://github.com/janhq/nitro) is an inference server on top of [llama.cpp](https://github.com/ggerganov/llama.cpp). It provides an OpenAI-compatible API, queueing, and scaling.
Nitro is the default AI engine downloaded with Jan. There is no additional setup needed.

View File

@ -1,68 +0,0 @@
---
title: Quickstart
slug: /guides
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
sidebar_position: 1
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
]
---
import installImageURL from './assets/jan-ai-quickstart.png';
import flow from './assets/quick.png';
# Quickstart
{/* After finish installing, here are steps for using Jan
## Run Jan
<Tabs>
<TabItem value="mac" label="MacOS" default>
1. Search Jan in the Dock and run the program.
</TabItem>
<TabItem value="windows" label="Windows" default>
1. Search Jan in the Start menu and run the program.
</TabItem>
<TabItem value="linux" label="Linux" default>
1. Go to the Jan directory and run the program.
</TabItem>
</Tabs>
2. After you run Jan, the program will take you to the Threads window, with list of threads and each thread is a chatting box between you and the AI model.
3. Go to the **Hub** under the **Thread** section and select the AI model that you want to use. For more info, go to the [Using Models](category/using-models) section.
4. A new thread will be added. You can use Jan in the thread with the AI model that you selected before. */}
To get started quickly with Jan, follow the steps below:
### Step 1: Install Jan
Go to [Jan.ai](https://jan.ai/) > Select your operating system > Install the program.
:::note
To learn more about system requirements for your operating system, go to [Installation guide](/guides/install).
:::
### Step 2: Select AI Model
Before using Jan, you need to select an AI model based on your hardware capabilities and specifications. Each model has its own purpose, capabilities, and requirements. To select an AI model:
Go to the **Hub** > select the models that you would like to install.
:::note
For more info, go to [list of supported models](/guides/models-list/).
:::
### Step 3: Use the AI Model
After you install the AI model, you can use it immediately under the **Thread** tab.

View File

@ -1,7 +1,7 @@
---
title: Error Codes
slug: /guides/error-codes/
sidebar_position: 7
title: Remote Providers
slug: /guides/providers/remote
sidebar_position: 14
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
[


View File

@ -0,0 +1,21 @@
---
title: Claude
sidebar_position: 6
slug: /guides/providers/claude
description: A step-by-step guide on how to integrate Jan with Claude.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
Claude integration,
claude,
]
---
Coming Soon

View File

@ -1,7 +1,7 @@
---
title: Groq
sidebar_position: 10
slug: /guides/integration/groq
sidebar_position: 5
slug: /guides/providers/groq
description: Learn how to integrate Groq API with Jan for enhanced functionality.
keywords:
[

View File

@ -1,6 +1,7 @@
---
title: Mistral AI
sidebar_position: 7
sidebar_position: 4
slug: /guides/providers/mistral
description: A step-by-step guide on how to integrate Jan with Mistral AI.
keywords:
[
@ -76,7 +77,7 @@ This tutorial demonstrates integrating Mistral AI with Jan using the API.
```
:::note
- For more details regarding the `model.json` settings and parameters fields, please see [here](../models/integrate-remote.mdx#modeljson).
- For more details regarding the `model.json` settings and parameters fields, please see [here](/guides/providers/remote-server/#modeljson).
- Mistral AI offers various endpoints. Refer to their [endpoint documentation](https://docs.mistral.ai/platform/endpoints/) to select the one that fits your requirements. Here, we use the `mistral-tiny` model as an example.
:::

View File

@ -1,6 +1,7 @@
---
title: Azure OpenAI
sidebar_position: 3
sidebar_position: 2
slug: /guides/providers/openai
description: A step-by-step guide on how to integrate Jan with Azure OpenAI.
keywords:
[
@ -17,7 +18,7 @@ keywords:
]
---
## How to Integrate Azure OpenAI with Jan
## Integrate Azure OpenAI with Jan
The [Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview?source=docs) offers robust APIs, making it simple for you to incorporate OpenAI's language models into your applications. You can integrate Azure OpenAI with Jan by following the steps below:
@ -71,7 +72,7 @@ The [Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/ai-services/o
```
:::note
For more details regarding the `model.json` settings and parameters fields, please see [here](../models/integrate-remote.mdx#modeljson).
For more details regarding the `model.json` settings and parameters fields, please see [here](/guides/providers/remote-server/#modeljson).
:::
### Step 3: Start the Model

View File

@ -1,6 +1,7 @@
---
title: Remote Server Integration
sidebar_position: 2
sidebar_position: 1
slug: /guides/providers/remote-server
description: A step-by-step guide on how to set up Jan to connect with any remote or local API server.
keywords:
[

View File

@ -0,0 +1,434 @@
---
title: Troubleshooting
slug: /troubleshooting
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
sidebar_position: 21
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
troubleshooting,
error codes,
broken build,
something amiss,
unexpected token,
undefined issue,
permission denied,
]
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
## Broken Build
Follow the steps below to resolve the issue where Jan is stuck in a broken build after installation.
<Tabs>
<TabItem value="mac" label="Mac" default>
#### 1. Uninstall Jan
Delete Jan from your `/Applications` folder.
#### 2. Delete Application Data, Cache, and User Data
```zsh
# Step 1: Delete the application data
## Newer versions
rm -rf ~/Library/Application\ Support/jan
## Versions 0.2.0 and older
rm -rf ~/Library/Application\ Support/jan-electron
# Step 2: Clear application cache
rm -rf ~/Library/Caches/jan*
# Step 3: Remove all user data
rm -rf ~/jan
```
#### 3. Additional Step for Versions Before 0.4.2
If you are using a version before `0.4.2`, you need to run the following commands:
```zsh
ps aux | grep nitro
# Looks for processes like `nitro` and `nitro_arm_64`, and kill them one by one by process ID
kill -9 <PID>
```
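As an alternative sketch, `pgrep`/`pkill` (shipped with macOS and most Linux distributions) can find and terminate the processes without grepping `ps` output:

```sh
# List any running nitro processes by PID; -f matches the full command line.
pgrep -f nitro || echo "no nitro process found"
# Then terminate them in one go (uncomment after reviewing the list above):
# pkill -9 -f nitro
```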
#### 4. Download the Latest Version
Download the latest version of Jan from our [homepage](https://jan.ai/).
</TabItem>
<TabItem value="windows" label="Windows">
#### 1. Uninstall Jan
To uninstall Jan on Windows, use the [Windows Control Panel](https://support.microsoft.com/en-us/windows/uninstall-or-remove-apps-and-programs-in-windows-4b55f974-2cc6-2d2b-d092-5905080eaf98).
#### 2. Delete Application Data, Cache, and User Data
```sh
# You can delete the `/Jan` directory in Windows's AppData Directory by visiting the following path `%APPDATA%\Jan`
cd C:\Users\%USERNAME%\AppData\Roaming
rmdir /S jan
```
#### 3. Additional Step for Versions Before 0.4.2
If you are using a version before `0.4.2`, you need to run the following commands:
```sh
# Find the process ID (PID) of the nitro process by filtering the list by process name
tasklist | findstr "nitro"
# Once you have the PID of the process you want to terminate, run the `taskkill`
taskkill /F /PID <PID>
```
#### 4. Download the Latest Version
Download the latest version of Jan from our [homepage](https://jan.ai/).
</TabItem>
<TabItem value="linux" label="Linux">
#### 1. Uninstall Jan
<Tabs groupId = "linux_type">
<TabItem value="linux_main" label = "Linux">
To uninstall Jan, you should use your package manager's uninstall or remove option.
This will return your system to its state before the installation of Jan.
This method can also reset all settings if you are experiencing any issues with Jan.
</TabItem>
<TabItem value = "deb_ub" label = "Debian / Ubuntu">
To uninstall Jan, run the following command.
```sh
sudo apt-get remove jan
# where jan is the name of the Jan package
```
This will return your system to its state before the installation of Jan.
This method can also be used to reset all settings if you are experiencing any issues with Jan.
</TabItem>
<TabItem value = "other" label = "Others">
To uninstall Jan, delete the `.AppImage` file.
If you wish to completely remove all user data associated with Jan after uninstallation, you can delete the user data at `~/jan`.
This method can also reset all settings if you are experiencing any issues with Jan.
</TabItem>
</Tabs>
#### 2. Delete Application Data, Cache, and User Data
```sh
# You can delete the user data folders located at the following `~/jan`
rm -rf ~/jan
```
#### 3. Additional Step for Versions Before 0.4.2
If you are using a version before `0.4.2`, you need to run the following commands:
```zsh
ps aux | grep nitro
# Looks for processes like `nitro` and `nitro_arm_64`, and kill them one by one by process ID
kill -9 <PID>
```
#### 4. Download the Latest Version
Download the latest version of Jan from our [homepage](https://jan.ai/).
</TabItem>
</Tabs>
By following these steps, you can cleanly uninstall and reinstall Jan, ensuring a smooth and error-free experience with the latest version.
:::note
Before reinstalling Jan, ensure it's completely removed from all shared spaces if it's installed on multiple user accounts on your device.
:::
## Troubleshooting NVIDIA GPU
Follow the steps below to resolve issues where the Jan app does not utilize the NVIDIA GPU on Windows and Linux systems.
#### 1. Ensure GPU Mode Requirements
<Tabs>
<TabItem value="windows" label="Windows">
##### NVIDIA Driver
- Install an [NVIDIA Driver](https://www.nvidia.com/Download/index.aspx) supporting CUDA 11.7 or higher.
- Use the following command to verify the installation:
```sh
nvidia-smi
```
##### CUDA Toolkit
- Install a [CUDA toolkit](https://developer.nvidia.com/cuda-downloads) compatible with your NVIDIA driver.
- Use the following command to verify the installation:
```sh
nvcc --version
```
</TabItem>
<TabItem value="linux" label="Linux">
##### NVIDIA Driver
- Install an [NVIDIA Driver](https://www.nvidia.com/Download/index.aspx) supporting CUDA 11.7 or higher.
- Use the following command to verify the installation:
```sh
nvidia-smi
```
##### CUDA Toolkit
- Install a [CUDA toolkit](https://developer.nvidia.com/cuda-downloads) compatible with your NVIDIA driver.
- Use the following command to verify the installation:
```sh
nvcc --version
```
##### Linux Specifics
- Ensure that `gcc-11`, `g++-11`, `cpp-11`, or higher is installed.
- See [instructions](https://gcc.gnu.org/projects/cxx-status.html#cxx17) for Ubuntu installation.
- **Post-Installation Actions**: Add CUDA libraries to `LD_LIBRARY_PATH`.
- Follow the [Post-installation Actions](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions) instructions.
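The post-installation step amounts to adding lines like the following to your shell profile — a sketch assuming CUDA 12.2 installed under `/usr/local/cuda-12.2` (adjust the version to match your install):

```shell
# Make the CUDA compiler and runtime libraries visible to Jan
# (path assumes CUDA 12.2; change it to your installed version)
export PATH="/usr/local/cuda-12.2/bin${PATH:+:${PATH}}"
export LD_LIBRARY_PATH="/usr/local/cuda-12.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}"
```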
</TabItem>
</Tabs>
#### 2. Switch to GPU Mode
Jan defaults to CPU mode but automatically switches to GPU mode if your system supports it, selecting the GPU with the highest VRAM. Check this setting in `Settings` > `Advanced Settings`.
##### Troubleshooting Tips
If GPU mode isn't enabled by default:
1. Confirm that you have installed an NVIDIA driver supporting CUDA 11.7 or higher. Refer to [CUDA compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/index.html#binary-compatibility__table-toolkit-driver).
2. Ensure compatibility of the CUDA toolkit with your NVIDIA driver. Refer to [CUDA compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/index.html#binary-compatibility__table-toolkit-driver).
3. For Linux, add CUDA's `.so` libraries to the `LD_LIBRARY_PATH`. For Windows, ensure that CUDA's `.dll` libraries are in the PATH. Refer to [Windows setup](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html#environment-setup).
#### 3. Check GPU Settings
1. Navigate to `Settings` > `Advanced Settings` > `Jan Data Folder` to access GPU settings.
2. Open the `settings.json` file in the `settings` folder. Here's an example:
```json title="~/jan/settings/settings.json"
{
"notify": true,
"run_mode": "gpu",
"nvidia_driver": {
"exist": true,
"version": "531.18"
},
"cuda": {
"exist": true,
"version": "12"
},
"gpus": [
{
"id": "0",
"vram": "12282"
},
{
"id": "1",
"vram": "6144"
},
{
"id": "2",
"vram": "6144"
}
],
"gpu_highest_vram": "0"
}
```
#### 4. Restart Jan
Restart the Jan application to apply the changes.
##### Troubleshooting Tips
- Ensure the `nvidia_driver` and `cuda` fields indicate that the software is installed.
- If the `gpus` field is empty or does not list your GPU, check your NVIDIA driver and CUDA toolkit installations.
- For further assistance, share the `settings.json` file.
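The checks above can be scripted with `grep`. The JSON below is an inline sample for illustration only — in practice, point `SETTINGS` at `~/jan/settings/settings.json`:

```shell
# Sample settings.json written to a temp file for illustration
SETTINGS=$(mktemp)
cat > "$SETTINGS" <<'EOF'
{ "run_mode": "gpu",
  "nvidia_driver": { "exist": true, "version": "531.18" },
  "cuda": { "exist": true, "version": "12" },
  "gpus": [ { "id": "0", "vram": "12282" } ] }
EOF

# Report whether GPU mode is on and whether any GPU was detected
grep -q '"run_mode": "gpu"' "$SETTINGS" && mode=gpu || mode=cpu
grep -q '"id"' "$SETTINGS" && gpus=detected || gpus=none
echo "run_mode=$mode gpus=$gpus"
```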
#### Tested Configurations
- **Windows 11 Pro 64-bit:**
- GPU: NVIDIA GeForce RTX 4070ti
- CUDA: 12.2
- NVIDIA driver: 531.18 (Bare metal)
- **Ubuntu 22.04 LTS:**
- GPU: NVIDIA GeForce RTX 4070ti
- CUDA: 12.2
- NVIDIA driver: 545 (Bare metal)
- **Ubuntu 20.04 LTS:**
- GPU: NVIDIA GeForce GTX 1660ti
- CUDA: 12.1
- NVIDIA driver: 535 (Proxmox VM passthrough GPU)
- **Ubuntu 18.04 LTS:**
- GPU: NVIDIA GeForce GTX 1660ti
- CUDA: 12.1
- NVIDIA driver: 535 (Proxmox VM passthrough GPU)
#### Common Issues and Solutions
1. If the issue persists, try installing the [Nightly version](/guides/quickstart/#nightly-releases).
2. Ensure your (V)RAM is accessible; some users with virtual RAM may require additional configuration.
3. Seek assistance in [Jan Discord](https://discord.gg/mY69SZaMaC).
## How to Get Error Logs
To get the error logs of your Jan application, follow the steps below:
#### Jan Application
1. Navigate to the main dashboard.
2. Click the **gear icon (⚙️)** on the bottom left of your screen.
3. Under the **Settings screen**, click the **Advanced Settings**.
4. On the **Jan Data Folder** click the **folder icon (📂)** to access the data.
5. Click the **logs** folder.
#### Jan UI
1. Open your Unix or Linux terminal.
2. Use the following commands to get the recent 50 lines of log files:
```bash
tail -n 50 ~/jan/logs/app.log
```
#### Jan API Server
1. Open your Unix or Linux terminal.
2. Use the following commands to get the recent 50 lines of log files:
```bash
tail -n 50 ~/jan/logs/server.log
```
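When sharing logs for support, the two tails above can be combined into a single file — a sketch assuming the default Jan data folder at `~/jan`; missing files are noted rather than aborting the script:

```shell
# Collect the last 50 lines of each Jan log into one file
LOGDIR="$HOME/jan/logs"
OUT=$(mktemp)
for f in app.log server.log; do
  echo "== $f ==" >> "$OUT"
  tail -n 50 "$LOGDIR/$f" >> "$OUT" 2>/dev/null || echo "(file not found)" >> "$OUT"
done
echo "combined logs written to $OUT"
```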
:::warning
Ensure to redact any private or sensitive information when sharing logs or error details.
:::
:::note
If you have any questions or are looking for support, please don't hesitate to contact us via our [Discord community](https://discord.gg/Dt7MxDyNNZ) or create a new issue in our [GitHub repository](https://github.com/janhq/jan/issues/new/choose).
:::
## Permission Denied
When running Jan, you might encounter the following error message:
```
Uncaught (in promise) Error: Error invoking layout-480796bff433a3a3.js:538 remote method 'installExtension':
Error Package /Applications/Jan.app/Contents/Resources/app.asar.unpacked/pre-install/janhq-assistant-extension-1.0.0.tgz does not contain a valid manifest:
Error EACCES: permission denied, mkdtemp '/Users/username/.npm/_cacache/tmp/ueCMn4'
```
This error is mainly caused by a permissions problem during installation. To resolve it, follow these steps:
1. Open your terminal.
2. Execute the following command to change ownership of the `~/.npm` directory to the current user:
```sh
sudo chown -R $(whoami) ~/.npm
```
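You can confirm the fix took effect by checking who now owns the directory — an illustrative check; after the `chown`, the owner should be your username:

```shell
# Print the owner of ~/.npm (empty result means the directory is missing)
owner=$(ls -ld "$HOME/.npm" 2>/dev/null | awk '{print $3}')
echo "owner of ~/.npm: ${owner:-<directory missing>}"
```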
:::note
- This command ensures that the necessary permissions are granted for Jan installation, resolving the encountered error.
- If you have any questions or are looking for support, please don't hesitate to contact us via our [Discord community](https://discord.gg/Dt7MxDyNNZ) or create a new issue in our [GitHub repository](https://github.com/janhq/jan/issues/new/choose).
:::
## Something's Amiss
When you start a chat with a model and encounter a Something's Amiss error, here's how to resolve it:
1. Ensure your OS is up to date.
2. Choose a model smaller than 80% of your hardware's V/RAM. For example, on an 8GB machine, opt for models smaller than 6GB.
3. Install the latest [Nightly release](/guides/quickstart/#nightly-releases) or [clear the application cache](/troubleshooting/#broken-build) when reinstalling Jan.
4. Confirm your V/RAM accessibility, particularly if using virtual RAM.
5. Nvidia GPU users should download [CUDA](https://developer.nvidia.com/cuda-downloads).
6. Linux users, ensure your system meets the requirements of gcc 11, g++ 11, cpp 11, or higher. Refer to this [link](/troubleshooting/#troubleshooting-nvidia-gpu) for details.
7. You might be using the wrong port if you [check the app logs](/troubleshooting/#how-to-get-error-logs) and encounter the `Bind address failed at 127.0.0.1:3928` error. To check the port status, try using the `netstat` command, like the following:
<Tabs>
<TabItem value="mac" label="MacOS" default>
```sh
netstat -an | grep 3928
```
</TabItem>
<TabItem value="windows" label="Windows" default>
```sh
netstat -ano | find "3928"
tasklist /fi "PID eq 3928"
```
</TabItem>
<TabItem value="linux" label="Linux" default>
```sh
netstat -anpe | grep "3928"
```
</TabItem>
</Tabs>
:::note
`netstat` displays the contents of various network-related data structures for active connections.
:::
:::tip
Jan uses the following ports:
- Nitro: `3928`
- Jan API Server: `1337`
- Jan Documentation: `3001`
:::
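The per-platform commands above can be wrapped in a small helper that works on both macOS and Linux — a sketch assuming a `netstat` that accepts `-an`:

```shell
# Return success if anything is listening on the given port
port_in_use() {
  netstat -an 2>/dev/null | grep -q "[.:]$1 .*LISTEN"
}

# Check all of Jan's default ports
for port in 3928 1337 3001; do
  if port_in_use "$port"; then
    echo "port $port: in use"
  else
    echo "port $port: free"
  fi
done
```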
:::note
If you have any questions or are looking for support, please don't hesitate to contact us via our [Discord community](https://discord.gg/Dt7MxDyNNZ) or create a new issue in our [GitHub repository](https://github.com/janhq/jan/issues/new/choose).
:::
## Undefined Issue
An `undefined issue` in Jan is caused by errors related to the Nitro tool or other internal processes. It can be resolved through the following steps:
1. Clear the Jan folder, then reopen the application to see whether the problem persists.
2. Manually run the nitro tool located at `~/jan/extensions/@janhq/inference-nitro-extensions/dist/bin/(your-os)/nitro` to check for error messages.
3. Address any nitro error messages that are identified.
4. Reopen Jan to determine whether the problem has been resolved.
5. If the issue persists, please share the [app logs](/troubleshooting/#how-to-get-error-logs) via [Jan Discord](https://discord.gg/mY69SZaMaC) for further assistance and troubleshooting.
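Step 2 can be sketched as follows — the path assumes the default Jan data folder, and `linux` stands in for the `(your-os)` directory on your platform (use `mac` or the matching directory otherwise):

```shell
# Run the bundled nitro binary directly to surface its error output
NITRO="$HOME/jan/extensions/@janhq/inference-nitro-extensions/dist/bin/linux/nitro"
if [ -x "$NITRO" ]; then
  "$NITRO"   # run it directly and watch for error messages
else
  echo "nitro binary not found at $NITRO"
fi
```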
:::note
If you have any questions or are looking for support, please don't hesitate to contact us via our [Discord community](https://discord.gg/Dt7MxDyNNZ) or create a new issue in our [GitHub repository](https://github.com/janhq/jan/issues/new/choose).
:::
## Unexpected Token
The `Unexpected token` error when initiating a chat with OpenAI models is mainly caused by either your OpenAI API key or the region you access OpenAI from. This issue can be solved through the following steps:
1. Obtain an OpenAI API key from [OpenAI's developer platform](https://platform.openai.com/) and integrate it into your application.
2. Try a VPN, especially if the issue is related to region locking for OpenAI services. Connecting through a VPN may bypass such restrictions and let you initiate chats with OpenAI models successfully.
:::note
If you have any questions or are looking for support, please don't hesitate to contact us via our [Discord community](https://discord.gg/Dt7MxDyNNZ) or create a new issue in our [GitHub repository](https://github.com/janhq/jan/issues/new/choose).
:::

View File

@ -1,6 +1,8 @@
---
title: Advanced Settings
sidebar_position: 1
slug: /guides/advanced
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
sidebar_position: 11
keywords:
[
Jan AI,
@ -11,7 +13,11 @@ keywords:
conversational AI,
no-subscription fee,
large language model,
advanced-settings,
Advanced Settings,
HTTPS Proxy,
SSL,
settings,
Jan settings
]
---
@ -33,7 +39,7 @@ To access the Jan's advanced settings, follow the steps below:
| **Experimental Mode** | Enables experimental features that may be unstable. |
| **GPU Acceleration** | Enables the boosting of your model performance by using your GPU devices for acceleration. |
| **Jan Data Folder** | Location for messages, model configurations, and user data. Changeable to a different location. |
| **HTTPS Proxy & Ignore SSL Certificate** | Use a proxy server for internet connections and ignore SSL certificates for self-signed certificates. Please check out the guide on how to set up your own HTTPS proxy server [here](http-proxy.mdx). |
| **HTTPS Proxy & Ignore SSL Certificate** | Use a proxy server for internet connections and ignore SSL certificates for self-signed certificates. Please check out the guide on how to set up your own HTTPS proxy server [here](advanced-settings.mdx#https-proxy). |
| **Clear Logs** | Removes all logs from the Jan application. |
| **Reset To Factory Default** | Resets the application to its original state, deleting all data including model customizations and conversation history. |
@ -99,7 +105,7 @@ To try out new features that are still in the testing phase, follow the steps below:
To enhance your model performance, follow the steps below:
:::warning
Ensure that you have read the [troubleshooting guide](/docs/guides/common-error/not-using-gpu.mdx) here for further assistance.
Ensure that you have read the [troubleshooting guide](/troubleshooting/#troubleshooting-nvidia-gpu) here for further assistance.
:::
1. Navigate to the main dashboard.
2. Click the **gear icon (⚙️)** on the bottom left of your screen.
@ -113,14 +119,105 @@ To access the folder where messages, model configurations and user data are stor
3. Under the **Settings screen**, click the **Advanced Settings**.
4. On the **Jan Data Folder** click the **folder icon (📂)** to access the data or the **pencil icon (✏️)** to change the folder where you keep your data.
## Enable the HTTPS Proxy
To enable the HTTPS Proxy feature, follow the steps below:
1. Make sure to set up your HTTPS Proxy. Check out this [guide](http-proxy.mdx) for instructions on how to do it.
2. Navigate to the main dashboard.
3. Click the **gear icon (⚙️)** on the bottom left of your screen.
4. Under the **Settings screen**, click the **Advanced Settings**.
5. On the **HTTPS Proxy** click the slider to enable.
6. Input your domain in the blank field.
## HTTPS Proxy
An HTTPS proxy encrypts data between your browser and the internet, making it hard for outsiders to intercept or read. It also helps you maintain your privacy and security while letting you bypass regional restrictions on the internet.
:::note
- When configuring Jan to use an HTTPS proxy, model download speeds may be affected by the encryption and decryption process. Speeds also depend on the network of your cloud service provider.
- HTTPS Proxy does not affect the remote model usage.
:::
### Setting Up Your Own HTTPS Proxy Server
This guide provides a simple overview of setting up an HTTPS proxy server using **Squid**, widely used open-source proxy software.
:::note
Other software options are also available depending on your requirements.
:::
#### Step 1: Choosing a Server
1. First, choose a server to host your proxy.
:::note
We recommend using a well-known cloud provider service like:
- Amazon AWS
- Google Cloud
- Microsoft Azure
- Digital Ocean
:::
2. Ensure that your server has a public IP address and is accessible from the internet.
#### Step 2: Installing Squid
Install **Squid** using the following commands:
```bash
sudo apt-get update
sudo apt-get install squid
```
#### Step 3: Configure Squid for HTTPS
To enable HTTPS, you will need to configure Squid with SSL support.
1. Squid requires an SSL certificate to handle HTTPS traffic. You can generate a self-signed certificate or obtain one from a Certificate Authority (CA). For a self-signed certificate, you can use OpenSSL:
```bash
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout squid-proxy.pem -out squid-proxy.pem
```
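For unattended setups, the same command can be run non-interactively by passing the certificate subject on the command line, then inspected to confirm the result — the CN below is a placeholder for your proxy's hostname:

```shell
# Generate the certificate without prompts, then print its subject and expiry
cd "$(mktemp -d)"
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 \
  -subj "/CN=proxy.example.com" \
  -keyout squid-proxy.pem -out squid-proxy.pem 2>/dev/null
openssl x509 -in squid-proxy.pem -noout -subject -enddate
```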
2. Edit the Squid configuration file `/etc/squid/squid.conf` to include the path to your SSL certificate and enable the HTTPS port:
```bash
http_port 3128 ssl-bump cert=/path/to/your/squid-proxy.pem
ssl_bump server-first all
ssl_bump bump all
```
3. To intercept HTTPS traffic, Squid uses a process called SSL Bumping. This process allows Squid to decrypt and re-encrypt HTTPS traffic. To enable SSL Bumping, ensure the `ssl_bump` directives are configured correctly in your `squid.conf` file.
#### Step 4 (Optional): Configure ACLs and Authentication
1. You can define rules to control who can access your proxy. This is done by editing the `squid.conf` file and defining ACLs:
```bash
acl allowed_ips src "/etc/squid/allowed_ips.txt"
http_access allow allowed_ips
```
2. If you want to add an authentication layer, Squid supports several authentication schemes. Basic authentication setup might look like this:
```bash
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwords
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
```
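The `/etc/squid/passwords` file referenced above is an htpasswd-format file, usually created with `htpasswd -c /etc/squid/passwords username` (from the apache2-utils or httpd-tools package). If `htpasswd` is unavailable, `openssl` can produce a compatible entry — a sketch with a placeholder user and password, written to a temp path:

```shell
# Create an htpasswd-style entry using openssl's apr1 (MD5) scheme,
# which basic_ncsa_auth understands
PASSFILE=$(mktemp)
printf 'proxyuser:%s\n' "$(openssl passwd -apr1 secret123)" > "$PASSFILE"
cat "$PASSFILE"
```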
#### Step 5: Restart and Test Your Proxy
1. After configuring, restart Squid to apply the changes:
```bash
sudo systemctl restart squid
```
2. To test, configure your browser or another client to use the proxy server with its IP address and port (default is 3128).
3. Check if you can access the internet through your proxy.
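From a client machine, this check can be scripted with `curl` — `PROXY_HOST` is a placeholder for your server's public IP, and the connect timeout keeps the check from hanging if the proxy is unreachable:

```shell
PROXY_HOST=203.0.113.10   # placeholder: replace with your proxy's public IP
if curl -s -x "http://$PROXY_HOST:3128" --connect-timeout 5 -I https://example.com >/dev/null 2>&1; then
  result="proxy reachable"
else
  result="proxy unreachable"
fi
echo "$result"
```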
:::tip
Tips for securing your proxy:
- **Firewall rules**: Ensure that only intended users or IP addresses can connect to your proxy server. This can be achieved by setting up appropriate firewall rules.
- **Regular updates**: Keep your server and proxy software updated to ensure that you are protected against known vulnerabilities.
- **Monitoring and logging**: Monitor your proxy server for unusual activity and enable logging to keep track of the traffic passing through your proxy.
:::
### Setting Up Jan to Use Your HTTPS Proxy
Once you have your HTTPS proxy server set up, you can configure Jan to use it.
1. Navigate to **Settings** > **Advanced Settings**.
2. On **HTTPS Proxy**, click the slider to enable it.
3. Input your domain in the blank field.
## Ignore SSL Certificate
To allow self-signed or unverified certificates, follow the steps below:

View File

@ -0,0 +1,22 @@
---
title: Jan Data Folder
slug: /guides/data-folder
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
sidebar_position: 6
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
data folder,
source folder,
Jan data,
]
---
Coming Soon

View File

@ -1,10 +1,24 @@
---
title: Local Server
sidebar_position: 4
title: Local Server or API Endpoint
slug: /guides/local-api
description: A step-by-step guide to start Jan Local Server.
sidebar_position: 10
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
local server,
start server,
api endpoint,
]
---
Jan provides a built-in API server that can be used as a drop-in for OpenAI's API local replacement. This guide will walk you through on how to start the local server and use it to make request to the local server.
## Step 1: Set the Local Server

View File

@ -0,0 +1,21 @@
---
title: Manage Assistants
slug: /guides/assistants
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
sidebar_position: 8
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
manage assistants,
assistants,
]
---
Coming Soon

View File

@ -0,0 +1,23 @@
---
title: Manage Models
slug: /guides/models
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
sidebar_position: 7
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
models,
remote models,
local models,
manage models,
]
---
Coming Soon

View File

@ -1,11 +1,24 @@
---
title: Thread Management
sidebar_position: 3
hide_table_of_contents: true
title: Manage Threads
slug: /guides/threads
description: Manage your interaction with AI locally.
sidebar_position: 9
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
threads,
chat history,
thread history,
]
---
Jan provides a straightforward and private solution for managing your threads with AI on your own device. As you interact with AI using Jan, you'll accumulate a history of threads.
Jan offers easy tools to organize, delete, or review your past threads with AI. This guide will show you how to keep your threads private and well-organized.
@ -17,7 +30,7 @@ Jan offers easy tools to organize, delete, or review your past threads with AI.
3. To view a specific thread, simply choose the one you're interested in and then scroll up or down to explore the entire conversation.
### Manage Thread via Jan Data Folder
### Manage the Threads via Folder
To manage your thread history and configurations, follow the steps below:
1. Navigate to the Thread that you want to manage via the list of threads on the left side of the dashboard.
2. Click on the **three dots (⋮)** in the Thread section.

View File

@ -0,0 +1,19 @@
---
title: Overview
slug: /guides/overview
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
sidebar_position: 5
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
]
---
Coming Soon

View File

@ -170,141 +170,234 @@ const sidebars = {
collapsible: false,
className: "head_Menu",
items: [
"guides/quickstart",
"guides/install",
"guides/start-server",
"guides/models-list"
"guides/get-started/overview",
"guides/get-started/quickstart",
"guides/get-started/hardware-setup",
{
type: "category",
label: "Installation",
className: "head_SubMenu",
link: {
type: 'doc',
id: "guides/installation/README",
},
items: [
"guides/installation/docker",
"guides/installation/linux",
"guides/installation/mac",
"guides/installation/windows"
]
},
]
},
{
type: "category",
label: "Guides",
label: "User Guides",
collapsible: false,
className: "head_Menu",
items: [
"guides/best-practices",
"guides/thread",
"guides/user-guides/overview-guides",
"guides/user-guides/jan-data-folder",
"guides/user-guides/manage-models",
"guides/user-guides/manage-assistants",
"guides/user-guides/manage-threads",
"guides/user-guides/local-server",
"guides/user-guides/advanced-settings"
]
},
{
type: "category",
label: "Advanced Features",
label: "Inference Providers",
collapsible: false,
className: "head_Menu",
items: [
"guides/inference/overview-inference",
{
type: "category",
label: "Advanced Settings",
label: "Local Providers",
className: "head_SubMenu",
link: {
type: 'doc',
id: "guides/advanced-settings/advanced-settings",
id: "guides/local-providers/README",
},
items: [
"guides/advanced-settings/http-proxy",
"guides/local-providers/llamacpp",
"guides/local-providers/lmstudio",
"guides/local-providers/ollama",
"guides/local-providers/tensorrt",
]
},
{
type: "category",
label: "Advanced Model Setup",
label: "Remote Providers",
className: "head_SubMenu",
link: {
type: 'doc',
id: "guides/models/README",
id: "guides/remote-providers/README",
},
items: [
"guides/models/customize-engine",
"guides/models/import-models",
"guides/models/integrate-remote",
]
},
{
type: "category",
label: "Inference Providers",
className: "head_SubMenu",
link: {
type: 'doc',
id: "guides/providers/README",
},
items: [
"guides/providers/llama-cpp",
"guides/providers/tensorrt-llm",
]
},
{
type: "category",
label: "Extensions",
className: "head_SubMenu",
link: {
type: 'doc',
id: "guides/extensions/README",
},
items: [
"guides/extensions/import-ext",
"guides/extensions/setup-ext",
]
},
{
type: "category",
label: "Integrations",
className: "head_SubMenu",
link: {
type: 'doc',
id: "guides/integration/README",
},
items: [
"guides/integration/azure",
"guides/integration/discord",
"guides/integration/groq",
"guides/integration/lmstudio",
"guides/integration/mistral",
"guides/integration/ollama",
"guides/integration/openinterpreter",
"guides/integration/openrouter",
"guides/integration/raycast",
"guides/integration/vscode",
"guides/remote-providers/claude",
"guides/remote-providers/groq",
"guides/remote-providers/mistral",
"guides/remote-providers/openai",
"guides/remote-providers/remote-server-integration"
]
},
]
},
{
type: "category",
label: "Extensions",
collapsible: false,
className: "head_Menu",
items: [
"guides/extensions/extensions",
]
},
{
type: "category",
label: "Integrations",
collapsible: false,
className: "head_Menu",
items: [
"guides/integrations/overview-integration",
"guides/integrations/crewai",
"guides/integrations/discord",
"guides/integrations/interpreter",
"guides/integrations/raycast",
"guides/integrations/router",
"guides/integrations/unsloth",
"guides/integrations/vscode"
]
},
{
type: "category",
label: "Troubleshooting",
collapsible: false,
className: "head_Menu",
items: [
{
type: "category",
label: "Error Codes",
className: "head_SubMenu",
link: {
type: 'doc',
id: "guides/error-codes/README",
},
items: [
"guides/error-codes/how-to-get-error-logs",
"guides/error-codes/permission-denied",
"guides/error-codes/something-amiss",
"guides/error-codes/undefined-issue",
"guides/error-codes/unexpected-token",
]
},
{
type: "category",
label: "Common Error",
className: "head_SubMenu",
link: {
type: 'doc',
id: "guides/common-error/README",
},
items: [
"guides/common-error/broken-build",
"guides/common-error/not-using-gpu",
]
},
"guides/faq"
"guides/troubleshooting",
]
},
// {
// type: "category",
// label: "Advanced Features",
// collapsible: false,
// className: "head_Menu",
// items: [
// {
// type: "category",
// label: "Advanced Settings",
// className: "head_SubMenu",
// link: {
// type: 'doc',
// id: "guides/advanced-settings/advanced-settings",
// },
// items: [
// "guides/advanced-settings/http-proxy",
// ]
// },
// {
// type: "category",
// label: "Advanced Model Setup",
// className: "head_SubMenu",
// link: {
// type: 'doc',
// id: "guides/models/README",
// },
// items: [
// "guides/models/customize-engine",
// "guides/models/import-models",
// "guides/models/integrate-remote",
// ]
// },
// {
// type: "category",
// label: "Inference Providers",
// className: "head_SubMenu",
// link: {
// type: 'doc',
// id: "guides/providers/README",
// },
// items: [
// "guides/providers/llama-cpp",
// "guides/providers/tensorrt-llm",
// ]
// },
// {
// type: "category",
// label: "Extensions",
// className: "head_SubMenu",
// link: {
// type: 'doc',
// id: "guides/extensions/README",
// },
// items: [
// "guides/extensions/import-ext",
// "guides/extensions/setup-ext",
// ]
// },
// {
// type: "category",
// label: "Integrations",
// className: "head_SubMenu",
// link: {
// type: 'doc',
// id: "guides/integration/README",
// },
// items: [
// "guides/integration/azure",
// "guides/integration/discord",
// "guides/integration/groq",
// "guides/integration/lmstudio",
// "guides/integration/mistral",
// "guides/integration/ollama",
// "guides/integration/openinterpreter",
// "guides/integration/openrouter",
// "guides/integration/raycast",
// "guides/integration/vscode",
// ]
// },
// ]
// },
// {
// type: "category",
// label: "Troubleshooting",
// collapsible: false,
// className: "head_Menu",
// items: [
// {
// type: "category",
// label: "Error Codes",
// className: "head_SubMenu",
// link: {
// type: 'doc',
// id: "guides/error-codes/README",
// },
// items: [
// "guides/error-codes/how-to-get-error-logs",
// "guides/error-codes/permission-denied",
// "guides/error-codes/something-amiss",
// "guides/error-codes/undefined-issue",
// "guides/error-codes/unexpected-token",
// ]
// },
// {
// type: "category",
// label: "Common Error",
// className: "head_SubMenu",
// link: {
// type: 'doc',
// id: "guides/common-error/README",
// },
// items: [
// "guides/common-error/broken-build",
// "guides/common-error/not-using-gpu",
// ]
// },
// "guides/faq"
// ]
// },
],
developerSidebar: [
{