Merge branch 'dev' of https://github.com/janhq/jan into dev

Commit 867a51cf61 by Ashley, 2025-01-07 14:52:02 +07:00
72 changed files with 1222 additions and 1151 deletions

(Dozens of binary image files — documentation screenshots — were added, removed, or replaced in this commit; contents not shown.)

View File

@@ -11,7 +11,7 @@
   "quickstart": {
     "title": "Quickstart"
   },
-  "desktop": "Desktop",
+  "desktop": "Installation",
   "data-folder": "Jan Data Folder",
   "privacy": "Privacy",
   "user-guides": {
@@ -23,7 +23,6 @@
   "assistants": "Assistants",
   "threads": "Threads",
   "settings": "Settings",
-  "shortcuts": "Keyboard Shortcuts",
   "inference-engines": {
     "title": "MODEL PROVIDER",
     "type": "separator"

View File

@@ -20,15 +20,47 @@ keywords:
 import { Callout, Steps } from 'nextra/components'
 # Assistants
-This guide explains how to set the Assistant instructions in the Jan application.
-## Applied the Instructions to All Threads
-To apply the instructions to all the new threads, follow these steps:
-1. Select a **Thread**.
-2. Click the **Assistant** tab.
-3. Toggle the **slider** to ensure these instructions are applied to all new threads. (Activate the **Experimental Mode** feature to enable this option.)
-<br/>
-![Assistant Slider](./_assets/assistant-slider.png)
+An Assistant is a configuration profile that determines how the AI should behave and respond to your inputs. It consists of:
+- A set of instructions that guide the AI's behavior
+- Model settings for AI responses
+- Tool configurations (like [knowledge retrieval](/docs/tools/retrieval) settings)
+Currently, Jan comes with a single default Assistant named **Jan**, which is used across all your threads. We're working on the ability to create and switch between multiple assistants.
+## Set Assistant Instructions
+By modifying assistant instructions, you can customize how Jan understands and responds to your queries, what context it should consider, and how it should format its responses.
+1. In any **Thread**, click the **Assistant** tab in the **right sidebar**
+2. Enter your custom instructions in the **Instructions** input field
+3. Your instructions are applied to the current thread as soon as you click out of the instruction field
+![Set Instructions](./_assets/quick-start-02.png)
+**Best Practices for Instructions:**
+- Be clear and specific about the desired behavior
+- Include any consistent preferences for formatting, tone, or style
+**Examples:**
+```
+Act as a software development mentor focused on Python and JavaScript.
+Provide detailed explanations with code examples when relevant.
+Use markdown formatting for code blocks.
+```
+```
+Respond in a casual, friendly tone. Keep explanations brief and use simple language.
+Provide examples when explaining complex topics.
+```
+## Apply Instructions to New Threads
+You can save Assistant instructions to be automatically applied to all new threads:
+1. In any **Thread**, click the **Assistant** tab in the **right sidebar**
+2. Toggle the **Save instructions for new threads** slider
+3. When enabled, all **new threads** will use these instructions as their default; old threads are not affected
+<br/>
+![Assistant Slider](./_assets/assistant-01.png)
 <br/>

View File

@@ -26,13 +26,11 @@ keywords:
 import { Tabs } from 'nextra/components'
 import { Callout, Steps } from 'nextra/components'
-# llama.cpp (Default)
+# llama.cpp (Cortex)
 ## Overview
-Jan has a default [C++ inference server](https://github.com/janhq/cortex) built on top of [llama.cpp](https://github.com/ggerganov/llama.cpp). This server provides an OpenAI-compatible API, queues, scaling, and additional features on top of the wide capabilities of `llama.cpp`.
-## llama.cpp Engine
+Jan has [**Cortex**](https://github.com/janhq/cortex) - a default C++ inference server built on top of [llama.cpp](https://github.com/ggerganov/llama.cpp). This server provides an OpenAI-compatible API, queues, scaling, and additional features on top of the wide capabilities of `llama.cpp`.
 This guide shows you how to initialize `llama.cpp` to download and install the required dependencies to start chatting with a model using the `llama.cpp` engine.
@@ -56,16 +54,14 @@ Enable the GPU acceleration option within the Jan application by following the [
 ## Step-by-step Guide
 <Steps>
 ### Step 1: Open the `model.json`
-1. Navigate to the **Advanced Settings**.
-<br/>
-![Settings](../_assets/advance-set.png)
-<br/>
-2. On the **Jan Data Folder** click the **folder icon (📂)** to access the data.
-<br/>
-![Jan Data Folder](../_assets/data-folder.png)
-<br/>
-3. Select **models** folder > Click the **name** of the model folder that you want to modify > click the `model.json`.
-4. This will open up a `model.json`. For example, the `model.json` of `TinyLlama Chat 1.1B Q4` is shown below:
+1. Open the [Jan Data Folder](/docs/data-folder#open-jan-data-folder)
+<br/>
+![Jan Data Folder](../_assets/settings-11.png)
+<br/>
+2. Select the **models** folder > click the **model folder** that you want to modify > click `model.json`
+3. Once open, the `model.json` file looks like the example below, using the model "TinyLlama Chat 1.1B Q4":
 ```json
 {
   "sources": [

View File

@@ -21,29 +21,34 @@ keywords:
 import { Tabs } from 'nextra/components'
 import { Callout, Steps } from 'nextra/components'
+import { Settings, FolderOpen } from 'lucide-react'
 # Jan Data Folder
 Jan stores your data locally in your own filesystem in a universal file format (JSON). We build for privacy by default and do not collect or sell your data.
 This guide helps you understand where and how this data is stored.
-## Open the Data Folder
-To open the Jan data folder from the app:
-1. Click the System monitor button on your Jan app.
-2. Click the App Log button.
-3. This redirects you to the Jan data folder.
+## Open Jan Data Folder
+To open from Jan's interface:
+1. In Jan, navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**
+2. Click the <FolderOpen width={16} height={16} style={{display:"inline"}}/> icon to open the Jan Data Folder
+<br/>
+![Open Jan Data Folder](./_assets/settings-11.png)
+<br/>
+To open through Terminal:
 ```bash
 # Windows
-~/AppData/Roaming/Jan/data
+%APPDATA%/Jan/data
 # Mac
-~/Library/Application\ Support/Jan/data
+~/Library/Application Support/Jan/data
 # Linux
 ## Custom installation directory
-$XDG_CONFIG_HOME = /home/username/custom_config
+$XDG_CONFIG_HOME/Jan/data
 or
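As an editorial aside to the Terminal snippet above, the per-OS defaults can be resolved in one small script. This is a minimal sketch using the default paths shown in the docs, assuming no custom installation directory:

```shell
#!/bin/sh
# Resolve the default Jan data folder for the current OS (sketch).
case "$(uname -s 2>/dev/null)" in
  Darwin) JAN_DATA="$HOME/Library/Application Support/Jan/data" ;;
  Linux)  JAN_DATA="${XDG_CONFIG_HOME:-$HOME/.config}/Jan/data" ;;
  *)      JAN_DATA="$APPDATA/Jan/data" ;;  # Windows shells expose %APPDATA% as $APPDATA
esac
echo "$JAN_DATA"
```

On Linux the script honors `$XDG_CONFIG_HOME` when set, matching the custom-directory note above.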
@@ -233,18 +238,8 @@ Threads history is kept in this directory. Each session or thread is stored in a
 | `model` | The selected model and its settings/parameters for the thread. Changes made by users to thread settings are written here, rather than in model.json. Also contains the ID and engine of the selected model for quick querying by extensions. |
 | `metadata` | Additional thread data, such as `lastMessage`, which provides GUI information but does not use OpenAI-compatible fields. |
-## Open the Data Folder
-To open the Jan data folder, follow the steps in the [Settings](/docs/settings#access-the-jan-data-folder) guide.
 ## Delete Jan Data Folder
-If you have uninstalled the Jan app, you may also want to delete the Jan data folder. You can automatically remove this folder during uninstallation by selecting **OK** when prompted.
-![Delete Data Folder](./_assets/delete-data.png)
-If you missed this step and need to delete the folder manually, please follow these instructions:
-1. Go to the root data folder in your Users directory.
-2. Locate the Jan data folder.
-3. Delete the folder manually.
+If you have uninstalled Jan, you may also want to delete the Jan data folder.
+See detailed instructions for [Mac](/docs/desktop/mac#step-2-clean-up-data-optional), [Windows](/docs/desktop/windows#step-2-handle-jan-data), and [Linux](/docs/desktop/linux#uninstall-jan).
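The manual-deletion steps amount to removing one directory. A hedged sketch using the Linux default path from this guide — double-check the path before running, since deletion is irreversible:

```shell
#!/bin/sh
# Remove the Jan data folder after uninstalling (irreversible!).
# macOS equivalent: "$HOME/Library/Application Support/Jan"
JAN_DIR="${XDG_CONFIG_HOME:-$HOME/.config}/Jan"
echo "Deleting: $JAN_DIR"
rm -rf "$JAN_DIR"
```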

View File

@@ -1,5 +1,5 @@
 ---
-title: Desktop Installation
+title: Installation
 description: Jan is a ChatGPT-alternative that runs on your computer, with a local API server.
 keywords:
 [
@@ -20,7 +20,7 @@ keywords:
 import { Cards, Card } from 'nextra/components'
 import childPages from './desktop/_meta.json';
-# Desktop Installation
+# Installation
 <br/>

View File

@@ -20,21 +20,24 @@ keywords:
 ]
 ---
-import { Tabs } from 'nextra/components'
-import { Callout } from 'nextra/components'
 import FAQBox from '@/components/FaqBox'
+import { Tabs, Callout, Steps } from 'nextra/components'
 # Linux Installation
 To install Jan desktop on Linux, follow the steps below:
 ## Compatibility
 Ensure that your system meets the following requirements to use Jan effectively:
 <Tabs items={['OS', 'CPU', 'RAM', 'GPU', 'Disk']}>
 <Tabs.Tab>
-- Debian-based (Supports `.deb` and `AppImage`)
-  - Ubuntu-based
-    - Ubuntu Desktop LTS (official)/ Ubuntu Server LTS (only for server)
+#### Debian-based (Supports `.deb` and `AppImage`)
+- Debian
+- Ubuntu and derivatives:
+  - Ubuntu Desktop LTS (official)/Ubuntu Server LTS (only for server)
   - Edubuntu (Mainly desktop)
   - Kubuntu (Desktop only)
   - Lubuntu (Both desktop and server, though mainly desktop)
@@ -42,152 +45,136 @@ Ensure that your system meets the following requirements to use Jan effectively:
 - Ubuntu Cinnamon (Desktop only)
 - Ubuntu Kylin (Both desktop and server)
 - Ubuntu MATE (Desktop only)
-- Pacman-based
-  - Arch Linux based
-    - Arch Linux (Mainly desktop)
-    - SteamOS (Desktop only)
-- RPM-based (Supports `.rpm` and `AppImage`)
-  - Fedora-based
-    - RHEL-based (Server only)
+#### RHEL-based (Supports `.rpm` and `AppImage`)
+- RHEL-based (Server only)
+- Fedora
+#### Arch-based
+- Arch Linux (Mainly desktop)
+- SteamOS (Desktop only)
+#### Independent
 - openSUSE (Both desktop and server)
 <Callout type="info">
-- Please check whether your Linux distribution supports desktop, server, or both environments.
+Please check whether your Linux distribution supports desktop, server, or both environments.
 </Callout>
 </Tabs.Tab>
 <Tabs.Tab>
-<Tabs items={['Intel', 'AMD']}>
-<Tabs.Tab>
-<Callout type="info">
-- Jan supports a processor that can handle AVX2. For the full list, please see [here](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2).
-- We support older processors with AVX and AVX-512, though this is not recommended.
-</Callout>
-- Haswell processors (Q2 2013) and newer.
-- Tiger Lake (Q3 2020) and newer for Celeron and Pentium processors.
-</Tabs.Tab>
-<Tabs.Tab>
-<Callout type="info">
-- Jan supports a processor that can handle AVX2. For the full list, please see [here](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2).
-- We support older processors with AVX and AVX-512, though this is not recommended.
-</Callout>
-- Excavator processors (Q2 2015) and newer.
-</Tabs.Tab>
-</Tabs>
-</Tabs.Tab>
-<Tabs.Tab>
-- 8GB for running up to 3B models (int4).
-- 16GB for running up to 7B models (int4).
-- 32GB for running up to 13B models (int4).
-<Callout type="info">
-We support DDR2 RAM as the minimum requirement but recommend using newer generations of RAM for improved performance.
-</Callout>
-</Tabs.Tab>
-<Tabs.Tab>
-- 6GB can load the 3B model (int4) with `ngl` at 120 ~ full speed on CPU/ GPU.
-- 8GB can load the 7B model (int4) with `ngl` at 120 ~ full speed on CPU/ GPU.
-- 12GB can load the 13B model (int4) with `ngl` at 120 ~ full speed on CPU/ GPU.
-<Callout type="info">
-Having at least 6GB VRAM when using NVIDIA, AMD, or Intel Arc GPUs is recommended.
-</Callout>
-</Tabs.Tab>
-<Tabs.Tab>
-- At least 10GB for app storage and model download.
-</Tabs.Tab>
-</Tabs>
+- Haswell processors (Q2 2013) and newer
+- Tiger Lake (Q3 2020) and newer for Celeron and Pentium processors
+- Excavator processors (Q2 2015) and newer
+<Callout type="info">
+Jan requires a processor with AVX2 for best performance. See the [full list of supported processors](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2). While older processors with AVX/AVX-512 will work, you may experience slower performance.
+</Callout>
+</Tabs.Tab>
+<Tabs.Tab>
+- 8GB → 3B models (int4)
+- 16GB → 7B models (int4)
+- 32GB → 13B models (int4)
+<Callout type="info">
+DDR2 RAM is the minimum supported; newer generations are recommended for better performance.
+</Callout>
+</Tabs.Tab>
+<Tabs.Tab>
+- 6GB → 3B model (int4) with `ngl` at 120
+- 8GB → 7B model (int4) with `ngl` at 120
+- 12GB → 13B model (int4) with `ngl` at 120
+<Callout type="info">
+A minimum of 6GB VRAM is recommended for NVIDIA, AMD, or Intel Arc GPUs.
+</Callout>
+</Tabs.Tab>
+<Tabs.Tab>
+At least 10GB for app installation and model downloads.
+</Tabs.Tab>
+</Tabs>
-## Prerequisites
-- **System Libraries**:
-  - glibc 2.27 or higher. You can verify this by running `ldd --version`.
-  - Install gcc-11, g++-11, cpp-11, or later versions. Refer to the [Ubuntu installation guide](https://gcc.gnu.org/projects/cxx-status.html#cxx17) for assistance.
-- **Post-Installation Actions**:
-  - Add CUDA libraries to the `LD_LIBRARY_PATH` per the instructions in the [Post-installation Actions](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions).
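The prerequisites mentioned here (glibc 2.27+, gcc-11 or later) can be checked with a short script. This is an editorial sketch, not part of the original docs:

```shell
#!/bin/sh
# Report the glibc and gcc versions named in the prerequisites (sketch).
GLIBC_LINE=$(ldd --version 2>/dev/null | head -n 1)
GCC_LINE=$(gcc --version 2>/dev/null | head -n 1)
echo "glibc: ${GLIBC_LINE:-not found}"   # needs 2.27 or higher
echo "gcc:   ${GCC_LINE:-not found}"     # needs gcc-11 or later
```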
-## Installing Jan
+## Install Jan
 To install Jan, follow the steps below:
-### Step 1: Download the Jan Application
-Jan provides two types of releases:
-<Tabs items={['Stable Releases', 'Nightly Releases']}>
+<Steps>
+### Step 1: Download Application
+Jan provides 3 types of releases:
+<Tabs items={['Stable Release', 'Beta Release', 'Nightly Release']}>
 <Tabs.Tab>
-#### Stable Releases
-The stable release is a stable version of Jan. You can download a stable release Jan app via the following:
-- **Official Website**: [https://jan.ai](https://jan.ai/)
-- **Jan GitHub repository**: [Github](https://github.com/janhq/jan/releases)
-<Callout type="info">
-Make sure to verify the URL to ensure that it's the official Jan website and GitHub repository.
-</Callout>
+Please download Jan from official distributions, or build it from source.
+- Download Jan on **Ubuntu**: [jan.deb](https://app.jan.ai/download/latest/linux-amd64-deb)
+- Download Jan on **Fedora**: [jan.AppImage](https://app.jan.ai/download/latest/linux-amd64-appimage)
+- Official Website: https://jan.ai/download
+</Tabs.Tab>
+<Tabs.Tab>
+Beta releases let you test new features, which may be buggy:
+- Download Jan's Beta Version on **Ubuntu**: [jan.deb](https://app.jan.ai/download/beta/linux-amd64-deb)
+- Download Jan's Beta Version on **Fedora**: [jan.AppImage](https://app.jan.ai/download/beta/linux-amd64-appimage)
+<Callout type="info">
+Keep in mind that this build might crash frequently or contain bugs!
+</Callout>
 </Tabs.Tab>
 <Tabs.Tab>
-#### Nightly Releases
-The nightly release allows you to test out new features and get a sneak peek at what might be included in future stable releases. You can download this version via:
-- **Jan GitHub repository**: [Github](https://github.com/janhq/jan/actions/workflows/jan-electron-build-nightly.yml)
+Nightly releases are for the internal team to test new builds daily and can be very buggy:
+- Download Jan's Nightly Version on **Ubuntu**: [jan.deb](https://app.jan.ai/download/nightly/linux-amd64-deb)
+- Download Jan's Nightly Version on **Fedora**: [jan.AppImage](https://app.jan.ai/download/nightly/linux-amd64-appimage)
 <Callout type="info">
-Keep in mind that this build might crash frequently and may contain bugs!
+Keep in mind that this build crashes frequently or contains bugs!
 </Callout>
 </Tabs.Tab>
 </Tabs>
-For Linux, Jan provides two types of downloads:
-1. **Ubuntu**: `.deb`
-2. **Fedora**: `.AppImage`
-### Step 2: Install the Jan Application
+### Step 2: Install Application
 Here are the steps to install Jan on Linux based on your Linux distribution:
 <Tabs items={['Ubuntu', 'Fedora']}>
 <Tabs.Tab>
-### Ubuntu
-Install Jan using the following command:
-<Tabs items={['dpkg', 'apt-get']}>
-<Tabs.Tab>
-```
+Install Jan using either **dpkg** or **apt-get**:
+##### dpkg
+```bash
 # Install Jan using dpkg
 sudo dpkg -i jan-linux-amd64-{version}.deb
 ```
-</Tabs.Tab>
-<Tabs.Tab>
-```json
+##### apt-get
+```bash
 # Install Jan using apt-get
 sudo apt-get install ./jan-linux-amd64-{version}.deb
 # where jan-linux-amd64-{version}.deb is the path to the Jan package
 ```
-</Tabs.Tab>
-</Tabs>
 </Tabs.Tab>
 <Tabs.Tab>
-### Fedora
 1. Make the AppImage executable using the following command:
-```
+```bash
 chmod +x jan-linux-x86_64-{version}.AppImage
 ```
 2. Run the AppImage file using the following command:
-```
+```bash
 ./jan-linux-x86_64-{version}.AppImage
 ```
 </Tabs.Tab>
 </Tabs>
+</Steps>
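After the install step on a `.deb`-based system, you can confirm the package landed. A small sketch; the package name `jan` matches the `apt-get remove jan` command used in the uninstall section of this same guide:

```shell
#!/bin/sh
# Check whether the Jan .deb package is installed (sketch; dpkg-based systems only).
if dpkg -s jan >/dev/null 2>&1; then
  STATUS="Jan installed"
else
  STATUS="Jan not found"
fi
echo "$STATUS"
```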
 ## Data Folder
 By default, Jan is installed in the following directory:
@@ -202,65 +189,96 @@ or
 ~/.config/Jan/data
 ```
-<Callout type="info">
-- Please see the [Jan Data Folder](/docs/data-folder) for more details about the data folder structure.
-</Callout>
+See [Jan Data Folder](/docs/data-folder) for more details about the data folder structure.
 ## GPU Acceleration
 Once Jan is installed and you have a GPU, you can use your GPU to accelerate the model's performance.
-### NVIDIA GPU
-To enable the use of your NVIDIA GPU in the Jan app, follow the steps below:
-<Callout type="info">
-Ensure that you have installed the following to use NVIDIA GPU:
-- NVIDIA GPU with CUDA Toolkit 11.7 or higher.
-- NVIDIA driver 470.63.01 or higher.
-</Callout>
-1. Open Jan application.
-2. Go to **Settings** -> **Advanced Settings** -> **GPU Acceleration**.
-3. Enable and choose the NVIDIA GPU you want.
-4. A success notification saying **Successfully turned on GPU acceleration** will appear when GPU acceleration is activated.
+<Tabs items={['NVIDIA GPU', 'AMD GPU', 'Intel Arc GPU']}>
+<Tabs.Tab>
+<Steps>
+### Step 1: Verify Hardware & Install Dependencies
+**1.1. Check GPU Detection**
+To verify that your system recognizes the NVIDIA GPU:
+```
+lspci | grep -i nvidia
+```
+**1.2. Install Required Components**
+**NVIDIA Driver:**
+1. Install the [NVIDIA Driver](https://www.nvidia.com/en-us/drivers/) for your GPU (NVIDIA driver **470.63.01 or higher**).
+2. Verify the installation:
+```
+nvidia-smi
+```
+The expected output should show your GPU model and driver version.
+**CUDA Toolkit:**
+1. Download and install the [CUDA Toolkit](https://developer.nvidia.com/cuda-downloads) (**CUDA 11.7 or higher**)
+2. Verify the installation:
+```
+nvcc --version
+```
+**Linux Additional Requirements:**
+1. Ensure the required packages are installed:
+```
+sudo apt update
+sudo apt install gcc-11 g++-11 cpp-11
+```
+2. Set up the CUDA environment:
+```
+export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
+```
+See the [detailed instructions](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions).
+### Step 2: Enable GPU Acceleration
+1. In Jan, navigate to **Settings** > **Hardware**
+2. Select and enable your preferred NVIDIA GPU(s)
+3. An app reload is required after the selection
 <Callout type="info">
 While **Vulkan** can enable NVIDIA GPU acceleration in the Jan app, **CUDA** is recommended for faster performance.
 </Callout>
-### AMD GPU
+</Steps>
+</Tabs.Tab>
+<Tabs.Tab>
 To enable the use of your AMD GPU in the Jan app, you need to activate the Vulkan support first by following the steps below:
-1. Open Jan application.
-2. Go to **Settings** -> **Advanced Settings** -> enable the **Experimental Mode**.
-3. Enable the **Vulkan Support** under the **GPU Acceleration**.
-4. Enable the **GPU Acceleration** and choose the GPU you want to use.
-5. A success notification saying **Successfully turned on GPU acceleration** will appear when GPU acceleration is activated.
+1. Open the Jan application
+2. Go to **Settings** → **Advanced Settings** → enable **Experimental Mode**
+3. Enable **Vulkan Support** under **GPU Acceleration**
+4. Enable **GPU Acceleration** and choose the AMD GPU you want to use
+5. A success notification saying **Successfully turned on GPU acceleration** will appear when GPU acceleration is activated
+</Tabs.Tab>
-### Intel Arc GPU
+<Tabs.Tab>
 To enable the use of your Intel Arc GPU in the Jan app, you need to activate the Vulkan support first by following the steps below:
-1. Open Jan application.
-2. Go to **Settings** -> **Advanced Settings** -> enable the **Experimental Mode**.
-3. Enable the **Vulkan Support** under the **GPU Acceleration**.
-4. Enable the **GPU Acceleration** and choose the GPU you want to use.
-5. A success notification saying **Successfully turned on GPU acceleration** will appear when GPU acceleration is activated.
+1. Open the Jan application
+2. Go to **Settings** → **Advanced Settings** → enable **Experimental Mode**
+3. Enable **Vulkan Support** under **GPU Acceleration**
+4. Enable **GPU Acceleration** and choose the Intel Arc GPU you want to use
+5. A success notification saying **Successfully turned on GPU acceleration** will appear when GPU acceleration is activated
+</Tabs.Tab>
+</Tabs>
-## Uninstalling Jan
-To uninstall Jan, follow the steps below:
+## Uninstall Jan
+Open **Terminal** and run these commands to remove all Jan-related data:
 <Tabs items={['Ubuntu', 'Fedora']}>
 <Tabs.Tab>
-### Ubuntu
 ```bash
 # Uninstall Jan
 sudo apt-get remove jan
@@ -276,7 +294,6 @@ rm -rf ~/.config/Jan/cache
 ```
 </Tabs.Tab>
 <Tabs.Tab>
-### Fedora
 ```bash
 # Uninstall Jan
@@ -291,9 +308,8 @@ rm -rf ~/.config/Jan/cache
 </Tabs.Tab>
 </Tabs>
-<Callout type="info">
-The deleted Data Folder cannot be restored.
-</Callout>
+<Callout type="warning">
+Deleted data folders cannot be restored. Make sure to back up any important data before proceeding with deletion.
+</Callout>
 {/* ## FAQs

View File

@@ -21,105 +21,89 @@ keywords:
 ---
 import { Tabs } from 'nextra/components'
-import { Callout } from 'nextra/components'
 import FAQBox from '@/components/FaqBox'
+import { Callout, Steps } from 'nextra/components'
 # Mac Installation
 Jan has been developed as a Mac Universal application, allowing it to run natively on both Apple Silicon and Intel-based Macs.
 ## Compatibility
+### Minimum Requirements
 Ensure that your system meets the following requirements to use Jan effectively:
-<Tabs items={['Mac Intel CPU', 'Mac Apple Silicon']}>
-<Tabs.Tab>
-<Tabs items={['Operating System', 'Memory', 'Disk']}>
-<Tabs.Tab>
-- macOS 13.6 or higher.
-</Tabs.Tab>
-<Tabs.Tab>
-- 8GB for running up to 3B models.
-- 16GB for running up to 7B models.
-- 32GB for running up to 13B models.
-</Tabs.Tab>
-<Tabs.Tab>
-- At least 10GB for app and model download.
-</Tabs.Tab>
-</Tabs>
-</Tabs.Tab>
-<Tabs.Tab>
-<Tabs items={['Operating System', 'Memory', 'Disk']}>
-<Tabs.Tab>
-- macOS 13.6 or higher.
-</Tabs.Tab>
-<Tabs.Tab>
-- 8GB for running up to 3B models.
-- 16GB for running up to 7B models.
-- 32GB for running up to 13B models.
-<Callout type="info">
-Apple Silicon Macs leverage Metal for GPU acceleration, providing faster performance than Intel Macs, which rely solely on CPU processing.
-</Callout>
-</Tabs.Tab>
-<Tabs.Tab>
-- At least 10GB for app and model download.
-</Tabs.Tab>
-</Tabs>
-</Tabs.Tab>
-</Tabs>
-## Installing Jan
+- **Operating System:** macOS 13.6 or higher
+- **Memory:**
+  - 8GB → up to 3B parameter models
+  - 16GB → up to 7B parameter models
+  - 32GB → up to 13B parameter models
+- **Storage:** 10GB+ free space
+### Mac Performance Guide
+<Callout type="info">
+**Apple Silicon Macs** leverage Metal for GPU acceleration, providing faster performance than **Intel-based Macs**, which rely solely on CPU processing.
+</Callout>
+**Apple Silicon (M1, M2, M3)**
+- Metal acceleration enabled by default, no configuration required
+- Optimized GPU-accelerated performance
+**Intel-based Mac**
+- CPU-only processing
+- Uses the regular processor and runs more slowly
+_To verify your Mac's processor architecture: Apple menu  → About This Mac._
+## Install Jan
To install Jan, follow the steps below:

<Steps>

### Step 1: Download Application

Jan provides three types of releases:

<Tabs items={['Stable Release', 'Beta Release', 'Nightly Release']}>
<Tabs.Tab>
Please download Jan from official distributions, or build it from source.
- [Download Jan's Stable Version](https://app.jan.ai/download/latest/mac-universal)
- Official Website: https://jan.ai/download
<Callout type="info">
Make sure to verify the URL to ensure that you're on the official Jan website.
</Callout>
</Tabs.Tab>
<Tabs.Tab>
Beta releases let you test new features, which may be buggy:
[Download Jan's Beta Version](https://app.jan.ai/download/beta/mac-universal)
<Callout type="info">
Keep in mind that this build might crash frequently or contain bugs!
</Callout>
</Tabs.Tab>
<Tabs.Tab>
Nightly releases are daily builds used by the internal team to test new features, and they can be very buggy:
[Download Jan's Nightly Version](https://app.jan.ai/download/nightly/mac-universal)
<Callout type="info">
Keep in mind that this build might crash frequently or contain bugs!
</Callout>
</Tabs.Tab>
</Tabs>
### Step 2: Install Application

1. Download and open the Jan installer (`.dmg` file).
2. Drag the Jan icon to the **Applications** folder.
3. Wait a few moments for the installation to complete.
4. Launch Jan from your Applications folder.

</Steps>

#### Install Jan with Homebrew

You can also install Jan using the following Homebrew command:

```bash
brew install --cask jan
```

<Callout type="warning">
- Ensure that you have installed Homebrew and its dependencies.
- Homebrew package installation is currently limited to **Apple Silicon Macs**, with support for Windows and Linux coming later.
</Callout>
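To confirm the cask installed, a quick check can be sketched as follows (assumes Homebrew is on your `PATH`; the fallback message is printed otherwise):

```shell
# List installed casks and look for Jan (prints a fallback if Homebrew/Jan is absent)
if brew list --cask 2>/dev/null | grep -qi '^jan$'; then
  echo "Jan installed via Homebrew"
else
  echo "Jan cask not found"
fi
```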
## Jan Data Folder

By default, Jan stores its data in the following directory:

```bash
# Default data directory
~/Library/Application\ Support/Jan/data
```

See [Jan Data Folder](/docs/data-folder) for more details about the data folder structure.
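To see how much disk space the data folder is using, a small sketch (the folder exists only after Jan has run at least once):

```shell
# Report the size of the Jan data folder, if present
DATA_DIR="$HOME/Library/Application Support/Jan/data"
if [ -d "$DATA_DIR" ]; then
  du -sh "$DATA_DIR"
else
  echo "Jan data folder not found at $DATA_DIR"
fi
```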
## Metal Acceleration

Jan is optimized for Apple Silicon, using `llama.cpp` as its main engine for processing AI tasks efficiently. It **automatically uses [Metal](https://developer.apple.com/documentation/metal)**, Apple's GPU acceleration framework, so you don't need to turn this feature on manually.

<Callout type="info">
Metal GPU acceleration is not supported on Intel-based Macs.
</Callout>

## Uninstall Jan

<Steps>

### Step 1: Remove Application

1. If Jan is currently open, exit the app.
2. Open **Finder**.
3. Navigate to **Applications**.
4. Locate Jan (or use the search bar).
5. Remove the application using any of these methods:
   - Drag it to the **Trash**
   - Right-click it and choose **Move to Trash**
   - Select it and press **Command-Delete**

### Step 2: Clean Up Data (Optional)

Open **Terminal** and run these commands to remove all Jan-related data:

```bash
# Remove all user data
rm -rf ~/jan
# Delete application data
rm -rf ~/Library/Application\ Support/Jan/data
# Delete application cache
rm -rf ~/Library/Application\ Support/Jan/cache
```

</Steps>

<Callout type="warning">
Deleted data folders cannot be restored. Make sure to back up any important data before proceeding with deletion.
</Callout>
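Before running `rm -rf` on the data folder, you may want a backup first; a minimal sketch (the destination path is just an example):

```shell
# Copy the Jan data folder to a dated backup directory before deleting it
BACKUP="$HOME/jan-backup-$(date +%Y%m%d)"
if cp -R "$HOME/Library/Application Support/Jan/data" "$BACKUP" 2>/dev/null; then
  echo "Backed up to $BACKUP"
else
  echo "No Jan data folder found; nothing to back up"
fi
```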
{/* ## FAQs
# Windows Installation

To install Jan desktop on Windows, follow the steps below:

## Compatibility

Ensure that your system meets the following requirements to use Jan effectively:
- **Operating System**: Windows 10 or higher
- **CPU**
<Tabs items={['Intel', 'AMD']}>
<Tabs.Tab>
- Intel: Haswell (Q2 2013) or newer
- Intel Celeron/Pentium: Tiger Lake (Q3 2020) or newer
</Tabs.Tab>
<Tabs.Tab>
- AMD: Excavator (Q2 2015) or newer
</Tabs.Tab>
</Tabs>
<Callout type="info">
Jan requires a processor with AVX2 support for best performance. See the [full list of supported processors](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2). Older processors with AVX or AVX-512 will work, though you may experience slower performance.
</Callout>
- **Memory (RAM)**
  - 8GB for up to 3B parameter models (int4)
  - 16GB for up to 7B parameter models (int4)
  - 32GB for up to 13B parameter models (int4)
<Callout type="info">
DDR2 RAM is supported, but newer RAM generations are recommended for better performance.
</Callout>
- **GPU (VRAM)**
  - 6GB for up to 3B models with `ngl` at 120 (full speed)
  - 8GB for up to 7B models with `ngl` at 120 (full speed)
  - 12GB for up to 13B models with `ngl` at 120 (full speed)
<Callout type="info">
A minimum of 6GB VRAM is recommended for NVIDIA, AMD, or Intel Arc GPUs.
</Callout>
- **Storage**: At least 10GB of free space for the application and model downloads
## Install Jan
To install Jan, follow the steps below:

<Steps>

### Step 1: Download Application

Jan provides three types of releases:

<Tabs items={['Stable Release', 'Beta Release', 'Nightly Release']}>
<Tabs.Tab>
Please download Jan from official distributions, or build it from source.
- [Download Jan's Stable Version](https://app.jan.ai/download/latest/win-x64)
- Official Website: https://jan.ai/download
<Callout type="info">
Make sure to verify the URL to ensure that you're on the official Jan website.
</Callout>
</Tabs.Tab>
<Tabs.Tab>
Beta releases let you test new features, which may be buggy:
[Download Jan's Beta Version](https://app.jan.ai/download/beta/win-x64)
<Callout type="info">
Keep in mind that this build might crash frequently or contain bugs!
</Callout>
</Tabs.Tab>
<Tabs.Tab>
Nightly releases are daily builds used by the internal team to test new features, and they can be very buggy:
[Download Jan's Nightly Version](https://app.jan.ai/download/nightly/win-x64)
<Callout type="info">
Keep in mind that this build might crash frequently or contain bugs!
</Callout>
</Tabs.Tab>
</Tabs>
### Step 2: Install Application

1. Once you have downloaded the Jan `.exe` file, open it.
2. Wait for the installation to complete.
3. Once installed, launch Jan on your machine.

</Steps>
## Data Folder

By default, Jan stores its data in the following directory:

```bash
# Default data directory
%APPDATA%/Jan/data
```

See [Jan Data Folder](/docs/data-folder) for more details about the data folder structure.
## GPU Acceleration

Jan can leverage your GPU to significantly improve model performance and inference speed. Here's how to enable GPU acceleration for different hardware:

<Tabs items={['NVIDIA GPU', 'AMD GPU', 'Intel Arc GPU']}>
<Tabs.Tab>
<Steps>

### Step 1: Verify Hardware & Install Dependencies

**1.1. Check GPU Detection**

To verify that your system recognizes the NVIDIA GPU:
- Right-click the desktop > **NVIDIA Control Panel**, or
- Check **Device Manager** > **Display Adapters**

**1.2. Install Required Components**

**NVIDIA Driver:**
1. Install the [NVIDIA Driver](https://www.nvidia.com/en-us/drivers/) for your GPU (driver version **470.63.01 or higher**).
2. Verify the installation:
```
nvidia-smi
```
The output should show your GPU model and driver version.

**CUDA Toolkit:**
1. Download and install the [CUDA Toolkit](https://developer.nvidia.com/cuda-downloads) (**CUDA 11.7 or higher**).
2. Verify the installation:
```
nvcc --version
```

### Step 2: Enable GPU Acceleration

1. In Jan, navigate to **Settings** > **Hardware**.
2. Select and enable your preferred NVIDIA GPU(s).
3. Reload the app after making your selection.

<Callout type="info">
While Jan supports both CUDA and Vulkan for NVIDIA GPUs, we strongly recommend using CUDA for optimal performance.
</Callout>

</Steps>
</Tabs.Tab>
<Tabs.Tab>
AMD GPUs require **Vulkan** support, which must be activated through **Experimental Mode**:
1. Launch Jan.
2. Navigate to **Settings** > **Advanced Settings**.
3. Enable **Experimental Mode**.
4. Under **GPU Acceleration**, enable **Vulkan Support**.
5. Enable **GPU Acceleration** and select your AMD GPU.
6. Reload the app after making your selection.
</Tabs.Tab>
<Tabs.Tab>
Intel Arc GPUs require **Vulkan** support, which must be activated through **Experimental Mode**:
1. Launch Jan.
2. Navigate to **Settings** > **Advanced Settings**.
3. Enable **Experimental Mode**.
4. Under **GPU Acceleration**, enable **Vulkan Support**.
5. Enable **GPU Acceleration** and select your Intel Arc GPU.
6. Reload the app after making your selection.
</Tabs.Tab>
</Tabs>
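As a quick pre-flight check for the NVIDIA path, the two verification commands above can be combined into one sketch that also reports which tool is missing (assumes both should be on your `PATH`):

```shell
# Check NVIDIA driver and CUDA Toolkit availability before enabling acceleration
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
else
  echo "nvidia-smi not found - install the NVIDIA driver first"
fi
if command -v nvcc >/dev/null 2>&1; then
  nvcc --version
else
  echo "nvcc not found - install the CUDA Toolkit first"
fi
```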
## Uninstall Jan

<Steps>

### Step 1: Remove Application through Control Panel

1. Open the **Control Panel**.
2. Go to the **Programs** section.
3. Click **Uninstall a program**.
4. Search for **Jan**.
5. Click the **three dots icon** > **Uninstall**.
6. Click **Uninstall** again to confirm.
7. Click **OK**.

### Step 2: Handle Jan Data

When prompted with `Do you also want to delete the DEFAULT Jan data folder at C:\Users\{username}\Jan?`:
- Click **OK** to remove all Jan data.
- Click **Cancel** to keep your Jan data for future installations.

### Step 3: Clean Up Remaining Files

To ensure a complete uninstallation, remove the app cache:
1. Navigate to `C:\Users\{username}\AppData\Roaming`.
2. Delete the `Jan` folder.

</Steps>

<Callout type="warning">
Deleted data folders cannot be restored. Make sure to back up any important data before proceeding with deletion.
</Callout>
{/* ## FAQs
<FAQBox title="What are Nightly Releases, and how can I access them?">
To turn off the extension, follow the steps below:
![Extensions](./_assets/extensions-page2.png)
<br/>
3. Click the slider button to turn off the extension.
4. Restart the app to see that the extension has been disabled.
## Model Management

The Model Management extension allows Jan's app to download specific models from Hugging Face.
![Settings](./_assets/settings.png)
<br/>
3. Under the **Core Extensions** section, select the **Model Management** extension.
4. Enter the HuggingFace access token.
## System Monitor

The System Monitor extension now offers enhanced customization for app logging. Users can toggle the application logging feature on or off and set a custom interval for clearing the app logs. To configure the app log feature, follow these steps:
You'll be able to use it with [Continue.dev](https://jan.ai/integrations/coding/).
### Features

- Download popular open-source LLMs (Llama3, Gemma, Mistral, ...) from the [Model Hub](./docs/models/manage-models.mdx), or import any GGUF model
- Connect to [cloud model services](https://jan.ai/docs/remote-inference/openai) (OpenAI, Anthropic, Mistral, Groq, ...)
- [Chat](./docs/threads.mdx) with AI models and [customize their parameters](./docs/models/model-parameters.mdx) in an intuitive interface
- Use the [local API server](https://jan.ai/api-reference) with an OpenAI-equivalent API
- Customize Jan with [extensions](https://jan.ai/docs/extensions)
### Philosophy

Jan is built to be [user-owned](about#-user-owned):
- Open source via the [AGPLv3 license](https://github.com/janhq/jan/blob/dev/LICENSE)
- [Local-first](https://www.inkandswitch.com/local-first/), with all data stored locally
- Runs 100% offline, with privacy by default
- Free choice of AI models, both local and cloud-based
- We do not [collect or sell user data](/privacy)
<Callout>
Jan is built on the shoulders of many upstream open-source projects:
## FAQs
<FAQBox title="What is Jan?">
Jan is a customizable AI assistant that runs offline on your computer - a privacy-focused alternative to ChatGPT, with optional cloud AI support.
</FAQBox>
<FAQBox title="How do I use Jan?">
</FAQBox>
<FAQBox title="Is Jan compatible with my operating system?">
Jan is available for Mac, Windows, and Linux, ensuring wide compatibility.
GPU-wise, Jan supports:
- NVIDIA GPUs (CUDA)
- AMD GPUs (Vulkan)
- Intel Arc GPUs (Vulkan)
- Other GPUs with Vulkan support
</FAQBox>
<FAQBox title="Do you use my data?">
No data is collected. Everything stays local on your device.
<Callout type="warning">
When using cloud AI services (like GPT-4 or Claude) through Jan, their data collection is outside our control. Please check their privacy policies.
</Callout>
You can help improve Jan by opting in to share anonymous basic usage data (such as feature usage and user counts). Even then, your chats and personal information are never collected. Read more about what data you can contribute at [Privacy](./docs/privacy.mdx).
</FAQBox>
<FAQBox title="Do you sell my data?">
No, and we never will.
</FAQBox>
<FAQBox title="How does Jan ensure my data remains private?">
Jan prioritizes your privacy by running open-source AI models 100% offline on your computer. Conversations, documents, and files stay on your device in the Jan Data Folder, located at:
- Windows: `%APPDATA%/Jan/data`
- Linux: `$XDG_CONFIG_HOME/Jan/data` or `~/.config/Jan/data`
- macOS: `~/Library/Application Support/Jan/data`
</FAQBox>
<FAQBox title="What does Jan stand for?">
</FAQBox>
<FAQBox title="Can I use Jan without an internet connection?">
Yes, Jan can run without an internet connection, but you'll need to download a local model first. Once you've downloaded your preferred models, Jan works entirely offline by default.
</FAQBox>
<FAQBox title="Are there any costs associated with using Jan?">
Jan is free and open-source. There are no subscription fees or hidden costs for local models and features.

To use cloud AI models (like GPT-4 or Claude):
- You'll need your own API keys and will pay the standard rates charged by those providers.
- Jan doesn't add any markup.
</FAQBox>
<FAQBox title="What types of AI models can I download or import with Jan?">
- Models from Jan's Hub are recommended for best compatibility.
- You can also import GGUF models from Hugging Face or your device.
</FAQBox>
<FAQBox title="How do I customize Jan using the programmable API?">
Jan has an extensible architecture like VSCode and Obsidian - you can build custom features using our extensions API. Most of Jan's features are actually built as extensions.
</FAQBox>
<FAQBox title="How can I contribute to Jan's development or suggest features?">
</FAQBox>
<FAQBox title="How can I get involved with the Jan community?">
Joining our [Discord](https://discord.gg/qSwXFx6Krr) is a great way to get involved with the community.
</FAQBox>
<FAQBox title="How do I troubleshoot issues with installing or using Jan?">
For troubleshooting, please visit [Troubleshooting](./docs/troubleshooting.mdx). If you can't find what you need in the troubleshooting guide, please reach out for extra help on our [Discord](https://discord.com/invite/FTk2MvZwJH) in the **#🆘|get-help** channel.
</FAQBox>
<FAQBox title="Can I self-host?">
Yes! We love the self-hosted movement. You can:
- [Download Jan](./download.mdx) and run it directly.
- Fork and build from our [GitHub](https://github.com/janhq/jan) repository.
</FAQBox>
<FAQBox title="Are you hiring?">
Yes! We love hiring from our community. Check out our open positions at [Careers](https://homebrew.bamboohr.com/careers).
</FAQBox>
{
  "manage-models": {
    "title": "Model Management",
    "href": "/docs/models/manage-models"
  },
  "model-parameters": {
---
import { Callout, Steps } from 'nextra/components'
# Model Management

This guide provides comprehensive instructions on adding, customizing, and deleting models within Jan.

## Local Models

Jan offers flexible options for managing local models through its [Cortex](https://cortex.so/) engine. Currently, Jan only supports **GGUF format** models.
<Callout type="info">
Local models run directly on your computer, which means they use your computer's memory (RAM) and processing power. Please choose models carefully based on your hardware specifications for [Mac](/docs/desktop/mac#compatibility), [Windows](/docs/desktop/windows#compatibility), & [Linux](/docs/desktop/linux#compatibility).
</Callout>
### Add Models

#### 1. Download from Jan Hub (Recommended)

The easiest way to get started is using Jan's built-in model hub:
1. Go to the **Hub**.
2. Browse available models and click on any model to see its details.
3. Choose a model that fits your needs and hardware specifications.
4. Click **Download** on your chosen model.

<Callout type="info">
Jan will indicate if a model might be **Slow on your device** or have **Not enough RAM**, based on your system specifications.
</Callout>

<br/>
![Download Model](../_assets/model-management-01.png)
<br/>
#### 2. Import from Hugging Face

You can import GGUF models directly from [Hugging Face](https://huggingface.co/):

##### Option A: Import in Jan

1. Visit [Hugging Face Models](https://huggingface.co/models).
2. Find a GGUF model you want to use.
3. Copy the **model ID** (e.g., `TheBloke/Mistral-7B-v0.1-GGUF`) or its **URL**.
4. In Jan, paste the model ID/URL into the **Search** bar in the **Hub**, or in **Settings** > **My Models**.
5. Select your preferred quantized version to download.

<br/>
![Download Model](../_assets/model-management-02.png)
<br/>
##### Option B: Use a Deep Link

You can use Jan's deep link feature to quickly import models:
1. Visit [Hugging Face Models](https://huggingface.co/models).
2. Find the GGUF model you want to use.
3. Copy the **model ID**, for example: `TheBloke/Mistral-7B-v0.1-GGUF`.
4. Create a **deep link URL** in this format:
```
jan://models/huggingface/<model_id>
```
5. Enter the URL in your browser and press **Enter**, for example:
```
jan://models/huggingface/TheBloke/Mistral-7B-v0.1-GGUF
```
6. A prompt will appear: `This site is trying to open Jan`. Click **Open** to open the Jan app.
7. Select your preferred quantized version to download.

<Callout type="warning">
Deep linking won't work for models requiring API tokens or usage agreements. You'll need to download these models manually through the Hugging Face website.
</Callout>
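The deep-link format above can be sketched in a couple of lines (the model ID is just an example):

```shell
# Build a Jan deep link for a Hugging Face GGUF model
MODEL_ID="TheBloke/Mistral-7B-v0.1-GGUF"
echo "jan://models/huggingface/${MODEL_ID}"
# On macOS, `open "jan://models/huggingface/${MODEL_ID}"` would trigger the browser-style prompt
```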
<br/>
![Download Model](../_assets/model-management-03.png)
<br/>
#### 3. Import Local Files

If you already have GGUF model files on your computer:
1. In Jan, go to the **Hub** or **Settings** > **My Models**.
2. Click **Import Model**.
3. Select your **GGUF** file.
4. Choose how you want to import it:
   - **Link Files**: creates symbolic links to your model files (saves space)
   - **Duplicate**: makes a copy of the model files in Jan's directory
5. Click **Import** to complete.
<Callout type="warning">
You are responsible for your own **model configurations**; use imported models at your own risk. Misconfigurations may result in lower-quality or unexpected outputs.
</Callout>
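The difference between **Link Files** and **Duplicate** is essentially a symbolic link versus a full copy. A rough illustration (the file names and paths here are hypothetical stand-ins, not Jan's actual import code; symlinks require a filesystem and OS that permit them):

```python
import os
import shutil
import tempfile

workdir = tempfile.mkdtemp()
source = os.path.join(workdir, "my-model.gguf")
with open(source, "wb") as f:
    f.write(b"\x00" * 1024)  # stand-in for real model weights

# "Link Files": a symlink points at the original, so no extra disk space is used
linked = os.path.join(workdir, "linked.gguf")
os.symlink(source, linked)

# "Duplicate": a full copy doubles disk usage but survives if the original moves
copied = os.path.join(workdir, "copied.gguf")
shutil.copy2(source, copied)

print(os.path.islink(linked))   # True
print(os.path.getsize(copied))  # 1024
```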
<br/>
![Import Model](../_assets/model-management-04.png)
<br/>
#### 4. Manual Setup
For advanced users who want to add a model that is not available in the Jan **Hub**:
1. Navigate to `~/jan/data/models/`
2. Create a new folder for your model
3. Add a `model.json` file with your configuration:
```json
{
  "id": "<unique_identifier_of_the_model>",
  "object": "<type_of_object, e.g., model, tool>",
  "name": "<name_of_the_model>",
  "settings": {
    "ctx_len": "<maximum_context_length>",
    "prompt_template": "<prompt_template_of_the_model>"
  },
  "engine": "<engine_or_platform_the_model_runs_on>",
  "source": "<url_or_source_of_the_model_information>"
}
```
Key fields to configure:
1. **Settings** is where you set your engine configurations.
2. [**Parameters**](/docs/models#model-parameters) are the adjustable settings that affect how your model operates or processes data. The fields in `parameters` are typically general and can be the same across models. Here is an example of model parameters:
```json
"parameters": {
    "temperature": 0.7,
    "top_p": 0.95,
    "stream": true,
    "max_tokens": 4096,
    "frequency_penalty": 0,
    "presence_penalty": 0
}
```
### Delete Models
1. Go to **Settings** > **My Models**
2. Find the model you want to remove
3. Click the **three dots** icon next to it and select **Delete Model**
<br/>
![Delete Model](../_assets/model-management-05.png)
<br/>
## Cloud Models
<Callout type="info">
When using cloud models, be aware of any associated costs and rate limits from the providers.
</Callout>
Jan supports connecting to various cloud AI providers that are OpenAI API-compatible, including OpenAI (GPT-4, o1, ...), Anthropic (Claude), Groq, Mistral, and more.
1. Open **Settings**
2. Under the **Model Provider** section in the left sidebar (OpenAI, Anthropic, etc.), choose a provider
3. Enter your API key
4. The activated cloud models will be available in the model selector in **Threads**
<br/>
![Cloud Models](../_assets/model-management-06.png)
<br/>
Cloud models cannot be deleted, but you can hide them by disabling their respective provider engines in **Settings** > **Engines**.
<br/>
![Disable Cloud Provider](../_assets/model-management-07.png)
<br/>
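"OpenAI API-compatible" means these providers all accept the same chat-completions request shape. The sketch below only builds such a payload for illustration (the model name, key placeholder, and field values are assumptions; nothing is actually sent):

```python
import json

# Typical OpenAI-style chat-completions payload; the exact model name
# and base URL depend on the provider you configured in Jan.
payload = {
    "model": "gpt-4",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,
}
headers = {
    "Authorization": "Bearer <your-api-key>",  # the key you entered in Jan's settings
    "Content-Type": "application/json",
}

print(json.dumps(payload, indent=2))
```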
---
import { Callout, Steps } from 'nextra/components'
# Model Parameters
To customize model settings for a conversation:
1. In any **Thread**, click the **Model** tab in the **right sidebar**
2. You can customize the following parameter types:
   - **Inference Parameters:** Control how the model generates responses
   - **Model Parameters:** Define the model's core properties and capabilities
   - **Engine Parameters:** Configure how the model runs on your hardware
<br/>
![Model Parameters](../_assets/model-parameters.png)
<br/>
### Inference Parameters
These settings determine how the model generates and formats its outputs.
| Parameter | Description |
|---------------------|-------------|
| **Temperature** | - Controls response randomness.<br></br>- Lower values (0.0-0.5) give focused, deterministic outputs. Higher values (0.8-2.0) produce more creative, varied responses. |
| **Top P** | - Sets the cumulative probability threshold for token selection.<br></br>- Lower values (0.1-0.7) make responses more focused and conservative. Higher values (0.8-1.0) allow more diverse word choices. |
| **Stream** | - Enables real-time response streaming. |
| **Max Tokens** | - Limits the length of the model's response.<br></br>- A higher limit benefits detailed and complex responses, while a lower limit helps maintain conciseness. |
| **Stop Sequences** | - Defines tokens or phrases that will end the model's response.<br></br>- Use common concluding phrases or tokens specific to your application's domain to ensure outputs terminate appropriately. |
| **Frequency Penalty** | - Reduces word repetition.<br></br>- Higher values (0.5-2.0) encourage more varied language, which is useful for creative writing and content generation. |
| **Presence Penalty** | - Encourages the model to explore new topics.<br></br>- Higher values (0.5-2.0) help prevent the model from fixating on already-discussed subjects. |
### Model Parameters
These settings define and configure the model's behavior.
| Parameter | Description |
|---------------------|-------------|
| **Prompt Template** | A structured format that guides how the model should respond. It contains **placeholders** and **instructions** that help shape the model's output in a consistent way. |
### Engine Parameters
These settings control how the model runs on your hardware.
| Parameter | Description |
|---------------------|-------------|
| **Number of GPU Layers (ngl)** | - Controls how many layers of the model run on your GPU.<br></br>- More layers on the GPU generally means faster processing, but requires more GPU memory. |
| **Context Length** | - Controls how much text the model can consider at once.<br></br>- A longer context allows the model to handle more input, but uses more memory and runs slower.<br></br>- The maximum context length varies with the model used. |
<Callout type="info">
By default, Jan uses the smaller of **8192** and the model's maximum context length; you can adjust this based on your needs.
</Callout>
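To get a feel for how **Temperature** and **Top P** interact, here is a toy next-token sampler over a made-up four-token distribution (purely illustrative; real inference engines apply this over the full vocabulary):

```python
import math
import random

def sample(logits, temperature=0.7, top_p=0.95, rng=None):
    """Toy temperature + nucleus (top-p) sampling over {token: logit}."""
    rng = rng or random.Random(0)
    # Temperature scales the logits: lower -> sharper, more deterministic
    scaled = {t: l / temperature for t, l in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = sorted(((t, math.exp(v) / z) for t, v in scaled.items()),
                   key=lambda x: -x[1])
    # Nucleus: keep the smallest set of tokens whose cumulative prob >= top_p
    kept, cum = [], 0.0
    for t, p in probs:
        kept.append((t, p))
        cum += p
        if cum >= top_p:
            break
    # Draw one token from the kept set, renormalized
    total = sum(p for _, p in kept)
    r = rng.random() * total
    for t, p in kept:
        r -= p
        if r <= 0:
            return t
    return kept[-1][0]

logits = {"the": 2.0, "a": 1.0, "cat": 0.5, "zebra": -1.0}
print(sample(logits, temperature=0.1))  # low temperature is near-greedy: "the"
```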
---
title: Installation
description: Get started quickly with Jan, a ChatGPT-alternative that runs on your own computer, with a local API server. Learn how to install Jan and select an AI model to start chatting.
sidebar_position: 2
keywords:
---
import { Callout, Steps } from 'nextra/components'
<Steps>
### Step 1: Install Jan
1. [Download Jan](/download)
2. Install the application on your system ([Mac](/docs/desktop/mac), [Windows](/docs/desktop/windows), [Linux](/docs/desktop/linux))
3. Launch Jan
Once installed, you'll see the Jan application interface with no local models pre-installed. You'll be able to:
- Download and run local AI models
- Connect to cloud AI providers if desired
<br/>
![Default State](./_assets/quick-start-01.png)
<br/>
### Step 2: Download a Model
Jan offers various local AI models, from smaller, efficient models to larger, more capable ones:
1. Go to **Hub**
2. Browse available models and click on any model to see its details
3. Choose a model that fits your needs and hardware specifications
4. Click **Download** to begin installation
<Callout type="info">
Local models run directly on your computer, which means they use your computer's memory (RAM) and processing power. Please choose models carefully based on your hardware specifications ([Mac](/docs/desktop/mac#minimum-requirements), [Windows](/docs/desktop/windows#compatibility), [Linux](/docs/desktop/linux#compatibility)).
</Callout>
For more model installation methods, please visit [Model Management](/docs/models/manage-models).
<br/>
![Download a Model](./_assets/model-management-01.png)
<br/>
### Step 3: Turn on GPU Acceleration (Optional)
While the model downloads, let's optimize your hardware setup. If you have a compatible graphics card, you can significantly boost model performance by enabling GPU acceleration.
1. Navigate to **Settings** → **Hardware**
2. Enable your preferred GPU(s)
3. Reload the app after making your selection
<Callout type="info">
Ensure you have installed all required dependencies and drivers before enabling GPU acceleration. See **GPU Setup Guide** on [Windows](/docs/desktop/windows#gpu-acceleration) & [Linux](/docs/desktop/linux#gpu-acceleration) for detailed instructions.
</Callout>
<br/>
![Turn on GPU acceleration](./_assets/trouble-shooting-01.png)
### Step 4: Customize Assistant Instructions
Once your model has downloaded, you're ready to start your first conversation with Jan. You can customize how it responds by setting specific instructions:
1. In any **Thread**, click the **Assistant** tab in the **right panel**
2. Enter your instructions in the **Instructions** field to define how Jan should respond
You can modify these instructions at any time during your conversation to adjust Jan's behavior for that specific thread.
<br/>
![Assistant Instruction](./_assets/quick-start-02.png)
<br/>
### Step 5: Start Chatting and Fine-tune Settings
Now that your model is downloaded and instructions are set, you can begin chatting with Jan. Type your message in the **input field** at the bottom of the thread to start the conversation.
You can further customize your experience by:
- Adjusting [model parameters](/docs/models/model-parameters) in the **Model** tab in the **right panel**
- Trying different models for different tasks by clicking the **model selector** in **Model** tab or **input field**
- Creating new threads with different instructions and model configurations
<br/>
![Chat with a Model](./_assets/model-parameters.png)
<br/>
### Step 6: Connect to cloud models (Optional)
Jan supports both local and remote AI models. You can connect to remote AI services that are OpenAI API-compatible, including: OpenAI (GPT-4, o1,...), Anthropic (Claude), Groq, Mistral, and more.
1. Open any **Thread**
2. Click the **Model** tab in the **right panel** or the **model selector** in input field
3. Choose the **Cloud** tab
4. Choose your preferred provider (Anthropic, OpenAI, etc.)
5. Click the **Add** icon next to the provider
6. Obtain a valid API key from your chosen provider, and ensure the key has sufficient credits and appropriate permissions
7. Copy and insert your **API Key** in Jan
See [Remote APIs](/docs/remote-models/openai) for detailed configuration.
<br/>
![Connect Remote API](./_assets/quick-start-03.png)
<br/>
</Steps>
---
import { Tabs, Steps, Callout } from 'nextra/components'
import { Settings, EllipsisVertical, Plus, FolderOpen, Pencil } from 'lucide-react'
# Settings
This guide explains how to customize your Jan application settings.
To access **Settings**, click the <Settings width={16} height={16} style={{display:"inline"}}/> icon in the bottom left corner of Jan.
<Callout type="info">
Settings are stored in a `cortex.db` file in the [Jan Data Folder](/docs/data-folder), ensuring they persist across sessions.
</Callout>
## My Models
At **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **My Models**, you can manage all your installed AI models:
### Manage Downloaded Models
**1. Import Models:** You can import models here, just as you can in the **Hub**:
- Option 1: Import from [Hugging Face](/docs/models/manage-models#option-a-import-in-jan) by entering the model's Hugging Face URL in the **Search** bar
- Option 2: [Import local files](/docs/models/manage-models#option-a-import-in-jan)
<br/>
![Import from HF](./_assets/model-management-04.png)
<br/>
**2. Remove Models:** Follow the instructions in [Delete Local Models](/docs/models/manage-models#delete-models)
<br/>
![Remove Model](./_assets/model-management-05.png)
<br/>
**3. Start Models**
1. Choose the model you want to start
2. Click the **three dots** (<EllipsisVertical width={16} height={16} style={{display:"inline"}}/>) icon next to the model
3. Select **Start Model**
<br/>
![Start Model](./_assets/settings-02.png)
<br/>
### Manage Cloud Models
1. To install cloud models, click the **Add** (<Plus width={16} height={16} style={{display:"inline"}}/>) icon next to your preferred provider (e.g., Anthropic, OpenAI, Groq) and add your **API Key**. See [detailed instructions](/docs/remote-models/openai) for each provider.
2. Once a provider is installed, you can use its models & manage its settings by clicking on the **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) icon next to it.
<br/>
![Manage Cloud Provider](./_assets/settings-03.png)
<br/>
## Preferences
At **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Preferences**, you can customize how Jan looks.
### Appearance & Theme
Control the visual theme of Jan's interface.
- **Joi Light:** Clean, bright theme for daytime use
- **Joi Dark:** Dark theme with high contrast
- **Dark Dimmed:** Softer dark theme to reduce eye strain
- **Night Blue:** Dark theme with blue accents
To change:
1. Choose your preferred **Appearance** from the dropdown
2. With **Joi Dark** & **Joi Light**, you can choose additional options:
- **Solid:** Traditional opaque background
- **Translucent:** Semi-transparent interface
3. Changes apply immediately
<br/>
![Appearance](./_assets/settings-04.png)
<br/>
### Chat Width
Adjust how chat content is displayed.
1. In the **Chat Width** section, select either:
- **Full Width:** Maximizes the chat area to use the full width of the window. This is ideal for viewing longer messages or when working with code snippets that benefit from more horizontal space.
- **Compact Width:** Creates a more focused chat experience with a narrower conversation view. This setting is useful for reading conversations more comfortably, especially on larger screens.
2. Changes apply immediately to your conversation view
<br/>
![Chat Width](./_assets/settings-05.png)
<br/>
### Spell Check
Jan includes a built-in spell check feature to help catch typing errors in your messages.
1. Switch the toggle on to enable spell checking, or off to disable it
2. Changes apply immediately for all new messages you type
<br/>
![Spell Check](./_assets/settings-06.png)
<br/>
## Keyboard Shortcuts
At **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Keyboard shortcuts**, you can see Jan's shortcuts list:
**1. Thread Management**
- `⌘ N` - Create a new thread
- `⌘ Shift Backspace` - Delete current active thread
- `⌘ Shift C` - Clean current active thread
**2. Navigation**
- `⌘ B` - Toggle left panel
- `⌘ Shift B` - Toggle right panel
- `⌘ ,` - Navigate to settings
**3. Message Input**
- `Enter` - Send a message (in input field)
- `Shift Enter` - Insert a new line (in input field)
> Note: On **Windows** and **Linux**, use `Ctrl` (Control) instead of `⌘` (Command)
## Hardware
At **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Hardware**, you can monitor and manage your system resources when running local models.
### CPU & RAM
<Callout type="info">
See detailed minimum system requirements to run local models on [Mac](/docs/desktop/mac#compatibility), [Windows](/docs/desktop/windows#compatibility) & [Linux](/docs/desktop/linux#compatibility).
</Callout>
- **CPU:** Important for model processing speed. With CPU-only inference, 20-90% usage while a model is running is normal.
- **RAM:**
- Different models require different amounts of RAM.
- When running local models, please keep at least **4GB free**.
- If usage is near max, try closing other applications or use smaller models.
### GPU Acceleration
<Callout type="info">
Ensure you have installed all required dependencies and drivers before enabling GPU acceleration. See **GPU Setup Guide** on [Windows](/docs/desktop/windows#gpu-acceleration) & [Linux](/docs/desktop/linux#gpu-acceleration) for detailed instructions.
</Callout>
Turn on GPU acceleration to improve performance:
1. Select and **enable** your preferred GPU(s)
2. Reload the app after making your selection
<br/>
![Hardware](./_assets/trouble-shooting-01.png)
<br/>
**GPU Performance Optimization**
- Monitor VRAM usage - should not exceed 90%. Higher **VRAM** capacity typically enables better performance for larger language models.
- Adjust `ngl` ([number of GPU Layers](/docs/models/model-parameters#engine-parameters)) in **Model** settings:
  - Higher `ngl` = more VRAM used, faster performance
  - Lower `ngl` = less VRAM used, slower performance
  - Start with 35 layers for 8GB of VRAM. Increase if you have more VRAM available; decrease if you see **out-of-memory** errors.
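As a rough rule of thumb (an assumption for illustration only; actual usage also depends on quantization, context length, and KV cache size), you can estimate how many layers fit in VRAM by spreading the model's file size evenly across its layers:

```python
def max_gpu_layers(model_size_gb: float, total_layers: int,
                   vram_gb: float, reserve_gb: float = 1.5) -> int:
    """Rough estimate of ngl: assumes VRAM cost is spread evenly across
    layers and reserves some VRAM for the KV cache and overhead."""
    per_layer_gb = model_size_gb / total_layers
    usable = max(vram_gb - reserve_gb, 0)
    return min(total_layers, int(usable / per_layer_gb))

# e.g. a ~4 GB 7B GGUF with 32 layers on an 8 GB GPU
print(max_gpu_layers(4.0, 32, 8.0))  # 32 (all layers fit in this estimate)
```

If the estimate is lower than the model's layer count, start there and adjust based on observed VRAM usage.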
## Privacy
At **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Privacy**, you can control analytics & logs in Jan:
### Analytics
Jan is built with privacy at its core. By default, no data is collected. Everything stays local on your device.
You can help improve Jan by sharing anonymous usage data:
1. Toggle on **Analytics** to share anonymous data
2. You can change this setting at any time
<Callout type="info">
Read more about what we collect from opted-in users at [Privacy](/docs/privacy).
</Callout>
<br/>
![Analytics](./_assets/settings-07.png)
<br/>
### Log Management
**1. View Logs**
- Logs are stored at:
- App log: `~/Library/Application\ Support/jan/data/logs/app.log`
- Cortex log: `~/Library/Application\ Support/jan/data/logs/cortex.log`
- To open logs from Jan's interface: at **Logs**, click <FolderOpen width={16} height={16} style={{display:"inline"}}/> icon to open App Logs & Cortex Logs:
<br/>
![View Logs](./_assets/settings-08.png)
<br/>
**2. Clear Logs**
Jan retains your logs for only **24 hours**. To remove all logs from Jan, at **Clear Logs**, click the **Clear** button:
<Callout type="warning">
This action cannot be undone.
</Callout>
<br/>
![Clear Logs](./_assets/settings-09.png)
<br/>
## Advanced Settings
At **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**, Jan stores settings for advanced use cases.
### Experimental Mode
<Callout type="warning">
Experimental features are unstable and recommended only for testing purposes. Please enable them with caution!
</Callout>
Current experimental features:
| Feature | Description |
|---------|-------------|
| [Tools](/docs/tools/retrieval) | Advanced tooling capabilities including web search, file operations, and code interpretation |
| Vulkan Settings | GPU acceleration using the Vulkan API for improved model performance |
| [Jan Quick Ask](/docs/settings#jan-quick-ask) | Streamlined interface for rapid AI queries and responses |
To try out these beta features, turn on the **Experimental Mode** setting:
<br/>
![Experimental Mode](./_assets/settings-10.png)
<br/>
### Jan Data Folder
Jan stores your data locally in your own filesystem in a universal file format. See the detailed [Jan Folder Structure](/docs/data-folder#folder-structure).
**1. Open Jan Data Folder**
At **Jan Data Folder**, click the <FolderOpen width={16} height={16} style={{display:"inline"}}/> icon to open Jan application's folder:
<br/>
![Open Jan Data Folder](./_assets/settings-11.png)
<br/>
**2. Edit Jan Data Folder**
1. At **Jan Data Folder**, click the <Pencil width={16} height={16} style={{display:"inline"}}/> icon to edit Jan application's folder
2. Choose a new directory & click **Select**; make sure the new folder is empty
3. A confirmation pop-up shows up:
> Are you sure you want to relocate Jan Data Folder to `new directory`? Jan Data Folder will be duplicated into the new location while the original folder remains intact. An app restart will be required afterward.
4. Click **Yes, Proceed**
<br/>
![Edit Jan Data Folder](./_assets/settings-12.png)
<br/>
<Callout type="warning">
Uninstalling Jan on Windows or Linux will delete the default Jan Data Folder.
</Callout>
### HTTPS Proxy
HTTPS Proxy encrypts data between your browser and the internet, making it hard for outsiders to intercept or read. It also helps you maintain your privacy and security while bypassing regional restrictions on the internet.
<Callout type="info">
- Model download speeds may be affected by the encryption/decryption process and your cloud service provider's networking
- HTTPS Proxy does not affect remote model usage
</Callout>
Once you've set up your HTTPS proxy server, follow the steps below:
1. **Enable** the proxy toggle
2. Enter your proxy server details in the following format:
```
http://<user>:<password>@<domain or IP>:<port>
```
Where:
- `<user>`: Your proxy username (if authentication is required)
- `<password>`: Your proxy password (if authentication is required)
- `<domain or IP>`: Your proxy server's domain name or IP address
- `<port>`: The port number for the proxy server
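As a sanity check, you can assemble and inspect the full proxy URL from its parts before pasting it into Jan (the values below are placeholders, not real credentials):

```shell
# Placeholder values — substitute your own proxy details
PROXY_USER="alice"
PROXY_PASS="s3cret"
PROXY_HOST="proxy.example.com"
PROXY_PORT="8080"
PROXY_URL="http://${PROXY_USER}:${PROXY_PASS}@${PROXY_HOST}:${PROXY_PORT}"
echo "$PROXY_URL"
```

If your username or password contains special characters such as `@` or `:`, they must be percent-encoded in the URL.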
<br/>
![HTTPS Proxy](./_assets/settings-13.png)
<br/>
### Ignore SSL Certificates
This setting allows Jan to accept self-signed or unverified SSL certificates. This may be necessary when:
- Working with corporate proxies using internal certificates
- Testing in development environments
- Connecting through specialized network security setups
<Callout type="info">
Only enable this option if you trust your network environment.
</Callout>
<br/>
![Ignore SSL Certificates](./_assets/settings-14.png)
<br/>
### Jan Quick Ask
Jan Quick Ask provides a faster way to interact with Jan without opening the full application window. It's designed for quick queries and responses when you need immediate assistance.
<Callout type="warning">
This feature is currently experimental and must be enabled through [Experimental Mode](/docs/settings#experimental-mode) in **Advanced Settings**.
</Callout>
1. Once you've turned on [Experimental Mode](/docs/settings#experimental-mode), toggle on **Jan Quick Ask**
2. An app restart is required upon selection
<br/>
![Jan Quick Ask](./_assets/settings-15.png)
<br/>
3. Once enabled, you can open the **Quick Ask** overlay with the default shortcut: `Cmd/Ctrl` + `J`
<br/>
![Jan Quick Ask](./_assets/settings-16.png)
<br/>
### Factory Reset
Reset to Factory Settings restores Jan to its initial state by erasing all user data, including downloaded models and chat history. This action is irreversible and should only be used as a last resort when experiencing serious application issues.
<Callout type="warning">
This action cannot be undone. All data will be permanently deleted.
</Callout>
Only use factory reset if:
- The application is corrupted
- You're experiencing persistent technical issues that other solutions haven't fixed
- You want to completely start fresh with a clean installation

To begin the process:
1. At **Reset to Factory Settings**, click the **Reset** button
<br/>
![Factory Reset](./_assets/settings-17.png)
<br/>
2. In the confirmation dialog:
   - Type the word **RESET** to confirm
   - Optionally check **Keep the current app data location** to maintain the same data folder
   - Click **Reset Now**
3. An app restart is required upon confirmation
<br/>
![Factory Reset](./_assets/settings-18.png)
---
title: Keyboard Shortcuts
description: Lists all the available keyboard shortcuts for Windows, Mac, and Linux.
keywords:
[
Jan,
Customizable Intelligence, LLM,
local AI,
privacy focus,
free and open source,
private and offline,
conversational AI,
no-subscription fee,
large language models,
Advanced Settings,
HTTPS Proxy,
SSL,
settings,
Jan settings,
]
---
import { Tabs, Steps, Callout } from 'nextra/components'
## Keyboard Shortcuts
To find the list of all the available shortcuts within the Jan app, please follow the steps below:
1. Navigate to the main dashboard.
2. Click the **Gear Icon (⚙️)** on the bottom left of your screen.
<br/>
![Settings](./_assets/settings.png)
<br/>
3. Click the **Hotkey & Shortcut**.
<br/>
![Keyboard Shortcut](./_assets/shortcut.png)
<br/>
Here are some of the keyboard shortcuts that you can use in Jan.
<Tabs items={['Mac', 'Windows', 'Linux']}>
<Tabs.Tab>
| Combination | Description |
| --------------- | -------------------------------------------------- |
| `⌘ N` | Create a new thread. |
| `⌘ B` | Toggle collapsible left panel. |
| `⌘ Shift B` | Toggle collapsible right panel. |
| `⌘ ,` | Navigate to the setting page. |
| `Enter` | Send a message. |
| `Shift + Enter` | Insert new line in input box. |
| `Arrow Up` | Navigate to the previous option (within the search dialog). |
| `Arrow Down` | Navigate to the next option (within the search dialog). |
</Tabs.Tab>
<Tabs.Tab>
| Combination | Description |
| --------------- | ---------------------------------------------------------- |
| `Ctrl N` | Create a new thread. |
| `Ctrl B` | Toggle collapsible left panel. |
| `Ctrl Shift B` | Toggle collapsible right panel. |
| `Ctrl ,` | Navigate to the setting page. |
| `Enter` | Send a message. |
| `Shift + Enter` | Insert new line in input box. |
| `Arrow Up` | Navigate to the previous option (within the search dialog). |
| `Arrow Down` | Navigate to the next option (within the search dialog). |
</Tabs.Tab>
<Tabs.Tab>
| Combination | Description |
| --------------- | ---------------------------------------------------------- |
| `Ctrl N` | Create a new thread. |
| `Ctrl B` | Toggle collapsible left panel. |
| `Ctrl Shift B` | Toggle collapsible right panel. |
| `Ctrl ,` | Navigate to the setting page. |
| `Enter` | Send a message. |
| `Shift + Enter` | Insert new line in input box. |
| `Arrow Up` | Navigate to the previous option (within the search dialog). |
| `Arrow Down` | Navigate to the next option (within the search dialog). |
</Tabs.Tab>
</Tabs>
---
import { Callout } from 'nextra/components'
import { SquarePen, Pencil, Ellipsis, Paintbrush, Trash2 } from 'lucide-react'
# Using Threads
Jan organizes your AI conversations into threads, making it easy to track and revisit your interactions. This guide will help you effectively manage your chat history.
## Creating New Thread
1. Click the **New Thread** (<SquarePen width={16} height={16} style={{display:"inline"}}/>) icon at the left of Jan's top navigation
2. Select your preferred model in the **Model Selector** in the input field & start chatting
<br/>
![Create New Thread](./_assets/threads-02.png)
## View Threads History
1. Once you open Jan, the default screen is **Threads**
2. On the **left sidebar**, you can:
- View **Thread List**, scroll through your threads history
- Click any thread to open the full conversation
<br/>
![View Threads](./_assets/threads-01.png)
## Edit Thread Title
1. Navigate to the **Thread** whose title you want to edit in the left sidebar
2. Hover on the thread and click the **three dots** (<Ellipsis width={16} height={16} style={{display:"inline"}}/>) icon
3. Select <Pencil width={16} height={16} style={{display:"inline"}}/> **Edit Title**
4. Add a new title & save
<br/>
![Edit Thread](./_assets/threads-03.png)
## Clean Thread
To remove all messages while keeping the thread & its settings:
1. Navigate to the **Thread** that you want to clean in left sidebar
2. Hover on the thread and click on **three dots** (<Ellipsis width={16} height={16} style={{display:"inline"}}/>) icon
3. Select <Paintbrush width={16} height={16} style={{display:"inline"}}/> **Clean Thread**
<Callout type="info">
This will delete all messages in the thread while preserving thread settings.
</Callout>
<br/>
![Clean Thread](./_assets/threads-04.png)
## Delete Thread
<Callout type="warning">
There's no undo for thread deletion, so make sure you want to remove the thread permanently.
</Callout>
### Delete a specific thread
When you want to completely remove a thread:
1. Navigate to the **Thread** that you want to delete in left sidebar
2. Hover on the thread and click on **three dots** (<Ellipsis width={16} height={16} style={{display:"inline"}}/>) icon
3. Select <Trash2 width={16} height={16} style={{display:"inline"}}/> **Delete Thread**
<br/>
![Delete Thread](./_assets/threads-05.png)
### Delete all threads at once
In case you need to remove all threads at once, you'll need to manually delete the `threads` folder:
1. Open the [Jan Data Folder](/docs/settings#access-the-jan-data-folder)
2. Delete the `threads` folder
3. Restart Jan
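On macOS, for example, the manual cleanup above can be done from a terminal. The path assumes the default Jan Data Folder location — adjust it if you relocated your data, and consider backing up first:

```shell
# Removes all thread history — irreversible.
# Path assumes the default macOS data folder; adjust if you moved your Jan Data Folder.
THREADS_DIR="$HOME/Library/Application Support/jan/data/threads"
rm -rf "$THREADS_DIR" && echo "Removed: $THREADS_DIR"
```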
import { Callout, Steps } from 'nextra/components'
# Knowledge Retrieval
Chat with your documents and images using Jan's RAG (Retrieval-Augmented Generation) capability.
<Callout type="warning">
This feature is currently experimental and must be enabled through [Experimental Mode](/docs/settings#experimental-mode) in **Advanced Settings**.
</Callout>
## Enable File Search & Vision
To chat with PDFs using RAG in Jan, follow these steps:
1. In any **Thread**, click the **Tools** tab in the right sidebar
2. Enable **Retrieval**
<br/>
![Retrieval](../_assets/retrieval-01.png)
<br/>
3. Once enabled, you can **upload files & images** from the thread input field
<Callout type="info">
Ensure that you are using a multimodal model.
- File Search: Jan currently supports PDF format
- Vision: only works with local models or [OpenAI](/docs/remote-models/openai) models for now
</Callout>
<br/>
![Retrieval](../_assets/retrieval-02.png)
<br/>
## Knowledge Retrieval Parameters
| Feature | Description |
|---------|-------------|
| **Chunk Size** | - Sets the maximum number of tokens per data chunk, which is crucial for managing processing load and maintaining performance.<br></br>- Increase the chunk size for processing large blocks of text efficiently, or decrease it when dealing with smaller, more manageable texts to optimize memory usage. |
| **Chunk Overlap** | - Specifies the overlap in tokens between adjacent chunks to ensure continuous context in split text segments.<br></br>- Adjust the overlap to ensure smooth transitions in text analysis, with higher overlap for complex texts where context is critical. |
| **Retrieval Template** | - Defines the query structure using variables like `{CONTEXT}` and `{QUESTION}` to tailor searches to specific needs.<br></br>- Customize templates to closely align with your data's structure and the queries' nature, ensuring that retrievals are as relevant as possible. |
---
import { Tabs } from 'nextra/components'
import { Callout, Steps } from 'nextra/components'
# Troubleshooting
## How to Get Error Logs
Error logs are essential for troubleshooting issues and getting help from the Jan team. To get error logs from Jan, follow the steps below:
#### Through Jan Interface
1. Open **System Monitor** in the footer
2. Choose **App Log**
<br/>
![App log](./_assets/trouble-shooting-02.png)
<br/>
#### Through Terminal
**Application Logs**
```bash
tail -n 50 ~/Library/Application\ Support/jan/data/logs/app.log
```
**Server Logs**
```bash
tail -n 50 ~/Library/Application\ Support/jan/data/logs/cortex.log
```
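Before sharing, you can scrub obvious personal details from a log excerpt. A minimal sketch — the log line below is invented for illustration, and you should extend the `sed` patterns for your own environment:

```shell
# Hypothetical log excerpt — replace the home directory and username with placeholders
USER_NAME="jan"
HOME_DIR="/home/jan"
LINE="model load failed for ${HOME_DIR}/models/llama.gguf (user: ${USER_NAME})"
REDACTED=$(echo "$LINE" | sed -e "s|${HOME_DIR}|<HOME>|g" -e "s|${USER_NAME}|<USER>|g")
echo "$REDACTED"
```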
<Callout type="warning">
Make sure to redact any private or sensitive information when sharing logs or error details. We retain your logs for only 24 hours.
</Callout>
## Broken Build
To resolve the issue where Jan is stuck in a broken build after installation:
<Tabs items={['Mac', 'Windows', 'Linux']}>
<Tabs.Tab>
1. **Uninstall** Jan
2. **Delete** Application Data, Cache, and User Data:
```zsh
rm -rf ~/Library/Application\ Support/Jan
```
3. If you are using a version before `0.4.2`, you need to run the following commands: 3. If you are using a version before `0.4.2`, you need to run the following commands:
kill -9 <PID>
```
4. **Download** the [latest version of Jan](/download)
</Tabs.Tab>
<Tabs.Tab>
1. **Uninstall** Jan, using the [Windows Control Panel](https://support.microsoft.com/en-us/windows/uninstall-or-remove-apps-and-programs-in-windows-4b55f974-2cc6-2d2b-d092-5905080eaf98)
2. **Delete** Application Data, Cache, and User Data:
```bash
cd C:\Users\%USERNAME%\AppData\Roaming
rmdir /S jan
```
taskkill /F /PID <PID>
```
4. **Download** the [latest version of Jan](/download)
</Tabs.Tab>
<Tabs.Tab>
1. **Uninstall** Jan
To uninstall Jan, use your package manager's uninstall or remove option. Choose the appropriate method based on how you installed Jan:
**For Debian/Ubuntu:**
```bash
sudo apt-get remove jan
```
**For Others:** Delete the Jan `.AppImage` file from your system. To remove all associated user data afterwards, also delete the user data at `~/jan`.
2. **Delete** Application Data, Cache, and User Data:
```bash
# Default directory
rm -rf ~/.config/Jan
# Custom installation directory (if you set $XDG_CONFIG_HOME, e.g. /home/username/custom_config)
rm -rf "$XDG_CONFIG_HOME/Jan"
```
3. If you are using a version before `0.4.2`, you need to run the following commands: 3. If you are using a version before `0.4.2`, you need to run the following commands:
kill -9 <PID>
```
4. **Download** the [latest version of Jan](/download)
</Tabs.Tab>
</Tabs>
<Callout type="info">
Following these steps, you can cleanly uninstall and reinstall Jan, ensuring a smooth setup.
</Callout>
## Troubleshooting NVIDIA GPU
To resolve issues when Jan does not utilize the NVIDIA GPU on Windows and Linux systems:
<Steps>
### Step 1: Verify Hardware and System Requirements
#### 1.1. Check GPU Detection
First, verify that your system recognizes the NVIDIA GPU:
**Windows:**
- Right-click desktop → NVIDIA Control Panel
- Or check Device Manager → Display Adapters
**Linux:**
```
lspci | grep -i nvidia
```
#### 1.2. Install Required Components
**NVIDIA Driver:**
1. Install [NVIDIA Driver](https://www.nvidia.com/en-us/drivers/) for your GPU (NVIDIA driver **470.63.01 or higher**).
2. Verify installation:
```
nvidia-smi
```
Expected output should show your GPU model and driver version.
**CUDA Toolkit:**
1. Download and install the [CUDA toolkit](https://developer.nvidia.com/cuda-downloads) (**CUDA 11.7 or higher**)
2. Verify installation:
```
nvcc --version
```
**Linux Additional Requirements:**
1. Ensure the required packages are installed:
```
sudo apt update
sudo apt install gcc-11 g++-11 cpp-11
```
See [detailed instructions](https://gcc.gnu.org/projects/cxx-status.html#cxx17).
2. Set up the CUDA environment:
```
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
```
See [detailed instructions](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions).
<Callout type="info">
Ensure your (V)RAM is accessible; some users with virtual RAM may require additional configuration.
</Callout>
### Step 2: Turn on GPU Acceleration
Jan manages GPU usage automatically:
- Switches to GPU mode when supported
- Automatically selects the GPU with the highest VRAM
To verify GPU acceleration is turned on:
1. Open **Settings** > **Hardware**
2. Verify that **GPU Acceleration** is turned on
3. Verify your selected GPU(s) are visible in **System Monitor** from Jan's footer
<br/>
![Hardware](./_assets/trouble-shooting-01.png)
<br/>
### Step 3: GPU Settings Check
1. Go to **Settings** > **Advanced Settings** > open the **Jan Data Folder**
2. Open the **Settings** folder
3. Open the `settings.json` file
Example `settings.json`:
```json
{
  "notify": true,
  "run_mode": "gpu",
@ -256,83 +246,59 @@ If you encounter an error message indicating that loading a model requires addit
"gpu_highest_vram": "0" "gpu_highest_vram": "0"
} }
``` ```
**Key Configuration Values:**
- `run_mode`: Should be "gpu" for GPU acceleration
- `nvidia_driver`: Shows driver status and version
- `cuda`: Shows CUDA toolkit status and version
- `gpus`: Lists available GPUs and their VRAM (in MB)
- `gpu_highest_vram`: ID of GPU with most VRAM
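To inspect these fields without opening an editor, a sketch like the following may help (uses only the Python standard library; `~/jan/data` is the default data folder and may differ if you relocated it):

```shell
# Sketch: print the GPU-related fields from settings.json.
cfg="$HOME/jan/data/settings/settings.json"
if [ -f "$cfg" ]; then
  python3 - "$cfg" <<'EOF'
import json, sys

with open(sys.argv[1]) as f:
    settings = json.load(f)
# Print only the keys relevant to GPU acceleration.
for key in ("run_mode", "nvidia_driver", "cuda", "gpus", "gpu_highest_vram"):
    print(f"{key}: {settings.get(key)}")
EOF
else
  echo "settings.json not found at $cfg"
fi
```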
##### Troubleshooting Tips
- Ensure the `nvidia_driver` and `cuda` fields show that the corresponding software is installed.
- If the `gpus` field is empty or missing your GPU, check your NVIDIA driver and CUDA toolkit installations.
- For further assistance, share your `settings.json` file when reporting the issue.
#### 4. Restart Jan
Restart the Jan application to apply the changes.
#### Tested Configurations
These configurations have been verified to work with Jan's GPU acceleration. You can use them as reference points for your setup.

**Bare Metal Installations**

Windows 11 Pro (64-bit)
| Component | Version/Model |
|-----------|---------------|
| GPU | NVIDIA GeForce RTX 4070 Ti |
| CUDA | 12.2 |
| NVIDIA Driver | 531.18 |
| OS | Windows 11 Pro 64-bit |
| RAM | 32 GB |

Ubuntu 22.04 LTS
| Component | Version/Model |
|-----------|---------------|
| GPU | NVIDIA GeForce RTX 4070 Ti |
| CUDA | 12.2 |
| NVIDIA Driver | 545 |
| OS | Ubuntu 22.04 LTS |

**Virtual Machine Setups**

Ubuntu on Proxmox VM (GPU passthrough)
| Component | Version/Model |
|-----------|---------------|
| GPU | NVIDIA GeForce GTX 1660 Ti |
| CUDA | 12.1 |
| NVIDIA Driver | 535 |
| OS | Ubuntu 20.04/18.04 LTS |
| VM Type | Proxmox |

**Performance Notes**
- Bare metal installations provide better performance than virtual machines
- VM setups require proper GPU passthrough configuration
- Some laptop GPUs may have reduced performance
- Hybrid graphics (NVIDIA Optimus) may need additional configuration
#### Common Issues and Solutions
1. If the issue persists, install the [Nightly version](/guides/quickstart/#nightly-releases).
2. Ensure your (V)RAM is accessible; some users with virtual RAM may require additional configuration.
3. Seek assistance in [Jan Discord](https://discord.gg/mY69SZaMaC).
## How to Get Error Logs
To get the error logs of your Jan application, follow the steps below:
#### Jan Application
1. Navigate to the main dashboard.
2. Click the **gear icon (⚙️)** at the bottom left of your screen.
3. On the **Settings** screen, click **Advanced Settings**.
4. Next to **Jan Data Folder**, click the **folder icon (📂)** to open the data folder.
5. Click the **logs** folder.
#### Jan UI
1. Open your Unix or Linux terminal.
2. Use the following command to view the most recent 50 lines of the log file:
```bash
tail -n 50 ~/jan/data/logs/app.log
```
#### Jan API Server
1. Open your Unix or Linux terminal.
2. Use the following command to view the most recent 50 lines of the log file:
```bash
tail -n 50 ~/jan/data/logs/server.log
```
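When reporting an issue, it can help to extract only the error lines instead of sharing the full logs. A small sketch (using the default log paths shown above):

```shell
# Sketch: collect only error/failure lines from both default log files.
# Redact anything sensitive before sharing the result.
grep -ihE "error|fail" ~/jan/data/logs/app.log ~/jan/data/logs/server.log 2>/dev/null \
  | tail -n 50
```

`-i` matches case-insensitively, `-h` drops the filename prefix, and `tail -n 50` keeps the output short.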
<Callout type="warning">
Make sure to redact any private or sensitive information before sharing logs or error details.
</Callout>
## Permission Denied
`Error EACCES: permission denied, mkdtemp '/Users/username/.npm/_cacache/tmp/ueCM`
This error is mainly caused by permission problems during installation. To resolve it, follow these steps:
1. Open your terminal.
2. Execute the following command to change ownership of the `~/.npm` directory to the current user:
```bash
sudo chown -R $(whoami) ~/.npm
```
<Callout type="info">
This command ensures that the necessary permissions are granted for Jan's installation, resolving the encountered error.
</Callout>
## "Failed to Fetch" or "Something's Amiss" Errors
When you start a chat with a model and encounter a **Failed to Fetch** or **Something's Amiss** error, here are some possible solutions:

**1. Check System & Hardware Requirements**
- Hardware: Ensure your device meets all [hardware requirements](/docs/troubleshooting#step-1-verify-hardware-and-system-requirements)
- OS: Ensure your operating system meets the minimum requirements ([Mac](/docs/desktop/mac#minimum-requirements), [Windows](/docs/desktop/windows#compatibility), [Linux](/docs/desktop/linux#compatibility))
- RAM: Choose models that use less than 80% of your available RAM
  - For 8GB systems: use models under 6GB
  - For 16GB systems: use models under 13GB
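To apply the 80% RAM guideline, a quick estimate of the largest safe model size can be computed like this (Linux only, since it reads `MemTotal` from `/proc/meminfo`):

```shell
# Sketch: estimate the largest safe model size as 80% of physical RAM.
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo 2>/dev/null)
if [ -n "$mem_kb" ]; then
  # MemTotal is in kB; convert to GB and take 80%.
  awk -v kb="$mem_kb" 'BEGIN { printf "pick models under %.1f GB\n", kb/1024/1024*0.8 }'
else
  echo "/proc/meminfo not available (non-Linux system)"
fi
```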
**2. Check Model Parameters**
- In **Engine Settings** in the right sidebar, check whether your `ngl` ([number of GPU layers](/docs/models/model-parameters#engine-parameters)) setting is too high
- Start with a lower `ngl` value and increase it gradually based on your GPU memory
**3. Port Conflicts**
If you check your [app logs](/docs/troubleshooting#how-to-get-error-logs) and see `Bind address failed at 127.0.0.1:39291`, check port availability:
```bash
# Mac
netstat -an | grep 39291
# Windows
netstat -ano | find "39291"
tasklist /fi "PID eq 39291"
# Linux
netstat -anpe | grep "39291"
```
<Callout type="info">
`netstat` displays the contents of various network-related data structures for active connections.
</Callout>
Default Jan ports:
- Jan and Cortex API Server: `1337`
- Jan Documentation: `3001`
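To see whether another process is already holding Jan's default API port, a sketch like this may help (`lsof` is commonly available on macOS and Linux but is not guaranteed to be installed, hence the `netstat` fallback):

```shell
# Sketch: check whether anything is listening on Jan's default API port.
if command -v lsof >/dev/null; then
  lsof -nP -iTCP:1337 -sTCP:LISTEN || echo "port 1337 is free"
else
  netstat -an | grep 1337 || echo "port 1337 is free"
fi
```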
**4. Factory Reset**
A factory reset can resolve persistent issues by returning Jan to its original state. This will remove all custom settings, downloaded models, and chat history.
1. Go to **Settings** > **Advanced Settings**
2. At **Reset To Factory Settings**, click **Reset**
<Callout type="warning">
This will delete all chat history, models, and settings.
</Callout>
**5. Try a Clean Installation**
- Uninstall Jan and clean the Jan data folders ([Mac](/docs/desktop/mac#uninstall-jan), [Windows](/docs/desktop/windows#uninstall-jan), [Linux](/docs/desktop/linux#uninstall-jan))
- Install the latest [stable release](/download)

<Callout type="warning">
This will delete all your Jan data.
</Callout>
## OpenAI Unexpected Token Issue
The "Unexpected token" error usually relates to OpenAI API authentication or regional restrictions.

**Step 1: API Key Setup**
1. Get a valid API key from [OpenAI's developer platform](https://platform.openai.com/)
2. Ensure the key has sufficient credits and the appropriate permissions

**Step 2: Regional Access**
1. If you're in a region with restricted access, use a VPN service from a supported region
2. Verify that your network can reach OpenAI's API endpoints
## Need Further Support?
If you can't find what you need in this troubleshooting guide, feel free to reach out to us for extra help:
- **Copy** your [app logs](/docs/troubleshooting#how-to-get-error-logs)
- Go to our [Discord](https://discord.com/invite/FTk2MvZwJH) and share them in the **#🆘|get-help** channel for further support.
<Callout type="info">
Check the logs to make sure the information is what you intend to send. We retain your logs for only **24 hours**, so report any issues promptly.
</Callout>