docs: removed duplicate guides section
@ -1,61 +0,0 @@

---
title: Overview
slug: /guides
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
  ]
---

The following docs are aimed at end users who want to troubleshoot or learn how to use the **Jan Desktop** application better.

:::tip
If you are interested in building extensions, please refer to the [developer docs](/developer) instead (WIP).

If you are interested in contributing to the underlying framework, please refer to the [framework docs](/docs) instead.
:::

## Jan Desktop

The desktop client is a ChatGPT alternative that runs on your own computer, with a [local API server](/guides/using-server).

## Features

- Compatible with [open-source models](/guides/using-models) (GGUF via [llama.cpp](https://github.com/ggerganov/llama.cpp), TensorRT via [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM), and [remote APIs](https://platform.openai.com/docs/api-reference))
- Compatible with most OSes: [Windows](/install/windows/), [Mac](/install/mac), and [Linux](/install/linux), with GPU acceleration through [llama.cpp](https://github.com/ggerganov/llama.cpp)
- Stores data in [open file formats](/developer/file-based)
- Local API [server mode](/guides/using-server)
- Customizable via [extensions](/developer/build-extension)
- And more on the [roadmap](https://github.com/orgs/janhq/projects/5/views/16). Join us on [Discord](https://discord.gg/5rQ2zTv3be) and tell us what you want to see!

## Why Jan?

We believe in the need for an open-source AI ecosystem.

We're focused on building infrastructure, tooling, and [custom models](https://huggingface.co/janhq) to allow open-source AIs to compete on a level playing field with proprietary offerings.

Read more about our mission and culture [here](/about).

#### 💻 Own your AI

Jan runs 100% on your own machine: predictably, privately, and offline. No one else can see your conversations, not even us.

#### 🏗️ Extensions

Jan ships with a local-first, AI-native, and cross-platform [extensions framework](/developer/build-extension). Developers can extend and customize everything from functionality to UI to branding. In fact, Jan's current main features are actually built as extensions on top of this framework.

#### 🗂️ Open File Formats

Jan stores data in your [local filesystem](/developer/file-based). Your data never leaves your computer. You are free to delete, export, or migrate your data, even to a different platform.

#### 🌍 Open Source

Both Jan and [Nitro](https://nitro.jan.ai), our lightweight inference engine, are licensed under the open-source [AGPLv3 license](https://github.com/janhq/jan/blob/main/LICENSE).

@ -1,98 +0,0 @@

---
title: Mac
slug: /install/mac
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    installation guide,
  ]
---

# Installing Jan on macOS

## System Requirements

Ensure that your macOS version is 13 or higher to run Jan.

## Installation

Jan is available for download via our homepage, [https://jan.ai/](https://jan.ai/).

For macOS, the download is available as a `.dmg` file in the following format:

```bash
# Intel Mac
jan-mac-x64-{version}.dmg

# Apple Silicon Mac
jan-mac-arm64-{version}.dmg
```

The typical installation process takes around a minute.

## GitHub Releases

Jan is also available from [Jan's GitHub Releases](https://github.com/janhq/jan/releases) page, with a recommended [latest stable release](https://github.com/janhq/jan/releases/latest).

Within the release assets, you will find the following files for macOS:

```bash
# Intel Mac (dmg file and zip file)
jan-mac-x64-{version}.dmg
jan-mac-x64-{version}.zip

# Apple Silicon Mac (dmg file and zip file)
jan-mac-arm64-{version}.dmg
jan-mac-arm64-{version}.zip
```

## Uninstall Jan

As Jan is in active development, you might get stuck on a broken build. To reset your installation:

1. Delete Jan from your `/Applications` folder
2. Delete the application data:

```bash
# Newer versions
rm -rf ~/Library/Application\ Support/jan

# Versions 0.2.0 and older
rm -rf ~/Library/Application\ Support/jan-electron
```

3. Clear the application cache:

```bash
rm -rf ~/Library/Caches/jan*
```

4. Use the following command to find any dangling backend processes:

```bash
ps aux | grep nitro
```

Look for processes like "nitro" and "nitro_arm_64", and kill them one by one with:

```bash
kill -9 <PID>
```
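
If you prefer a single command, `pkill` (which ships with macOS) can match and kill the processes by name; this is an equivalent shortcut for the two steps above:

```bash
# Force-kill every process whose command line matches "nitro"
pkill -9 -f nitro
```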

## Common Questions

### Does Jan run on Apple Silicon machines?

Yes, Jan supports macOS Arm64 builds that run on Macs with Apple Silicon chips. You can install Jan on your Apple Silicon Mac by downloading the `jan-mac-arm64-<version>.dmg` file from [Jan's homepage](https://jan.ai/).

### Which package should I download for my Mac?

Jan supports both Intel and Apple Silicon Macs. To find the appropriate package for your Mac, please follow this official guide from Apple: [Get system information about your Mac - Apple Support](https://support.apple.com/guide/mac-help/syspr35536/mac).
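
You can also check the architecture from a terminal:

```bash
# Prints "arm64" on Apple Silicon Macs and "x86_64" on Intel Macs
uname -m
```
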
@ -1,73 +0,0 @@

---
title: Windows
slug: /install/windows
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    installation guide,
  ]
---

# Installing Jan on Windows

## System Requirements

Ensure that your system meets the following requirements:

- Windows 10 or higher is required to run Jan.

To enable GPU support, you will need:

- An NVIDIA GPU with CUDA Toolkit 11.7 or higher
- NVIDIA driver 470.63.01 or higher
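
To confirm that your GPU setup meets these requirements, you can run NVIDIA's bundled utility (assuming the driver is already installed); the header of its output reports the driver and CUDA versions:

```bash
nvidia-smi
```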

## Installation

Jan is available for download via our homepage, [https://jan.ai](https://jan.ai/).

For Windows, the download is available as a `.exe` file in the following format:

```bash
jan-win-x64-{version}.exe
```

The typical installation process takes around a minute.

### GitHub Releases

Jan is also available from [Jan's GitHub Releases](https://github.com/janhq/jan/releases) page, with a recommended [latest stable release](https://github.com/janhq/jan/releases/latest).

Within the release assets, you will find the following files for Windows:

```bash
# Windows Installers
jan-win-x64-{version}.exe
```

### Default Installation Directory

By default, Jan is installed in the following directory:

```bash
# Default installation directory
C:\Users\{username}\AppData\Local\Programs\Jan
```

## Uninstalling Jan

To uninstall Jan on Windows, use the [Windows Control Panel](https://support.microsoft.com/en-us/windows/uninstall-or-remove-apps-and-programs-in-windows-4b55f974-2cc6-2d2b-d092-5905080eaf98).

To remove all user data associated with Jan, you can delete the `/jan` directory in Windows' [AppData directory](https://superuser.com/questions/632891/what-is-appdata):

```bash
cd C:\Users\%USERNAME%\AppData\Roaming
rmdir /S jan
```

@ -1,94 +0,0 @@

---
title: Linux
slug: /install/linux
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    installation guide,
  ]
---

# Installing Jan on Linux

## System Requirements

Ensure that your system meets the following requirements:

- glibc 2.27 or higher (check with `ldd --version`)
- gcc 11, g++ 11, cpp 11, or higher; refer to this [link](https://jan.ai/guides/troubleshooting/gpu-not-used/#specific-requirements-for-linux) for more information.

To enable GPU support, you will need:

- An NVIDIA GPU with CUDA Toolkit 11.7 or higher
- NVIDIA driver 470.63.01 or higher
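
You can verify these requirements from a terminal. The commands below are a sketch that assumes the GNU toolchain is installed and, for the GPU check, that the NVIDIA driver is present:

```bash
# glibc version (should be 2.27 or higher)
ldd --version | head -n 1

# Compiler versions (should be 11 or higher)
gcc --version | head -n 1
g++ --version | head -n 1

# NVIDIA driver version (GPU setups only; should be 470.63.01 or higher)
nvidia-smi --query-gpu=driver_version --format=csv,noheader
```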

## Installation

Jan is available for download via our homepage, [https://jan.ai](https://jan.ai/).

For Linux, the download is available as a `.AppImage` file or a `.deb` file in the following formats:

```bash
# AppImage
jan-linux-x86_64-{version}.AppImage

# Debian Linux distribution
jan-linux-amd64-{version}.deb
```

To install the `.deb` package on Debian/Ubuntu-based distributions, use `dpkg` or your package manager:

```bash
# Install Jan using dpkg
sudo dpkg -i jan-linux-amd64-{version}.deb

# Install Jan using apt-get
sudo apt-get install ./jan-linux-amd64-{version}.deb
# where jan-linux-amd64-{version}.deb is the path to the Jan package
```

For other Linux distributions, you can launch the AppImage file without installation. To do so, make the AppImage file executable and then run it, either through your file manager's properties dialog or with the following commands:

```bash
# Run Jan using the AppImage
chmod +x jan-linux-x86_64-{version}.AppImage
./jan-linux-x86_64-{version}.AppImage
# where jan-linux-x86_64-{version}.AppImage is the path to the Jan package
```

The typical installation process takes around a minute.

### GitHub Releases

Jan is also available from [Jan's GitHub Releases](https://github.com/janhq/jan/releases) page, with a recommended [latest stable release](https://github.com/janhq/jan/releases/latest).

Within the release assets, you will find the following files for Linux:

```bash
# Debian Linux distribution
jan-linux-amd64-{version}.deb

# AppImage
jan-linux-x86_64-{version}.AppImage
```

## Uninstall Jan

To uninstall Jan on Linux, use your package manager's uninstall or remove option. For Debian/Ubuntu-based distributions, if you installed Jan via the `.deb` package, you can uninstall it with:

```bash
sudo apt-get remove jan
# where jan is the name of the Jan package
```

For other Linux distributions, if you installed Jan via the `.AppImage` file, you can uninstall Jan by deleting the `.AppImage` file.

If you wish to completely remove all user data associated with Jan after uninstallation, you can delete the user data folder located at `~/jan`. This returns your system to its state prior to the installation of Jan, and can also be used to reset all settings if you are experiencing issues with Jan.
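
For example:

```bash
# Remove all Jan user data (irreversible; back up anything you want to keep first)
rm -rf ~/jan
```
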
@ -1,91 +0,0 @@

---
title: From Source
slug: /install/from-source
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
  ]
---

# Installing Jan from Source

## Installation

### Prerequisites

Before proceeding with the installation of Jan from source, ensure that the following software versions are installed on your system:

- Node.js version 20.0.0 or higher
- Yarn version 1.22.0 or higher

### Instructions

:::note

These instructions have been tested on macOS only.

:::

1. Clone the Jan repository from GitHub:

```bash
git clone https://github.com/janhq/jan
cd jan
git checkout DESIRED_BRANCH
```

2. Install the required dependencies using Yarn:

```bash
yarn install

# Build core module
yarn build:core

# Package base plugins
yarn build:plugins

# Package uikit
yarn build:uikit
```

3. Run the development build and use Jan:

```bash
yarn dev
```

This will start the development server and open the desktop app. During this step, you may encounter notifications about installing base plugins. Simply click `OK` and `Next` to continue.

#### For production build

Build the app for production (for example, for macOS M1/M2) and place the result in the `dist` folder:

```bash
# Do steps 1 and 2 in the previous section
git clone https://github.com/janhq/jan
cd jan
yarn install

# Build core module
yarn build:core

# Package base plugins
yarn build:plugins

# Package uikit
yarn build:uikit

# Build the app
yarn build
```

This completes the installation process for Jan from source. The production-ready app for macOS can be found in the `dist` folder.

@ -1,123 +0,0 @@

---
title: Docker
slug: /install/docker
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    docker installation,
    cpu mode,
    gpu mode,
  ]
---

# Installing Jan using Docker

### Prerequisites

:::note

**Supported OS**: Linux, WSL2 Docker

:::

- Docker Engine and Docker Compose are required to run Jan in Docker mode. Follow the official [instructions](https://docs.docker.com/engine/install/ubuntu/) to get started with Docker Engine on Ubuntu:

```bash
curl -fsSL https://get.docker.com -o get-docker.sh
# Preview the installation steps first
sudo sh ./get-docker.sh --dry-run
# Then run the installer
sudo sh ./get-docker.sh
```

- If you intend to run Jan in GPU mode, you also need to install `nvidia-driver` and `nvidia-docker2`. Follow the instructions [here](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) for installation.

### Run Jan in Docker Mode

| Docker Compose Profile | Description                                  |
| ---------------------- | -------------------------------------------- |
| `cpu-fs`               | Run Jan in CPU mode with default file system |
| `cpu-s3fs`             | Run Jan in CPU mode with S3 file system      |
| `gpu-fs`               | Run Jan in GPU mode with default file system |
| `gpu-s3fs`             | Run Jan in GPU mode with S3 file system      |

| Environment Variable    | Description                                                                                        |
| ----------------------- | -------------------------------------------------------------------------------------------------- |
| `S3_BUCKET_NAME`        | S3 bucket name - leave blank for default file system                                               |
| `AWS_ACCESS_KEY_ID`     | AWS access key ID - leave blank for default file system                                            |
| `AWS_SECRET_ACCESS_KEY` | AWS secret access key - leave blank for default file system                                        |
| `AWS_ENDPOINT`          | AWS endpoint URL - leave blank for default file system                                             |
| `AWS_REGION`            | AWS region - leave blank for default file system                                                   |
| `API_BASE_URL`          | Jan server URL; set it to your public IP address or domain name (default: `http://localhost:1377`) |
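
As an illustration, an S3 setup might export these variables before starting the stack. The bucket name, credentials, and endpoint below are placeholders, and exporting them in the shell assumes your compose file reads them from the environment:

```bash
# Hypothetical S3 configuration; replace every value with your own
export S3_BUCKET_NAME=my-jan-bucket
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_ENDPOINT=https://s3.amazonaws.com
export AWS_REGION=us-east-1
export API_BASE_URL=http://localhost:1377

docker compose --profile cpu-s3fs up -d
```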

- **Option 1**: Run Jan in CPU mode

  ```bash
  # CPU mode with default file system
  docker compose --profile cpu-fs up -d

  # CPU mode with S3 file system
  docker compose --profile cpu-s3fs up -d
  ```

- **Option 2**: Run Jan in GPU mode

  - **Step 1**: Check CUDA compatibility with your NVIDIA driver by running `nvidia-smi` and checking the CUDA version in the output:

    ```bash
    nvidia-smi

    # Output
    +---------------------------------------------------------------------------------------+
    | NVIDIA-SMI 531.18                 Driver Version: 531.18       CUDA Version: 12.1     |
    |-----------------------------------------+----------------------+----------------------+
    | GPU  Name                      TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf           Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
    |                                         |                      |               MIG M. |
    |=========================================+======================+======================|
    |   0  NVIDIA GeForce RTX 4070 Ti    WDDM | 00000000:01:00.0  On |                  N/A |
    |  0%   44C    P8              16W / 285W | 1481MiB / 12282MiB   |      2%      Default |
    |                                         |                      |                  N/A |
    +-----------------------------------------+----------------------+----------------------+
    |   1  NVIDIA GeForce GTX 1660 Ti    WDDM | 00000000:02:00.0 Off |                  N/A |
    |  0%   49C    P8              14W / 120W |    0MiB /  6144MiB   |      0%      Default |
    |                                         |                      |                  N/A |
    +-----------------------------------------+----------------------+----------------------+
    |   2  NVIDIA GeForce GTX 1660 Ti    WDDM | 00000000:05:00.0 Off |                  N/A |
    | 29%   38C    P8              11W / 120W |    0MiB /  6144MiB   |      0%      Default |
    |                                         |                      |                  N/A |
    +-----------------------------------------+----------------------+----------------------+

    +---------------------------------------------------------------------------------------+
    | Processes:                                                                            |
    |  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
    |        ID   ID                                                             Usage      |
    |=======================================================================================|
    ```

  - **Step 2**: Visit the [NVIDIA NGC Catalog](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/cuda/tags) and find the smallest minor version of the image tag that matches your CUDA version (e.g., 12.1 -> 12.1.0).

  - **Step 3**: Update line 5 of `Dockerfile.gpu` with the image tag from Step 2 (e.g., change `FROM nvidia/cuda:12.2.0-runtime-ubuntu22.04 AS base` to `FROM nvidia/cuda:12.1.0-runtime-ubuntu22.04 AS base`).

  - **Step 4**: Run the command to start Jan in GPU mode:

    ```bash
    # GPU mode with default file system
    docker compose --profile gpu-fs up -d

    # GPU mode with S3 file system
    docker compose --profile gpu-s3fs up -d
    ```

This will start the web server, and you can access Jan at `http://localhost:3000`.

:::warning

- The RAG feature is not yet supported in Docker mode with s3fs.

:::

@ -1,56 +0,0 @@

---
title: Hardware Requirements
slug: /guides/install/hardware
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
  ]
---

Jan is designed to be lightweight and able to run Large Language Models (LLMs) out of the box.

The app itself is a download of less than 150 MB and takes up ~300 MB of disk space.

To ensure optimal performance, please review the following system requirements:

## Disk Space

- Minimum requirement
  - At least 5 GB of free disk space is required to accommodate the download, storage, and management of open-source LLM models.
- Recommended
  - For an optimal experience and to run most available open-source LLM models on Jan, it is recommended to have 10 GB of free disk space.

## RAM and GPU VRAM

The amount of RAM on your system plays a crucial role in determining the size and complexity of the LLM models you can effectively run. Jan can be used on traditional computers, where RAM is the key resource. For enhanced performance, Jan also supports GPU acceleration, using the VRAM of your graphics card.

## Best Models for your V/RAM

The RAM and GPU VRAM requirements depend on the size and complexity of the LLM models you intend to run. The following general guidelines will help you determine the amount of RAM or VRAM you need to run LLM models on Jan:

- `8 GB of RAM`: Suitable for running smaller models, like 3B models or quantized 7B models
- `16 GB of RAM (recommended)`: This is considered the "minimum usable" threshold for most models, particularly 7B models (e.g., Mistral 7B)
- `Beyond 16 GB of RAM`: Required for handling larger and more sophisticated models, such as 70B models
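
Before choosing a model, you can check how much memory your machine has from a terminal; the GPU query below assumes an NVIDIA card with the driver installed:

```bash
# System RAM (Linux)
free -h

# System RAM in bytes (macOS)
sysctl -n hw.memsize

# GPU VRAM (NVIDIA)
nvidia-smi --query-gpu=memory.total --format=csv
```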

## Architecture

Jan is designed to run on multiple architectures, ensuring versatility and widespread usability. The supported architectures include:

### CPU Support

- `x86`: Jan is well-suited for systems with the x86 architecture, which is commonly found in traditional desktops and laptops. It ensures smooth performance on a variety of devices using x86 processors.
- `ARM`: Jan is optimized to run efficiently on ARM-based systems, extending compatibility to a broad range of devices using ARM processors.

### GPU Support

- `NVIDIA`
- `AMD`
- `ARM64 Mac`

@ -1,49 +0,0 @@

---
title: Nightly Release
slug: /install/nightly
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    nightly release,
  ]
---

:::warning

- Nightly releases are cutting-edge versions that include the latest features. However, they are highly unstable and may contain bugs.

:::

## Where to Find Nightly Releases

- **Jan's GitHub Repository**: Visit the [Download section](https://github.com/janhq/jan?tab=readme-ov-file#download) of Jan's GitHub repository for the latest nightly release.

- **Discord Channel**: Nightly releases are also announced in our [Discord channel](https://discord.com/channels/1107178041848909847/1191638499355537418).

## Automatic Updates

Once you install a nightly build, the application will automatically prompt you to update each time it is restarted, ensuring you always have the latest version.

## Running Stable and Nightly Versions Simultaneously

If you wish to use both the stable and nightly versions of Jan, follow these steps:

1. Install the stable version as usual.
2. For the nightly build, choose a different installation directory to avoid conflicts.
3. Clearly label or create shortcuts for each version to avoid confusion.

<br></br>

:::tip

- Engage with [our community on Discord](https://discord.gg/Dt7MxDyNNZ) to share feedback or get support for any issues you encounter.

:::

@ -1,31 +0,0 @@

---
title: Antivirus Testing
slug: /guides/install/antivirus-compatibility-testing
description: Antivirus compatibility testing documentation
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    antivirus compatibility,
  ]
---

As part of our release process, we run antivirus compatibility tests for Jan v0.4.4 and onwards. This documentation includes a matrix that correlates the Jan app version with the tested antivirus versions.

## Antivirus Software Tested

The following summarizes the ongoing testing targets:

| Antivirus          | Version      | Target Result                    |
| ------------------ | ------------ | -------------------------------- |
| Bitdefender        | 27.0.27.125  | Scanned and 0 threat(s) detected |
| McAfee             | 4.21.0.0     | Scanned and 0 threat(s) detected |
| Microsoft Defender | 1.403.2259.0 | Scanned and 0 threat(s) detected |

To report issues or false positives, or to request additional testing, please email devops@jan.ai.

@ -1,51 +0,0 @@

---
title: Installation
slug: /install
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
  ]
---

import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";

In this quickstart, we'll show you how to:

- Download the Jan Desktop client - Mac, Windows, Linux (and toaster) compatible
- Download the Nightly (unstable) version
- Build the application from source

## Setup

### Installation

- To download the latest stable release, visit https://jan.ai/, or visit the [GitHub Releases](https://github.com/janhq/jan/releases) page to download any previous release.

- To download a nightly release (highly unstable, but with lots of new features), please check out the [Download section](https://github.com/janhq/jan?tab=readme-ov-file#download) of our repository.

- For a detailed installation guide for your operating system, see the following:

<Tabs groupId="operating-systems">
  <TabItem value="mac" label="macOS">
    [Mac installation guide](/install/mac)
  </TabItem>
  <TabItem value="win" label="Windows">
    [Windows installation guide](/install/windows)
  </TabItem>
  <TabItem value="linux" label="Linux">
    [Linux installation guide](/install/linux)
  </TabItem>
</Tabs>

- To build Jan Desktop from scratch (and have the right to tinker!), see the [Build from Source](/install/from-source) guide.

@ -1,56 +0,0 @@

---
title: Manage Chat History
slug: /guides/chatting/manage-history/
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    manage-chat-history,
  ]
---

Jan offers a convenient and private way to interact with a conversational AI locally on your computer. This guide will walk you through how to manage your chat history with Jan, ensuring your interactions remain private and organized.

## Viewing Chat History

1. Navigate to the main dashboard.
2. Locate the list of threads on the left side of the screen. This list shows all your conversations.
3. Select a thread to view the conversation in the main chat window.
4. Scroll up and down to view the entire chat history in the selected thread.

<br></br>
![](./assets/)

## Managing Threads via Folders

This feature allows you to directly manage your thread history and configurations.

1. Navigate to the thread that you want to manage via the list of threads on the left side of the dashboard.
2. Click the three dots (⋮) in the `Thread` section on the right side of the dashboard. There are two options:

- `Reveal in Finder` opens the folder containing the thread history and configurations.
- `View as JSON` opens the thread's `thread.json` file in your default browser.

<br></br>
![](./assets/)

## Clean Thread

To streamline your conversation view, click the three dots (⋮) on the thread you want to clean, then select `Clean Thread`. This removes all messages from the thread. It is useful if you want to keep the thread settings but remove the messages from the chat window.

<br></br>
![](./assets/)

## Delete Thread

To delete a thread, click the three dots (⋮) on the thread you want to delete, then select `Delete Thread`. This removes the thread from the list of threads.

<br></br>
![](./assets/)

@ -1,23 +0,0 @@

---
title: Chatting
slug: /guides/chatting/
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    chatting,
  ]
---

This guide is designed to help you maximize your experience with Jan, covering everything from starting engaging threads to managing your chat history effectively.

import DocCardList from "@theme/DocCardList";

<DocCardList />
@ -1,51 +0,0 @@

---
title: Install Models from the Hub
slug: /guides/using-models/install-from-hub
description: Guide to install models from the Hub.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    install model,
  ]
---

In this guide, we will walk through the process of installing a **Large Language Model (LLM)** from the Hub.

## Steps to Install Models from the Hub

### 1. Explore and Select a Model

Explore the available LLMs by scrolling through the Hub or using the **Search Bar**.

![](./assets/)

Use the **Filter Button** to narrow the list, for example to the **recommended LLMs**. Recommendations are based on [RAM usage](https://github.com/janhq/jan/issues/1384).

| Name        | Description                             |
| ----------- | --------------------------------------- |
| All Models  | Show all available LLMs                 |
| Recommended | Show the recommended LLMs               |
| Downloaded  | Show the LLMs that have been downloaded |

![](./assets/)

If you want to use a model that is not available in the Hub, you can also [import the model manually](./02-import-manually.mdx).

### 2. Download the Model

Once you've identified the desired LLM, simply click the **Download** button to initiate the download. A progress bar will appear to indicate the download progress.

![](./assets/)

### 3. Use the Model

Once the download is complete, you can start using the model by clicking the **Use** button.

![](./assets/)

@ -1,242 +0,0 @@

---
title: Import Models Manually
slug: /guides/using-models/import-manually
description: Guide to manually import a local model into Jan.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    import-models-manually,
    local model,
  ]
---

:::caution
This is currently under development.
:::

{/* Imports */}
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";

In this section, we will show you how to import a GGUF model from [HuggingFace](https://huggingface.co/), using our latest model, [Trinity](https://huggingface.co/janhq/trinity-v1-GGUF), as an example.

> We are fast shipping a UI to make this easier, but it's a bit manual for now. Apologies.

## Import Models Using Absolute Filepath (version 0.4.7)

Starting from version 0.4.7, Jan can import models using an absolute file path, which allows you to import models from any directory on your computer. Please check the [import models using absolute filepath](../import-models-using-absolute-filepath) guide for more information.

## Manually Importing a Downloaded Model (nightly versions and v0.4.4+)

### 1. Create a Model Folder

Navigate to the `~/jan/models` folder. You can find this folder by going to `App Settings` > `Advanced` > `Open App Directory`.

<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">

```sh
cd ~/jan/models
```

</TabItem>
<TabItem value="win" label="Windows">

```sh
C:/Users/<your_user_name>/jan/models
```

</TabItem>
<TabItem value="linux" label="Linux">

```sh
cd ~/jan/models
```

</TabItem>
</Tabs>

In the `models` folder, create a folder with the name of the model.

<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">

```sh
mkdir trinity-v1-7b
```

</TabItem>
<TabItem value="win" label="Windows">

```sh
mkdir trinity-v1-7b
```

</TabItem>
<TabItem value="linux" label="Linux">

```sh
mkdir trinity-v1-7b
```

</TabItem>
</Tabs>

### 2. Drag & Drop the Model

Drag and drop your model binary into this folder, ensuring the `modelname.gguf` file name matches the folder name, e.g. a file `modelname.gguf` inside the folder `models/modelname`.
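
For example, from a terminal; this sketch assumes the binary was downloaded to `~/Downloads` and renames it to match the folder:

```bash
# Move the downloaded GGUF into the model folder, named after the folder
mv ~/Downloads/trinity-v1.Q4_K_M.gguf ~/jan/models/trinity-v1-7b/trinity-v1-7b.gguf
```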

### 3. Voila

If your model doesn't show up in the Model Selector in conversations, please restart the app.

If that doesn't work, please feel free to join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.

## Manually Importing a Downloaded Model (older versions)

### 1. Create a Model Folder

Navigate to the `~/jan/models` folder. You can find this folder by going to `App Settings` > `Advanced` > `Open App Directory`.

<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">

```sh
cd ~/jan/models
```

</TabItem>
<TabItem value="win" label="Windows">

```sh
C:/Users/<your_user_name>/jan/models
```

</TabItem>
<TabItem value="linux" label="Linux">

```sh
cd ~/jan/models
```

</TabItem>
</Tabs>

In the `models` folder, create a folder with the name of the model.

<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">

```sh
mkdir trinity-v1-7b
```

</TabItem>
<TabItem value="win" label="Windows">

```sh
mkdir trinity-v1-7b
```

</TabItem>
<TabItem value="linux" label="Linux">

```sh
mkdir trinity-v1-7b
```

</TabItem>
</Tabs>

### 2. Create a Model JSON

Jan follows a folder-based, [standard model template](/docs/engineering/models) called a `model.json` to persist the model configurations on your local filesystem.

This means that you can easily reconfigure your models, export them, and share your preferences transparently.

<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">

```sh
cd trinity-v1-7b
touch model.json
```

</TabItem>
<TabItem value="win" label="Windows">

```sh
cd trinity-v1-7b
echo {} > model.json
```

</TabItem>
<TabItem value="linux" label="Linux">

```sh
cd trinity-v1-7b
touch model.json
```

</TabItem>
</Tabs>

Edit `model.json` and include the following configurations:

- Ensure the `id` property matches the folder name you created.
- Ensure the GGUF filename matches the `id` property exactly.
- Ensure the `url` property under `sources` is the direct binary download link ending in `.gguf`. On HuggingFace, you can find the direct links in the `Files and versions` tab.
- Ensure you are using the correct `prompt_template`. This is usually provided on the HuggingFace model's description page.

```json title="model.json"
{
  // highlight-start
  "sources": [
    {
      "filename": "trinity-v1.Q4_K_M.gguf",
      "url": "https://huggingface.co/janhq/trinity-v1-GGUF/resolve/main/trinity-v1.Q4_K_M.gguf"
    }
  ],
  "id": "trinity-v1-7b",
  // highlight-end
  "object": "model",
  "name": "Trinity-v1 7B Q4",
  "version": "1.0",
  "description": "Trinity is an experimental model merge of GreenNodeLM & LeoScorpius using the Slerp method. Recommended for daily assistance purposes.",
  "format": "gguf",
  "settings": {
    "ctx_len": 4096,
    // highlight-next-line
    "prompt_template": "{system_message}\n### Instruction:\n{prompt}\n### Response:",
    "llama_model_path": "trinity-v1.Q4_K_M.gguf"
  },
  "parameters": {
    "max_tokens": 4096
  },
  "metadata": {
    "author": "Jan",
    "tags": ["7B", "Merged"],
    "size": 4370000000
  },
  "engine": "nitro"
}
```

### 3. Download the Model

Restart Jan and navigate to the Hub. Locate your model and click the `Download` button to download the model binary.

![](./assets/)

Your model is now ready to use in Jan.

## Assistance and Support

If you have questions or are looking for more preconfigured GGUF models, please feel free to join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.

@ -1,84 +0,0 @@

---
title: Import Models Using Absolute Filepath
slug: /guides/using-models/import-models-using-absolute-filepath
description: Guide to import a model using an absolute filepath in Jan.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    import-models-manually,
    absolute-filepath,
  ]
---

In this guide, we will walk you through the process of importing a model using an absolute filepath in Jan, using our latest model, [Trinity](https://huggingface.co/janhq/trinity-v1-GGUF), as an example.

### 1. Get the Absolute Filepath of the Model

After downloading the `.gguf` model file, note the absolute filepath of the file.
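
On macOS and Linux, you can print it from a terminal; the download location below is a hypothetical example:

```bash
# Prints the absolute path of the downloaded model file
realpath ~/Downloads/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf
```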

### 2. Configure the Model JSON

1. Navigate to the `~/jan/models` folder.
2. Create a folder named `<modelname>`, for example, `tinyllama`.
3. Create a `model.json` file inside the folder, including the following configurations:

- Ensure the `id` property matches the folder name you created.
- Ensure the `url` property is set to the absolute filepath of the model file from step 1.
- Ensure the `engine` property is set to `nitro`.

```json
{
  "sources": [
    {
      "filename": "tinyllama.gguf",
      // highlight-next-line
      "url": "<absolute-filepath-of-the-model-file>"
    }
  ],
  "id": "tinyllama-1.1b",
  "object": "model",
  "name": "(Absolute Path) TinyLlama Chat 1.1B Q4",
  "version": "1.0",
  "description": "TinyLlama is a tiny model with only 1.1B parameters. It's a good model for less powerful computers.",
  "format": "gguf",
  "settings": {
    "ctx_len": 4096,
    "prompt_template": "<|system|>\n{system_message}<|user|>\n{prompt}<|assistant|>",
    "llama_model_path": "tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf"
  },
  "parameters": {
    "temperature": 0.7,
    "top_p": 0.95,
    "stream": true,
    "max_tokens": 2048,
    "stop": [],
    "frequency_penalty": 0,
    "presence_penalty": 0
  },
  "metadata": {
    "author": "TinyLlama",
    "tags": ["Tiny", "Foundation Model"],
    "size": 669000000
  },
  "engine": "nitro"
}
```

:::warning

- If you are using Windows, you need to use double backslashes in the `url` property, for example: `C:\\Users\\username\\filename.gguf`.

:::

### 3. Start the Model

Restart Jan and navigate to the Hub. Locate your model and click the **Use** button.

![](./assets/)

@ -1,166 +0,0 @@

---
title: Integrate With a Remote Server
slug: /guides/using-models/integrate-with-remote-server
description: Guide to integrate with a remote server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    import-models-manually,
    remote server,
    OAI compatible,
  ]
---

:::caution
This is currently under development.
:::

In this guide, we will show you how to configure Jan as a client and point it to any remote or local (self-hosted) API server.

## OpenAI Platform Configuration

In this section, we will show you how to configure Jan with the OpenAI Platform, using the OpenAI GPT 3.5 Turbo 16k model as an example.

### 1. Create a Model JSON

Navigate to the `~/jan/models` folder. Create a folder named `gpt-3.5-turbo-16k` and create a `model.json` file inside the folder with the following configurations:

- Ensure the filename is `model.json`.
- Ensure the `id` property matches the folder name you created.
- Ensure the `format` property is set to `api`.
- Ensure the `engine` property is set to `openai`.
- Ensure the `state` property is set to `ready`.

```json title="~/jan/models/gpt-3.5-turbo-16k/model.json"
{
  "sources": [
    {
      "filename": "openai",
      "url": "https://openai.com"
    }
  ],
  // highlight-next-line
  "id": "gpt-3.5-turbo-16k",
  "object": "model",
  "name": "OpenAI GPT 3.5 Turbo 16k",
  "version": "1.0",
  "description": "OpenAI GPT 3.5 Turbo 16k model is extremely good",
  // highlight-start
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "OpenAI",
    "tags": ["General", "Big Context Length"]
  },
  "engine": "openai"
  // highlight-end
}
```

:::tip

- You can find the list of available models on the [OpenAI Platform](https://platform.openai.com/docs/models/overview).
- Please note that the `id` property needs to match the model name in the list. For example, if you want to use [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo), you need to set the `id` property to `gpt-4-1106-preview`.

:::

### 2. Configure OpenAI API Keys

You can find your API keys on the [OpenAI Platform](https://platform.openai.com/api-keys); set them in the `~/jan/engines/openai.json` file.

```json title="~/jan/engines/openai.json"
{
  "full_url": "https://api.openai.com/v1/chat/completions",
  // highlight-next-line
  "api_key": "sk-<your key here>"
}
```

### 3. Start the Model

Restart Jan and navigate to the Hub. Then, select your configured model and start the model.

![](./assets/)

## Engines with OAI Compatible Configuration

In this section, we will show you how to configure a client connection to a remote or local server, using Jan's API server running the model `mistral-ins-7b-q4` as an example.

:::note

- Please note that at the moment, you can only connect to one OpenAI-compatible endpoint at a time.

:::

### 1. Configure a Client Connection

Navigate to the `~/jan/engines` folder and modify the `openai.json` file. Please note that, at the moment, the code supporting OpenAI-compatible endpoints only reads the `engine/openai.json` file; it will not search any other files in this directory.

Configure the `full_url` property with the endpoint of the server that you want to connect to. For example, if you want to connect to Jan's API server, you can configure it as follows:

```json title="~/jan/engines/openai.json"
{
  // highlight-start
  // "full_url": "https://<server-ip-address>:<port>/v1/chat/completions"
  "full_url": "https://<server-ip-address>:1337/v1/chat/completions"
  // highlight-end
  // Skip api_key if your local server does not require authentication
  // "api_key": "sk-<your key here>"
}
```
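
Before pointing Jan at the endpoint, you can sanity-check that it responds; this is a sketch that assumes the server requires no authentication, and the scheme and host should match your `full_url`:

```bash
curl https://<server-ip-address>:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistral-ins-7b-q4",
    "messages": [{ "role": "user", "content": "Hello!" }]
  }'
```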

### 2. Create a Model JSON

Navigate to the `~/jan/models` folder. Create a folder named `mistral-ins-7b-q4` and create a `model.json` file inside the folder with the following configurations:

- Ensure the filename is `model.json`.
- Ensure the `id` property matches the folder name you created.
- Ensure the `format` property is set to `api`.
- Ensure the `engine` property is set to `openai`.
- Ensure the `state` property is set to `ready`.

```json title="~/jan/models/mistral-ins-7b-q4/model.json"
{
  "sources": [
    {
      "filename": "janai",
      "url": "https://jan.ai"
    }
  ],
  // highlight-next-line
  "id": "mistral-ins-7b-q4",
  "object": "model",
  "name": "Mistral Instruct 7B Q4 on Jan API Server",
  "version": "1.0",
  "description": "Jan integration with remote Jan API server",
  // highlight-next-line
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "MistralAI, The Bloke",
    "tags": ["remote", "awesome"]
  },
  // highlight-start
  "engine": "openai"
  // highlight-end
}
```

### 3. Start the Model

Restart Jan and navigate to the Hub. Locate your model and click the **Use** button.

![](./assets/)

## Assistance and Support

If you have questions or are looking for more preconfigured GGUF models, please feel free to join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.

@ -1,80 +0,0 @@

---
title: Customize Engine Settings
slug: /guides/using-models/customize-engine-settings
description: Guide to customize engine settings.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    import-models-manually,
    customize-engine-settings,
  ]
---

{/* Imports */}
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";

In this guide, we will show you how to customize the engine settings.

1. Navigate to the `~/jan/engines` folder. You can find this folder by going to `App Settings` > `Advanced` > `Open App Directory`.

<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">

```sh
cd ~/jan/engines
```

</TabItem>
<TabItem value="win" label="Windows">

```sh
C:/Users/<your_user_name>/jan/engines
```

</TabItem>
<TabItem value="linux" label="Linux">

```sh
cd ~/jan/engines
```

</TabItem>
</Tabs>

2. Modify the `nitro.json` file based on your needs. The default settings are shown below.

```json title="~/jan/engines/nitro.json"
{
  "ctx_len": 2048,
  "ngl": 100,
  "cpu_threads": 1,
  "cont_batching": false,
  "embedding": false
}
```

The table below describes the parameters in the `nitro.json` file.

| Parameter       | Type    | Description                                                 |
| --------------- | ------- | ----------------------------------------------------------- |
| `ctx_len`       | Integer | The context length for model operations.                    |
| `ngl`           | Integer | The number of GPU layers to use.                            |
| `cpu_threads`   | Integer | The number of threads to use for inference (CPU mode only). |
| `cont_batching` | Boolean | Whether to use continuous batching.                         |
| `embedding`     | Boolean | Whether to enable embeddings in the model.                  |

:::tip

- By default, `ngl` is set to 100, which offloads all layers to the GPU. If you wish to offload only about 50% of a model, you can set `ngl` to 15, since most Mistral or Llama models have around 30 layers.
- To use the embedding feature, set the JSON parameter `"embedding": true`. This enables Nitro to process inference requests with embedding capabilities. For a more detailed explanation, please refer to [Embedding in the Nitro documentation](https://nitro.jan.ai/features/embed).
- To use the continuous batching feature to boost throughput and minimize latency in large language model (LLM) inference, please refer to [Continuous Batching in the Nitro documentation](https://nitro.jan.ai/features/cont-batch).

:::

@ -1,21 +0,0 @@

---
title: Using Models
slug: /guides/using-models/
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    using-models,
  ]
---

import DocCardList from "@theme/DocCardList";

<DocCardList className="DocCardList" />
@ -1,72 +0,0 @@

---
title: Start Local Server
slug: /guides/using-server/start-server
description: How to run Jan's built-in API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    local server,
    api server,
  ]
---

Jan ships with a built-in API server that can be used as a drop-in, local replacement for OpenAI's API. You can run your server by following these simple steps.

## Open Local API Server View

Navigate to the Local API Server view by clicking the corresponding icon on the left side of the screen.

<br></br>

![](./assets/)

## Choosing a Model

On the top right of your screen under `Model Settings`, set the LLM that your local server will be running. You can choose from any of the models already installed, or pick a new model by clicking `Explore the Hub`.

<br></br>

![](./assets/)

## Server Options

On the left side of your screen, you can set custom server options.

<br></br>

![](./assets/)

### Local Server Address

By default, Jan is accessible only on localhost (`127.0.0.1`). This means the local server can only be accessed from the same machine on which it is running.

You can make the local server more widely accessible by clicking on the address and choosing `0.0.0.0` instead, which allows the server to be accessed from other devices on the local network. This is less secure than choosing localhost and should be done with caution.

### Port

Jan runs on port `1337` by default. You can change the port to any other port number if needed.

### Cross-Origin Resource Sharing (CORS)

Cross-Origin Resource Sharing (CORS) manages resource access on the local server from external domains. It is enabled by default for security, and can be disabled if needed.

### Verbose Server Logs

The center of the screen displays the server logs as the local server runs. This option provides extensive details about server activities.

## Start Server

Click the `Start Server` button on the top left of your screen. You will see the server log display a message such as `Server listening at http://127.0.0.1:1337`, and the `Start Server` button will change to a red `Stop Server` button.

<br></br>

![](./assets/)

Your server is now running, and you can use the server address and port to make requests to the local server.
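
You can quickly confirm the server is reachable from a terminal; the address below assumes the defaults described above:

```bash
# Prints an HTTP status code if the server is up (the exact code may vary)
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:1337
```
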
@ -1,102 +0,0 @@

---
title: Using Jan's Built-in API Server
description: How to use Jan's built-in API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    local server,
    api server,
  ]
---
|
||||
|
||||
Jan's built-in API server is compatible with [OpenAI's API](https://platform.openai.com/docs/api-reference) and can be used as a drop-in, local replacement. Follow these steps to use the API server.

## Open the API Reference

Jan contains a comprehensive API reference. The reference displays all the available API endpoints, gives you example requests and responses, and allows you to execute them in the browser.

On the top left of your screen, below the red `Stop Server` button, is the blue `API Reference` button. Clicking it will open the reference in your browser.

<br></br>

![]()

Scroll through the various available endpoints to learn what options are available, and try them out by executing the example requests. You can also use the [Jan API Reference](https://jan.ai/api-reference/) on the Jan website.

### Chat

In the Chat section of the API reference, you will see an example JSON request body.

<br></br>

![]()

With your local server running, click the `Try it out` button on the top left, then the blue `Execute` button below the JSON. The browser will send the example request to your server and display the response body below.

Use the API endpoints and the example request and response bodies as models for your own application.

### cURL Request Example

Here is an example curl request with a local server running `tinyllama-1.1b`:

<br></br>
```bash
curl http://localhost:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {
        "content": "You are a helpful assistant.",
        "role": "system"
      },
      {
        "content": "Hello!",
        "role": "user"
      }
    ],
    "model": "tinyllama-1.1b",
    "stream": true,
    "max_tokens": 2048,
    "stop": ["hello"],
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "temperature": 0.7,
    "top_p": 0.95
  }'
```

### Response Body Example

Note that the example request above sets `"stream": true`, in which case the server streams the reply as a sequence of chunks; the non-streaming response shape is shown below for readability:

```json
{
  "choices": [
    {
      "finish_reason": null,
      "index": 0,
      "message": {
        "content": "Hello user. What can I help you with?",
        "role": "assistant"
      }
    }
  ],
  "created": 1700193928,
  "id": "ebwd2niJvJB1Q2Whyvkz",
  "model": "_",
  "object": "chat.completion",
  "system_fingerprint": "_",
  "usage": {
    "completion_tokens": 500,
    "prompt_tokens": 33,
    "total_tokens": 533
  }
}
```
---
title: Using the Local Server
slug: /guides/using-server/
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    using-server,
  ]
---

import DocCardList from "@theme/DocCardList";

<DocCardList />
---
title: Extension Settings
slug: /guides/using-extensions/extension-settings/
description: Configure settings for extensions.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    extension settings,
  ]
---

The current Jan Desktop Client ships with some default extensions built on top of the extensions framework to enhance the user experience. In this guide, we will show you the list of default extensions and how to configure extension settings.

## Default Extensions

You can find the default extensions in `Settings` > `Extensions`.

![]()

### List of Default Extensions

| Extension Name | Version | Description | Source Code Link |
| -------------- | ------- | ----------- | ---------------- |
| Assistant Extension | v1.0.0 | This extension enables assistants, including Jan, a default assistant that can call all downloaded models. | [Link to Source](https://github.com/janhq/jan/tree/main/extensions/assistant-extension) |
| Conversational Extension | v1.0.0 | This extension enables conversations and state persistence via your filesystem. | [Link to Source](https://github.com/janhq/jan/tree/main/extensions/conversational-extension) |
| Inference Nitro Extension | v1.0.0 | This extension embeds Nitro, a lightweight (3mb) inference engine written in C++. See [nitro.jan.ai](https://nitro.jan.ai). | [Link to Source](https://github.com/janhq/jan/tree/main/extensions/inference-nitro-extension) |
| Inference OpenAI Extension | v1.0.0 | This extension enables OpenAI chat completion API calls. | [Link to Source](https://github.com/janhq/jan/tree/main/extensions/inference-openai-extension) |
| Inference Triton TRT-LLM Extension | v1.0.0 | This extension enables Nvidia's TensorRT-LLM as an inference engine option. | [Link to Source](https://github.com/janhq/jan/tree/main/extensions/inference-triton-trtllm-extension) |
| Model Extension | v1.0.22 | Model Management Extension provides model exploration and seamless downloads. | [Link to Source](https://github.com/janhq/jan/tree/main/extensions/model-extension) |
| Monitoring Extension | v1.0.9 | This extension provides system health and OS level data. | [Link to Source](https://github.com/janhq/jan/tree/main/extensions/monitoring-extension) |

## Configure Extension Settings

You can configure extension settings by modifying the `extensions.json` file under the `~/jan/extensions` directory. It supports the following fields:

- `_active`: `true` means the extension is enabled. To disable an extension, set it to `false`.
- `listeners`: `{}` is the default value for listeners.
- `origin`: the path to the extension file.
- `installOptions`: install options, with `version` and `fullMetadata` flags.
- `name`: the name of the extension.
- `version`: the version of the extension.
- `main`: the path to the main file of the extension.
- `description`: the description of the extension.
- `url`: the url of the extension.

```json title="~/jan/extensions/extensions.json"
{
  "@janhq/assistant-extension": {
    "_active": true,
    "listeners": {},
    "origin": "/Applications/Jan.app/Contents/Resources/app.asar.unpacked/pre-install/janhq-assistant-extension-1.0.0.tgz",
    "installOptions": { "version": false, "fullMetadata": false },
    "name": "@janhq/assistant-extension",
    "version": "1.0.0",
    "main": "dist/index.js",
    "description": "This extension enables assistants, including Jan, a default assistant that can call all downloaded models",
    "url": "extension://@janhq/assistant-extension/dist/index.js"
  },
  "@janhq/conversational-extension": {
    "_active": true,
    "listeners": {},
    "origin": "/Applications/Jan.app/Contents/Resources/app.asar.unpacked/pre-install/janhq-conversational-extension-1.0.0.tgz",
    "installOptions": { "version": false, "fullMetadata": false },
    "name": "@janhq/conversational-extension",
    "version": "1.0.0",
    "main": "dist/index.js",
    "description": "This extension enables conversations and state persistence via your filesystem",
    "url": "extension://@janhq/conversational-extension/dist/index.js"
  },
  "@janhq/inference-nitro-extension": {
    "_active": true,
    "listeners": {},
    "origin": "/Applications/Jan.app/Contents/Resources/app.asar.unpacked/pre-install/janhq-inference-nitro-extension-1.0.0.tgz",
    "installOptions": { "version": false, "fullMetadata": false },
    "name": "@janhq/inference-nitro-extension",
    "version": "1.0.0",
    "main": "dist/index.js",
    "description": "This extension embeds Nitro, a lightweight (3mb) inference engine written in C++. See nitro.jan.ai",
    "url": "extension://@janhq/inference-nitro-extension/dist/index.js"
  },
  "@janhq/inference-openai-extension": {
    "_active": true,
    "listeners": {},
    "origin": "/Applications/Jan.app/Contents/Resources/app.asar.unpacked/pre-install/janhq-inference-openai-extension-1.0.0.tgz",
    "installOptions": { "version": false, "fullMetadata": false },
    "name": "@janhq/inference-openai-extension",
    "version": "1.0.0",
    "main": "dist/index.js",
    "description": "This extension enables OpenAI chat completion API calls",
    "url": "extension://@janhq/inference-openai-extension/dist/index.js"
  },
  "@janhq/inference-triton-trt-llm-extension": {
    "_active": true,
    "listeners": {},
    "origin": "/Applications/Jan.app/Contents/Resources/app.asar.unpacked/pre-install/janhq-inference-triton-trt-llm-extension-1.0.0.tgz",
    "installOptions": { "version": false, "fullMetadata": false },
    "name": "@janhq/inference-triton-trt-llm-extension",
    "version": "1.0.0",
    "main": "dist/index.js",
    "description": "This extension enables Nvidia's TensorRT-LLM as an inference engine option",
    "url": "extension://@janhq/inference-triton-trt-llm-extension/dist/index.js"
  },
  "@janhq/model-extension": {
    "_active": true,
    "listeners": {},
    "origin": "/Applications/Jan.app/Contents/Resources/app.asar.unpacked/pre-install/janhq-model-extension-1.0.22.tgz",
    "installOptions": { "version": false, "fullMetadata": false },
    "name": "@janhq/model-extension",
    "version": "1.0.22",
    "main": "dist/index.js",
    "description": "Model Management Extension provides model exploration and seamless downloads",
    "url": "extension://@janhq/model-extension/dist/index.js"
  },
  "@janhq/monitoring-extension": {
    "_active": true,
    "listeners": {},
    "origin": "/Applications/Jan.app/Contents/Resources/app.asar.unpacked/pre-install/janhq-monitoring-extension-1.0.9.tgz",
    "installOptions": { "version": false, "fullMetadata": false },
    "name": "@janhq/monitoring-extension",
    "version": "1.0.9",
    "main": "dist/index.js",
    "description": "This extension provides system health and OS level data",
    "url": "extension://@janhq/monitoring-extension/dist/index.js"
  }
}
```
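To toggle an extension from the command line instead of editing the file by hand, a tool like `jq` can rewrite the flag. A minimal sketch, assuming `jq` is installed and the default `~/jan` data folder:

```bash
# Disable the monitoring extension by setting its _active flag to false
jq '."@janhq/monitoring-extension"._active = false' ~/jan/extensions/extensions.json > /tmp/extensions.json \
  && mv /tmp/extensions.json ~/jan/extensions/extensions.json
```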
---
title: Import Extensions
slug: /guides/using-extensions/import-extensions/
description: Import extensions into Jan.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    import extensions,
  ]
---

Besides the default extensions, you can import extensions into Jan by navigating to `Settings` > `Extensions` > `Manual Installation`. The `~/jan/extensions/extensions.json` file will then be updated automatically.

:::caution

You need to prepare the extension file in `.tgz` format to install.

:::
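If you built the extension yourself, you can typically produce the `.tgz` archive from the extension project with `npm pack`. A sketch, assuming a standard npm-based extension project:

```bash
# From the extension's root folder (where package.json lives)
npm install && npm run build   # assumes the project defines a build script
npm pack                       # emits <name>-<version>.tgz in the current folder
```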

![]()

If you want to build your own extension, please refer to [Build Your First Extension | Developer Documentation](/developer/build-extension/your-first-extension/).
---
title: Using Extensions
slug: /guides/using-extensions/
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    using-extensions,
  ]
---

import DocCardList from "@theme/DocCardList";

<DocCardList />
---
title: Integrate Continue with Jan and VS Code
slug: /guides/integrations/continue
description: Guide to integrate Continue with Jan and VS Code
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    Continue integration,
    VSCode integration,
  ]
---

{/* Imports */}
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";

## Quick Introduction

[Continue](https://continue.dev/docs/intro) is an open-source autopilot for VS Code and JetBrains, and an easy way to code with any LLM.

In this guide, we will show you how to integrate Continue with Jan and VS Code, enhancing your coding experience with the power of a local AI language model.

## Steps to Integrate Continue with Jan and VS Code

### 1. Install Continue for VS Code

To get started with Continue in VS Code, please follow this [guide to install Continue for VS Code](https://continue.dev/docs/quickstart).

### 2. Enable Jan API Server

To configure Continue to use Jan's local server, first enable the Jan API server with your preferred model by following this [guide to enable the Jan API server](/guides/using-server/start-server).

### 3. Configure Continue to Use Jan's Local Server

Navigate to the `~/.continue` directory.
<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">

```sh
cd ~/.continue
```

</TabItem>
<TabItem value="win" label="Windows">

```sh
cd C:/Users/<your_user_name>/.continue
```

</TabItem>
<TabItem value="linux" label="Linux">

```sh
cd ~/.continue
```

</TabItem>
</Tabs>

Edit the `config.json` file to include the following configuration.

```json title="~/.continue/config.json"
{
  "models": [
    {
      // highlight-next-line
      "title": "Jan",
      "provider": "openai",
      // highlight-start
      "model": "mistral-ins-7b-q4",
      "apiKey": "EMPTY",
      "apiBase": "http://localhost:1337/v1"
      // highlight-end
    }
  ]
}
```

- Ensure that the `provider` is `openai`.
- Ensure that the `model` is the ID of the running model. You can check the respective ID in the System Monitor.
- Ensure that the `apiBase` is `http://localhost:1337/v1`.
- Ensure that the `apiKey` is `EMPTY`.

### 4. Double Check the Model is Running

Open the `System Monitor` to check that your model is currently running.

If there are no active models, go to `Settings` > `My Models`, click the **three dots (⋮)**, and select **Start Model**.

![]()

### 5. Use Continue in VS Code

#### Asking questions about the code

- Highlight a code snippet and press `Command + M` to open the Continue extension in VS Code.
- Select Jan at the bottom and ask a question about the code, for example, `Explain this code`.

![]()

#### Editing the code directly

- Highlight a code snippet, press `Command + Shift + L`, and input your edit request, for example, `Write comments for this code`.

![]()
---
title: Integrate OpenRouter with Jan
slug: /guides/integrations/openrouter
description: Guide to integrate OpenRouter with Jan
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    OpenRouter integration,
  ]
---

## Quick Introduction

[OpenRouter](https://openrouter.ai/docs#quick-start) is an AI model aggregator. Developers can use its API to interact with a variety of large language models, generative image models, and generative 3D object models.

In this guide, we will show you how to integrate OpenRouter with Jan, enabling you to leverage the remote Large Language Models (LLMs) available on OpenRouter.

## Steps to Integrate OpenRouter with Jan

### 1. Configure the OpenRouter API Key

You can find your API keys on the [OpenRouter API Key](https://openrouter.ai/keys) page. Set the OpenRouter API key in the `~/jan/engines/openai.json` file.

```json title="~/jan/engines/openai.json"
{
  // highlight-start
  "full_url": "https://openrouter.ai/api/v1/chat/completions",
  "api_key": "sk-or-v1<your-openrouter-api-key-here>"
  // highlight-end
}
```
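Before wiring it into Jan, you can optionally confirm the key works. A sketch, assuming OpenRouter's OpenAI-compatible chat completions endpoint and Bearer authentication:

```bash
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer sk-or-v1<your-openrouter-api-key-here>" \
  -H "Content-Type: application/json" \
  -d '{"model": "cognitivecomputations/dolphin-mixtral-8x7b", "messages": [{"role": "user", "content": "Hello!"}]}'
```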

### 2. Modify a Model JSON

Navigate to the `~/jan/models` folder. Create a folder named `<openrouter-modelname>`, for example, `openrouter-dolphin-mixtral-8x7b`, and create a `model.json` file inside it with the following configurations:

- Ensure the filename is `model.json`.
- Ensure the `id` property is set to the model ID from OpenRouter.
- Ensure the `format` property is set to `api`.
- Ensure the `engine` property is set to `openai`.
- Ensure the `state` property is set to `ready`.

```json title="~/jan/models/openrouter-dolphin-mixtral-8x7b/model.json"
{
  "sources": [
    {
      "filename": "openrouter",
      "url": "https://openrouter.ai/"
    }
  ],
  "id": "cognitivecomputations/dolphin-mixtral-8x7b",
  "object": "model",
  "name": "Dolphin 2.6 Mixtral 8x7B",
  "version": "1.0",
  "description": "This is a 16k context fine-tune of Mixtral-8x7b. It excels in coding tasks due to extensive training with coding data and is known for its obedience, although it lacks DPO tuning. The model is uncensored and is stripped of alignment and bias. It requires an external alignment layer for ethical use. Users are cautioned to use this highly compliant model responsibly, as detailed in a blog post about uncensored models at erichartford.com/uncensored-models.",
  // highlight-next-line
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "tags": ["General", "Big Context Length"]
  },
  // highlight-start
  "engine": "openai"
  // highlight-end
}
```

### 3. Start the Model

Restart Jan and navigate to the Hub. Locate your model and click the **Use** button.

![]()

### 4. Try Out the Integration of Jan and OpenRouter

![]()
---
title: Integrate Azure OpenAI Service with Jan
slug: /guides/integrations/azure-openai-service
description: Guide to integrate Azure OpenAI Service with Jan
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    integration,
    Azure OpenAI Service,
  ]
---

## Quick Introduction

[Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview?source=docs) provides a set of powerful APIs that let you easily integrate OpenAI's language models.

In this guide, we will show you how to integrate Azure OpenAI Service with Jan.

## Steps to Integrate Azure OpenAI Service with Jan

### 1. Configure the Azure OpenAI Service API Key

Once you have finished setting up and deploying Azure OpenAI Service, you can find the endpoint and API key in the [Azure OpenAI Studio](https://oai.azure.com/) under `Chat` > `View code`.

![]()

<br> </br>

![]()

Set the Azure OpenAI Service endpoint and API key in the `~/jan/engines/openai.json` file.

```json title="~/jan/engines/openai.json"
{
  // https://hieujan.openai.azure.com/openai/deployments/gpt-35-hieu-jan/chat/completions?api-version=2023-07-01-preview
  // highlight-start
  "full_url": "https://<your-resource-name>.openai.azure.com/openai/deployments/<your-deployment-name>/chat/completions?api-version=<api-version>",
  "api_key": "<your-api-key>"
  // highlight-end
}
```
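You can optionally confirm that the deployment responds before configuring Jan. A sketch, assuming the `api-key` header authentication used by Azure OpenAI Service:

```bash
curl "https://<your-resource-name>.openai.azure.com/openai/deployments/<your-deployment-name>/chat/completions?api-version=<api-version>" \
  -H "Content-Type: application/json" \
  -H "api-key: <your-api-key>" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```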

### 2. Modify a Model JSON

Navigate to the `~/jan/models` folder. Create a folder named `<your-deployment-name>`, for example, `gpt-35-hieu-jan`, and create a `model.json` file inside it with the following configurations:

- Ensure the filename is `model.json`.
- Ensure the `id` property is set to the folder name, which is also your deployment name.
- Ensure the `format` property is set to `api`.
- Ensure the `engine` property is set to `openai`.
- Ensure the `state` property is set to `ready`.

```json title="~/jan/models/gpt-35-hieu-jan/model.json"
{
  "sources": [
    {
      "filename": "azure_openai",
      "url": "https://hieujan.openai.azure.com"
    }
  ],
  // highlight-next-line
  "id": "gpt-35-hieu-jan",
  "object": "model",
  "name": "Azure OpenAI GPT 3.5",
  "version": "1.0",
  "description": "Azure Open AI GPT 3.5 model is extremely good",
  // highlight-next-line
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "OpenAI",
    "tags": ["General", "Big Context Length"]
  },
  // highlight-start
  "engine": "openai"
  // highlight-end
}
```

### 3. Start the Model

Restart Jan and navigate to the Hub. Locate your model and click the **Use** button.

![]()

### 4. Try Out the Integration of Jan and Azure OpenAI Service

![]()
---
title: Integrate Mistral AI with Jan
slug: /guides/integrations/mistral-ai
description: Guide to integrate Mistral AI with Jan
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    Mistral integration,
  ]
---

## Quick Introduction

[Mistral AI](https://docs.mistral.ai/) currently provides two ways of accessing their Large Language Models (LLMs): via their API, or via open-source models available on Hugging Face. In this guide, we will show you how to integrate Mistral AI with Jan using the API method.

## Steps to Integrate Mistral AI with Jan

### 1. Configure the Mistral API Key

You can find your API keys on the [Mistral API Key](https://console.mistral.ai/user/api-keys/) page. Set the Mistral AI API key in the `~/jan/engines/openai.json` file.

```json title="~/jan/engines/openai.json"
{
  // highlight-start
  "full_url": "https://api.mistral.ai/v1/chat/completions",
  "api_key": "<your-mistral-ai-api-key>"
  // highlight-end
}
```
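You can optionally confirm the key works before configuring Jan. A sketch, assuming Mistral's OpenAI-compatible endpoint and Bearer authentication:

```bash
curl https://api.mistral.ai/v1/chat/completions \
  -H "Authorization: Bearer <your-mistral-ai-api-key>" \
  -H "Content-Type: application/json" \
  -d '{"model": "mistral-tiny", "messages": [{"role": "user", "content": "Hello!"}]}'
```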

### 2. Modify a Model JSON

Navigate to the `~/jan/models` folder. Create a folder named `<mistral-modelname>`, for example, `mistral-tiny`, and create a `model.json` file inside it with the following configurations:

- Ensure the filename is `model.json`.
- Ensure the `id` property is set to the model ID from Mistral AI.
- Ensure the `format` property is set to `api`.
- Ensure the `engine` property is set to `openai`.
- Ensure the `state` property is set to `ready`.

```json title="~/jan/models/mistral-tiny/model.json"
{
  "sources": [
    {
      "filename": "mistral-tiny",
      "url": "https://mistral.ai/"
    }
  ],
  "id": "mistral-tiny",
  "object": "model",
  "name": "Mistral-7B-v0.2 (Tiny Endpoint)",
  "version": "1.0",
  "description": "Currently powered by Mistral-7B-v0.2, a better fine-tuning of the initial Mistral-7B released, inspired by the fantastic work of the community.",
  // highlight-next-line
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "Mistral AI",
    "tags": ["General", "Big Context Length"]
  },
  // highlight-start
  "engine": "openai"
  // highlight-end
}
```

:::tip

Mistral AI provides different endpoints. Please check out their [endpoint documentation](https://docs.mistral.ai/platform/endpoints/) to find the one that suits your needs. In this example, we use the `mistral-tiny` model.

:::

### 3. Start the Model

Restart Jan and navigate to the Hub. Locate your model and click the **Use** button.

![]()

### 4. Try Out the Integration of Jan and Mistral AI

![]()
---
title: Integrate LM Studio with Jan
slug: /guides/integrations/lmstudio
description: Guide to integrate LM Studio with Jan
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    LM Studio integration,
  ]
---

## Quick Introduction

With [LM Studio](https://lmstudio.ai/), you can discover, download, and run local Large Language Models (LLMs). In this guide, we will show you how to integrate and use your current LM Studio models with Jan using two methods. The first method integrates the LM Studio server with the Jan UI. The second method migrates your downloaded model from LM Studio to Jan. We will use the [Phi 2 - GGUF](https://huggingface.co/TheBloke/phi-2-GGUF) model on Hugging Face as an example.

## Steps to Integrate LM Studio Server with Jan UI

### 1. Start the LM Studio Server

1. Navigate to the `Local Inference Server` on the LM Studio application.
2. Select the model you want to use.
3. Start the server after configuring the server port and options.

![]()

<br></br>

Modify the `openai.json` file in the `~/jan/engines` folder to include the full URL of the LM Studio server.

```json title="~/jan/engines/openai.json"
{
  "full_url": "http://localhost:<port>/v1/chat/completions"
}
```

:::tip

- Replace `<port>` with the port number you set in the LM Studio server. The default port is `1234`.

:::
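You can optionally check that the LM Studio server answers before pointing Jan at it. A sketch, assuming the default port and LM Studio's OpenAI-compatible endpoint:

```bash
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```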

### 2. Modify a Model JSON

Navigate to the `~/jan/models` folder. Create a folder named `<lmstudio-modelname>`, for example, `lmstudio-phi-2`, and create a `model.json` file inside it with the following configurations:

- Set the `format` property to `api`.
- Set the `engine` property to `openai`.
- Set the `state` property to `ready`.

```json title="~/jan/models/lmstudio-phi-2/model.json"
{
  "sources": [
    {
      "filename": "phi-2-GGUF",
      "url": "https://huggingface.co/TheBloke/phi-2-GGUF"
    }
  ],
  "id": "lmstudio-phi-2",
  "object": "model",
  "name": "LM Studio - Phi 2 - GGUF",
  "version": "1.0",
  "description": "TheBloke/phi-2-GGUF",
  // highlight-next-line
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "Microsoft",
    "tags": ["General", "Big Context Length"]
  },
  // highlight-start
  "engine": "openai"
  // highlight-end
}
```

### 3. Start the Model

1. Restart Jan and navigate to the **Hub**.
2. Locate your model and click the **Use** button.

![]()

### 4. Try Out the Integration of Jan and LM Studio

![]()

## Steps to Migrate Your Downloaded Model from LM Studio to Jan (version 0.4.6 and older)

### 1. Migrate Your Downloaded Model

1. Navigate to `My Models` in the LM Studio application and reveal the model folder.

![]()

2. Copy the model folder that you want to migrate to the `~/jan/models` folder.

3. Ensure the folder name matches the model name in the `.gguf` filename, renaming the folder if necessary. For example, in this case, we renamed the folder from `TheBloke` to `phi-2.Q4_K_S`.

### 2. Start the Model

1. Restart Jan and navigate to the **Hub**. Jan will automatically detect the model and display it in the **Hub**.
2. Locate your model and click the **Use** button to try the migrated model.

![]()

## Steps to Point to the Downloaded Model of LM Studio from Jan (version 0.4.7+)

Starting from version 0.4.7, Jan supports importing models using an absolute filepath, so you can use a model directly from the LM Studio folder.

### 1. Reveal the Model Absolute Path

Navigate to `My Models` in the LM Studio application and reveal the model folder. From there, you can get the absolute path of your model.

![]()

### 2. Modify a Model JSON

Navigate to the `~/jan/models` folder. Create a folder named `<modelname>`, for example, `phi-2.Q4_K_S`, and create a `model.json` file inside it with the following configurations:

- Ensure the `id` property matches the folder name you created.
- Ensure the `url` property, which normally holds the direct binary download link ending in `.gguf`, is set to the absolute filepath of the model file. In this example, the absolute filepath is `/Users/<username>/.cache/lm-studio/models/TheBloke/phi-2-GGUF/phi-2.Q4_K_S.gguf`.
- Ensure the `engine` property is set to `nitro`.

```json
{
  "object": "model",
  "version": 1,
  "format": "gguf",
  "sources": [
    {
      "filename": "phi-2.Q4_K_S.gguf",
      "url": "<absolute-path-of-model-file>"
    }
  ],
  "id": "phi-2.Q4_K_S",
  "name": "phi-2.Q4_K_S",
  "created": 1708308111506,
  "description": "phi-2.Q4_K_S - user self import model",
  "settings": {
    "ctx_len": 4096,
    "embedding": false,
    "prompt_template": "{system_message}\n### Instruction: {prompt}\n### Response:",
    "llama_model_path": "phi-2.Q4_K_S.gguf"
  },
  "parameters": {
    "temperature": 0.7,
    "top_p": 0.95,
    "stream": true,
    "max_tokens": 2048,
    "stop": ["<endofstring>"],
    "frequency_penalty": 0,
    "presence_penalty": 0
  },
  "metadata": {
    "size": 1615568736,
    "author": "User",
    "tags": []
  },
  "engine": "nitro"
}
```

:::warning

- If you are using Windows, you need to use double backslashes in the `url` property, for example: `C:\\Users\\username\\filename.gguf`.

:::

### 3. Start the Model

1. Restart Jan and navigate to the **Hub**.
2. Jan will automatically detect the model and display it in the **Hub**.
3. Locate your model and click the **Use** button to try the migrated model.

![]()
---
title: Integrate Ollama with Jan
slug: /guides/integrations/ollama
description: Guide to integrate Ollama with Jan
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    Ollama integration,
  ]
---

## Quick Introduction

With [Ollama](https://ollama.com/), you can run large language models locally. In this guide, we will show you how to integrate and use your current Ollama models with Jan using two methods. The first method integrates the Ollama server with the Jan UI. The second method migrates your downloaded model from Ollama to Jan. We will use the [llama2](https://ollama.com/library/llama2) model as an example.

## Steps to Integrate Ollama Server with Jan UI

### 1. Start the Ollama Server

1. Select the model you want to use from the [Ollama library](https://ollama.com/library).
2. Run your model with the following command:

```bash
ollama run <model-name>
```

3. According to the [Ollama documentation on OpenAI compatibility](https://github.com/ollama/ollama/blob/main/docs/openai.md), you can use the `http://localhost:11434/v1/chat/completions` endpoint to interact with the Ollama server. Modify the `openai.json` file in the `~/jan/engines` folder to include the full URL of the Ollama server.

```json title="~/jan/engines/openai.json"
{
  "full_url": "http://localhost:11434/v1/chat/completions"
}
```

### 2. Modify a Model JSON

1. Navigate to the `~/jan/models` folder.
2. Create a folder named `<ollama-modelname>`, for example, `llama2`.
3. Create a `model.json` file inside the folder with the following configurations:

- Set the `id` property to the Ollama model name.
- Set the `format` property to `api`.
- Set the `engine` property to `openai`.
- Set the `state` property to `ready`.

```json title="~/jan/models/llama2/model.json"
{
  "sources": [
    {
      "filename": "llama2",
      "url": "https://ollama.com/library/llama2"
    }
  ],
  // highlight-next-line
  "id": "llama2",
  "object": "model",
  "name": "Ollama - Llama2",
  "version": "1.0",
  "description": "Llama 2 is a collection of foundation language models ranging from 7B to 70B parameters.",
  // highlight-next-line
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "Meta",
    "tags": ["General", "Big Context Length"]
  },
  // highlight-next-line
  "engine": "openai"
}
```

### 3. Start the Model

1. Restart Jan and navigate to the **Hub**.
2. Locate your model and click the **Use** button.

![]()

### 4. Try Out the Integration of Jan and Ollama

![]()
---
title: Integrations
slug: /guides/integrations/
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    troubleshooting,
  ]
---

import DocCardList from "@theme/DocCardList";

<DocCardList />
---
title: Stuck on a Broken Build
slug: /troubleshooting/stuck-on-broken-build
description: Troubleshooting steps to resolve issues related to broken builds.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    troubleshooting,
  ]
---

{/* Imports */}
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";

The following steps will help you troubleshoot and resolve issues related to broken builds.

1. Uninstall Jan

<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">

Delete Jan from your `/Applications` folder.

</TabItem>
<TabItem value="win" label="Windows">

To uninstall Jan on Windows, use the [Windows Control Panel](https://support.microsoft.com/en-us/windows/uninstall-or-remove-apps-and-programs-in-windows-4b55f974-2cc6-2d2b-d092-5905080eaf98).

</TabItem>
<TabItem value="linux" label="Linux">

To uninstall Jan on Linux, use your package manager's uninstall or remove option. For Debian/Ubuntu-based distributions, if you installed Jan via the `.deb` package, you can uninstall it with the following command:

```bash
sudo apt-get remove jan
```

</TabItem>
</Tabs>

2. Delete the application data, cache, and user data folders

<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">

```bash
# Step 1: Delete the application data
## Newer versions
rm -rf ~/Library/Application\ Support/jan
## Versions 0.2.0 and older
rm -rf ~/Library/Application\ Support/jan-electron

# Step 2: Clear application cache
rm -rf ~/Library/Caches/jan*

# Step 3: Remove all user data
rm -rf ~/jan
```

</TabItem>
<TabItem value="win" label="Windows">

```bash
# You can delete the `Jan` directory in Windows's AppData directory by visiting the following path: `%APPDATA%\Jan`
cd C:\Users\%USERNAME%\AppData\Roaming
rmdir /S jan
```

</TabItem>
<TabItem value="linux" label="Linux">

```bash
# You can delete the user data folder located at `~/jan`
rm -rf ~/jan
```

</TabItem>
</Tabs>

3. If you are using a version before `0.4.2`, you also need to run the following commands to stop any leftover Nitro processes:

<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">

```bash
ps aux | grep nitro
# Look for processes like `nitro` and `nitro_arm_64`, and kill them one by one by process ID
kill -9 <PID>
```

</TabItem>
<TabItem value="win" label="Windows">

```bash
# Find the process ID (PID) of the nitro process by filtering the list by process name
tasklist | findstr "nitro"
# Once you have the PID of the process you want to terminate, run `taskkill`
taskkill /F /PID <PID>
```

</TabItem>
<TabItem value="linux" label="Linux">

```bash
ps aux | grep nitro
# Look for processes like `nitro` and `nitro_arm_64`, and kill them one by one by process ID
kill -9 <PID>
```

</TabItem>
</Tabs>

4. Download the latest version via our homepage, [https://jan.ai/](https://jan.ai/).

:::note

If Jan is installed on multiple user accounts on your device, ensure it is completely removed from all shared spaces before reinstalling.

:::
---
title: Something's Amiss
slug: /troubleshooting/somethings-amiss
description: Troubleshooting "Something's amiss".
keywords:
  [
    jan ai failed to fetch,
    failed to fetch error,
    jan ai error,
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    troubleshooting,
  ]
---

{/* Imports */}
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";

This error was previously labelled "Failed to fetch".

You may receive a "Something's amiss" response when you first start chatting with a selected model.

This may occur for several reasons. Please follow these steps to resolve it:

1. Ensure you are on the latest version of macOS, Windows, or Ubuntu.

   - Upgrading to the latest version has resolved this issue for most people.

2. Select a model that is smaller than 80% of your hardware V/RAM.

   - For example, if you have an 8GB machine, you should select models smaller than 6GB.

3. Install the latest [Nightly release](https://jan.ai/install/nightly/).

   - If you are re-installing Jan, it can help to [clear the application cache](https://jan.ai/troubleshooting/stuck-on-broken-build/).

4. Ensure your V/RAM is accessible by the application (some people have virtual RAM).

5. If you are on an Nvidia GPU, please download [CUDA](https://developer.nvidia.com/cuda-downloads).

6. If you are using Linux, ensure that your system meets the following requirements: gcc 11, g++ 11, cpp 11, or higher. Refer to this [link](https://jan.ai/guides/troubleshooting/gpu-not-used/#specific-requirements-for-linux) for more information.

7. When [checking app logs](https://jan.ai/troubleshooting/how-to-get-error-logs/), if you encounter the error log `Bind address failed at 127.0.0.1:3928`, it indicates that the port used by Nitro might already be in use. Use the following commands to check the port status:

<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">

```bash
netstat -an | grep 3928
```

</TabItem>
<TabItem value="win" label="Windows">

```sh
netstat -ano | find "3928"
tasklist /fi "PID eq 3928"
```

</TabItem>
<TabItem value="linux" label="Linux">

```sh
netstat -anpe | grep "3928"
```

</TabItem>
</Tabs>

:::tip

Jan uses the following ports:

- Nitro: 3928
- Jan API Server: 1337
- Jan Documentation: 3001

:::
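If port `3928` turns out to be held by a stale process, you can free it. A sketch for macOS/Linux, assuming `lsof` is available:

```bash
# Show which process holds the port, then stop it by PID
lsof -i :3928
kill -9 <PID>
```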
---
title: Jan is Not Using GPU
slug: /troubleshooting/gpu-not-used
description: Jan is not using GPU.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    troubleshooting,
    using GPU,
  ]
---

This guide provides steps to troubleshoot and resolve issues when the Jan app does not utilize the GPU on Windows and Linux systems.

## Requirements for Running Jan in GPU Mode on Windows and Linux

### NVIDIA Driver

Ensure that you have installed an NVIDIA driver that supports CUDA 11.7 or higher. For details on CUDA compatibility, please refer [here](https://docs.nvidia.com/deploy/cuda-compatibility/index.html#binary-compatibility__table-toolkit-driver).

To verify, open PowerShell or Terminal and enter the following command:

```bash
nvidia-smi
```

If you see a result similar to the following, you have successfully installed the NVIDIA driver:

```bash
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.57.02    Driver Version: 470.57.02    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0  On |                  N/A |
|  0%   51C    P8    10W / 170W |    364MiB /  7982MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
```

### CUDA Toolkit

Ensure that you have installed a CUDA toolkit that is compatible with your NVIDIA driver. For details on CUDA compatibility, please refer [here](https://docs.nvidia.com/deploy/cuda-compatibility/index.html#binary-compatibility__table-toolkit-driver).

To verify, open PowerShell or Terminal and enter the following command:

```bash
nvcc --version
```

If you see a result similar to the following, you have successfully installed CUDA:

```bash
nvcc: NVIDIA (R) Cuda compiler driver

Cuda compilation tools, release 11.7, V11.7.100
Build cuda_11.7.r11.7/compiler.30033411_0
```

### Specific Requirements for Linux

**GCC and G++ Version**: Ensure that you have installed `gcc-11`, `g++-11`, `cpp-11` or higher; refer [here](https://gcc.gnu.org/projects/cxx-status.html#cxx17). For Ubuntu, you can install g++ 11 by following the instructions [here](https://linuxconfig.org/how-to-switch-between-multiple-gcc-and-g-compiler-versions-on-ubuntu-20-04-lts-focal-fossa).

```bash
# Example for Ubuntu
# Add the following PPA repository
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
# Update the package list
sudo apt update
# Install g++ 11
sudo apt-get install -y gcc-11 g++-11 cpp-11

# Update the default g++ version
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-11 110 \
  --slave /usr/bin/g++ g++ /usr/bin/g++-11 \
  --slave /usr/bin/gcov gcov /usr/bin/gcov-11 \
  --slave /usr/bin/gcc-ar gcc-ar /usr/bin/gcc-ar-11 \
  --slave /usr/bin/gcc-ranlib gcc-ranlib /usr/bin/gcc-ranlib-11
sudo update-alternatives --install /usr/bin/cpp cpp /usr/bin/cpp-11 110
# Check the default g++ version
g++ --version
```

**Post-Installation Actions**: You must add the `.so` libraries of CUDA to the `LD_LIBRARY_PATH` environment variable by following the [Post-installation Actions instructions](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions).

```bash
# Example for Ubuntu with CUDA 11.7
sudo nano /etc/environment
# Add /usr/local/cuda-11.7/bin to the PATH environment variable - the first line
# Add the following line to the end of the file
LD_LIBRARY_PATH=/usr/local/cuda-11.7/lib64

# Save and exit
# Restart your computer or log out and log in again for the changes to take effect
```

## Switching Between CPU/GPU Modes in Jan

By default, Jan runs in CPU mode. Upon start, Jan checks whether your system is capable of running in GPU mode. If it is, GPU mode is enabled automatically, and the GPU with the highest VRAM is selected. You can verify this setting in `Settings` > `Advanced`.

![]()

If you find that GPU mode is available but not enabled by default, consider the following troubleshooting steps:

:::tip

1. Check that you have installed an NVIDIA driver that supports CUDA 11.7 or higher. For details on CUDA compatibility, please refer [here](https://docs.nvidia.com/deploy/cuda-compatibility/index.html#binary-compatibility__table-toolkit-driver).

2. Ensure that the CUDA toolkit is installed and compatible with your NVIDIA driver. For details on CUDA compatibility, please refer [here](https://docs.nvidia.com/deploy/cuda-compatibility/index.html#binary-compatibility__table-toolkit-driver).

3. For Linux, it is crucial to add the `.so` libraries of CUDA and the CUDA-compatible driver to the `LD_LIBRARY_PATH` environment variable. On Windows, ensure that the `.dll` libraries of CUDA and the CUDA-compatible driver are included in the `PATH` environment variable. Usually, installing CUDA on Windows adds this automatically, but if you do not see it, you can add it manually by referring [here](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html#environment-setup).

:::

## Checking GPU Settings in Jan

1. To check the current GPU settings detected by Jan, navigate to `Settings` > `Advanced` > `Open App Directory`.

<br></br>

![]()

<br></br>

2. Open the `settings.json` file under the `settings` folder. The following is an example of the `settings.json` file:

<br></br>

```json title="~/jan/settings/settings.json"
{
  "notify": true,
  "run_mode": "gpu",
  "nvidia_driver": {
    "exist": true,
    "version": "531.18"
  },
  "cuda": {
    "exist": true,
    "version": "12"
  },
  "gpus": [
    {
      "id": "0",
      "vram": "12282"
    },
    {
      "id": "1",
      "vram": "6144"
    },
    {
      "id": "2",
      "vram": "6144"
    }
  ],
  "gpu_highest_vram": "0"
}
```

:::tip

Troubleshooting tips:

- Ensure the `nvidia_driver` and `cuda` fields indicate that the required software is installed.
- If the `gpus` field is empty or does not list your GPU, verify the installation of the NVIDIA driver and CUDA toolkit.
- For further assistance, please share the `settings.json` with us.

:::

## Tested Configurations

- Windows 11 Pro 64-bit, NVIDIA GeForce RTX 4070ti GPU, CUDA 12.2, NVIDIA driver 531.18 (bare metal)
- Ubuntu 22.04 LTS, NVIDIA GeForce RTX 4070ti GPU, CUDA 12.2, NVIDIA driver 545 (bare metal)
- Ubuntu 20.04 LTS, NVIDIA GeForce GTX 1660ti GPU, CUDA 12.1, NVIDIA driver 535 (Proxmox VM with GPU passthrough)
- Ubuntu 18.04 LTS, NVIDIA GeForce GTX 1660ti GPU, CUDA 12.1, NVIDIA driver 535 (Proxmox VM with GPU passthrough)

## Common Issues and Solutions

1. If the issue persists, please install the [Nightly version](/install/nightly) instead.

2. If the issue persists, ensure your (V)RAM is accessible by the application; some systems use virtual RAM and need additional configuration.

3. If you are facing issues on RTX cards, please update to an NVIDIA driver that supports CUDA 11.7 or higher, and ensure that the CUDA path is added to the environment variables.

4. Get help in the [Jan Discord](https://discord.gg/mY69SZaMaC).
---
title: How to Get Error Logs
slug: /troubleshooting/how-to-get-error-logs
description: How to get error logs.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    troubleshooting,
    error logs,
    app logs,
    server logs,
  ]
---

To get the error logs of Jan, navigate to the `~/jan/logs` directory through `Settings` > `Advanced` > `Open App Directory`.

- Open the `app.log` file if you are using the UI.
- Open the `server.log` file if you are using the local API server.

```bash
# Using the UI
tail -n 50 ~/jan/logs/app.log

# Using the local API server
tail -n 50 ~/jan/logs/server.log
```
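To narrow a long log down to error lines, you can filter it. A sketch:

```bash
# Show the 20 most recent lines mentioning an error
grep -i "error" ~/jan/logs/app.log | tail -n 20
```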

:::note

- When sharing logs or error information, make sure to redact any private or sensitive information.

:::

If you have any questions or are looking for support, please don't hesitate to contact us via our [Discord community](https://discord.gg/Dt7MxDyNNZ) or create a [new issue in our GitHub repository](https://github.com/janhq/jan/issues/new/choose).
---
title: Permission Denied
slug: /troubleshooting/permission-denied
description: Permission denied.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    troubleshooting,
    permission denied,
  ]
---

When you run Jan, you may encounter the following error:

```bash
Uncaught (in promise) Error: Error invoking layout-480796bff433a3a3.js:538 remote method 'installExtension':
Error Package /Applications/Jan.app/Contents/Resources/app.asar.unpacked/pre-install/janhq-assistant-extension-1.0.0.tgz does not contain a valid manifest:
Error EACCES: permission denied, mkdtemp '/Users/username/.npm/_cacache/tmp/ueCMn4'
```

This error indicates a permission issue during the installation process. To fix it, run the following command to change ownership of the `~/.npm` directory to the current user:

```bash
sudo chown -R $(whoami) ~/.npm
```
---
title: Unexpected Token
slug: /troubleshooting/unexpected-token
description: Unexpected token is not a valid JSON
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    troubleshooting,
    unexpected token,
  ]
---

You may receive the error response `Error occurred: Unexpected token '<', "<!DOCTYPE"... is not valid JSON` when you start a chat with OpenAI models.

1. Check that you have added an OpenAI API key. You can get an API key from OpenAI's [developer platform](https://platform.openai.com/). Alternatively, we recommend downloading a local model from the Jan Hub, which remains free to use and runs on your own computer! To rule out a bad key, you can also test it outside Jan, as shown in the sketch below.

2. Using a VPN may help fix the issue.
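A sketch for testing the key directly, assuming the standard OpenAI models endpoint:

```bash
# A JSON model list means the key is accepted; an auth error means the key is the problem
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer <your-openai-api-key>"
```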
---
title: Undefined Issue
slug: /troubleshooting/undefined-issue
description: Undefined issue troubleshooting guide.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    troubleshooting,
    undefined issue,
  ]
---

You may encounter an "undefined" issue when using Jan. Here are some troubleshooting steps to help you resolve it.

1. Try wiping the Jan folder, then reopen the Jan app and see if the issue persists.
2. If the issue persists, go to `~/jan/extensions/@janhq/inference-nitro-extension/dist/bin/<your-os>/nitro`, run nitro manually (see the sketch after this list), and check for any error messages.
3. Resolve the error messages you get from nitro and see if the issue persists.
4. Reopen the Jan app and see if the issue is resolved.
5. If the issue persists, please share the [app logs](https://jan.ai/troubleshooting/how-to-get-error-logs/) with us via the [Jan Discord](https://discord.gg/mY69SZaMaC).
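For step 2, a sketch of running the bundled nitro binary by hand; the exact `<your-os>` folder name depends on your platform and Jan version:

```bash
# Adjust <your-os> to the platform folder present on your machine
cd ~/jan/extensions/@janhq/inference-nitro-extension/dist/bin/<your-os>
./nitro
```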
|
||||
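A minimal sketch of step 2 on macOS; the per-OS folder name (`mac-arm64` here) is an assumption and varies by platform and architecture:

```bash
# Run the bundled nitro binary directly so any startup errors surface in the terminal
cd ~/jan/extensions/@janhq/inference-nitro-extensions/dist/bin/mac-arm64  # substitute your <your-os> folder
./nitro
```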
@ -1,21 +0,0 @@
---
title: Troubleshooting
slug: /guides/troubleshooting/
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    troubleshooting,
  ]
---

import DocCardList from "@theme/DocCardList";

<DocCardList />
@ -1,101 +0,0 @@
---
title: HTTPS Proxy
slug: /guides/advanced-settings/https-proxy
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    advanced-settings,
    https-proxy,
  ]
---

In this guide, we will show you how to set up your own HTTPS proxy server and configure Jan to use it.

## Why HTTPS Proxy?

An HTTPS proxy helps you maintain your privacy and security while browsing the internet, and it can also help you circumvent geographical restrictions.

## Setting Up Your Own HTTPS Proxy Server

This section gives a high-level overview of how to set up your own HTTPS proxy server. It focuses on Squid, a popular open-source proxy server, but there are other options you might consider based on your needs and preferences.

### Step 1: Choosing a Server

First, choose a server to host your proxy. We recommend using a cloud provider like Amazon AWS, Google Cloud, Microsoft Azure, or DigitalOcean. Ensure that your server has a public IP address and is accessible from the internet.
### Step 2: Installing Squid

On a Debian-based system such as Ubuntu, install Squid from the package manager:

```bash
sudo apt-get update
sudo apt-get install squid
```
### Step 3: Configure Squid for HTTPS

To enable HTTPS, you will need to configure Squid with SSL support.

- Generate an SSL certificate

Squid requires an SSL certificate to handle HTTPS traffic. You can generate a self-signed certificate or obtain one from a Certificate Authority (CA). For a self-signed certificate, you can use OpenSSL:

```bash
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout squid-proxy.pem -out squid-proxy.pem
```

- Configure Squid to use the SSL certificate: Edit the Squid configuration file `/etc/squid/squid.conf` to include the path to your SSL certificate and enable the HTTPS port:

```bash
http_port 3128 ssl-bump cert=/path/to/your/squid-proxy.pem
ssl_bump server-first all
ssl_bump bump all
```

- Enable SSL Bumping: To intercept HTTPS traffic, Squid uses a process called SSL Bumping, which allows it to decrypt and re-encrypt HTTPS traffic. To enable SSL Bumping, ensure the `ssl_bump` directives are configured correctly in your `squid.conf` file.
### Step 4 (Optional): Configure ACLs and Authentication

- Access Control Lists (ACLs): You can define rules to control who can access your proxy. This is done by editing the `squid.conf` file and defining ACLs:

```bash
acl allowed_ips src "/etc/squid/allowed_ips.txt"
http_access allow allowed_ips
```
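The referenced `/etc/squid/allowed_ips.txt` is a plain text file with one client IP address or CIDR range per line. A sketch with placeholder addresses from the reserved documentation ranges:

```bash
# /etc/squid/allowed_ips.txt - example contents (placeholder addresses)
203.0.113.7
198.51.100.0/24
```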
- Authentication: If you want to add an authentication layer, Squid supports several authentication schemes. A basic authentication setup might look like this:

```bash
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwords
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
```
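The `/etc/squid/passwords` file uses the NCSA httpd password format, which you can create with the `htpasswd` utility (shipped in the `apache2-utils` package on Debian-based systems); the username below is a placeholder:

```bash
# Create the password file with a first user; htpasswd prompts for the password
sudo apt-get install apache2-utils
sudo htpasswd -c /etc/squid/passwords your-username
```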
### Step 5: Restart and Test Your Proxy

After configuring, restart Squid to apply the changes:

```bash
sudo systemctl restart squid
```

To test, configure your browser or another client to use the proxy server with its IP address and port (default is 3128). Check if you can access the internet through your proxy.
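For a quick command-line check, assuming `<your-server-ip>` stands in for your server's public IP; with a self-signed certificate and SSL bumping you may also need curl's `-k`/`--insecure` flag:

```bash
# Route a request through the proxy; a successful response means the proxy is working
curl -x http://<your-server-ip>:3128 https://example.com
```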
:::tip

Tips for securing your proxy:

- Firewall rules: Ensure that only intended users or IP addresses can connect to your proxy server. This can be achieved by setting up appropriate firewall rules.
- Regular updates: Keep your server and proxy software updated to ensure that you are protected against known vulnerabilities.
- Monitoring and logging: Monitor your proxy server for unusual activity and enable logging to keep track of the traffic passing through your proxy.

:::
## Setting Up Jan to Use Your HTTPS Proxy

Once your HTTPS proxy server is set up, you can configure Jan to use it. Navigate to `Settings` > `Advanced Settings` and specify the HTTPS proxy (proxy auto-configuration and SOCKS are not supported).

If you are using a self-signed certificate, you can turn on `Ignore SSL Certificates`, which allows self-signed or unverified certificates.


@ -1,65 +0,0 @@
---
title: Advanced Settings
slug: /guides/advanced-settings/
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    advanced-settings,
  ]
---

This guide will show you how to use the advanced settings in Jan.

## Keyboard Shortcuts

Keyboard shortcuts are a great way to speed up your workflow. Here are some of the keyboard shortcuts that you can use in Jan.

| Combination     | Description                                            |
| --------------- | ------------------------------------------------------ |
| `⌘ E`           | Show the list of your models                           |
| `⌘ K`           | Show the list of navigation pages                      |
| `⌘ B`           | Toggle the collapsible left panel                      |
| `⌘ ,`           | Navigate to the settings page                          |
| `Enter`         | Send a message                                         |
| `Shift + Enter` | Insert a new line in the input box                     |
| `Arrow Up`      | Navigate to the previous option (within search dialog) |
| `Arrow Down`    | Navigate to the next option (within search dialog)     |

<br></br>

:::note
`⌘` is the command key on macOS, and `Ctrl` on Windows.
:::
## Experimental Mode

Experimental mode lets you enable experimental features that may be unstable or not fully tested.

## Jan Data Folder

The Jan data folder is where messages, model configurations, and other user data are stored. You can move the data folder to a different location.



## HTTPS Proxy & Ignore SSL Certificate

HTTPS Proxy allows you to use a proxy server to connect to the internet. You can also ignore SSL certificates if you are using a self-signed certificate.
Please check out the guide on [how to set up your own HTTPS proxy server and configure Jan to use it](../advanced-settings/https-proxy) for more information.

## Clear Logs

Clear Logs removes all logs from the Jan application.

## Reset To Factory Default

Reset the application to its original state, deleting all of your usage data, including model customizations and conversation history. This action is irreversible and is recommended only if the application is in a corrupted state.

