Merge branch 'dev' into chore/get-to-3.5-performance
.gitignore (vendored): 4 changes

@@ -31,3 +31,7 @@ extensions/inference-nitro-extension/bin/saved-*
extensions/inference-nitro-extension/bin/*.tar.gz
extensions/inference-nitro-extension/bin/vulkaninfoSDK.exe
extensions/inference-nitro-extension/bin/vulkaninfo

# Turborepo
.turbo
README.md: 10 changes

@@ -76,31 +76,31 @@ Jan is an open-source ChatGPT alternative that runs 100% offline on your computer
<tr style="text-align:center">
  <td style="text-align:center"><b>Experimental (Nightly Build)</b></td>
  <td style="text-align:center">
    <a href='https://delta.jan.ai/latest/jan-win-x64-0.4.7-295.exe'>
      <img src='./docs/static/img/windows.png' style="height:14px; width: 14px" />
      <b>jan.exe</b>
    </a>
  </td>
  <td style="text-align:center">
    <a href='https://delta.jan.ai/latest/jan-mac-x64-0.4.7-295.dmg'>
      <img src='./docs/static/img/mac.png' style="height:15px; width: 15px" />
      <b>Intel</b>
    </a>
  </td>
  <td style="text-align:center">
    <a href='https://delta.jan.ai/latest/jan-mac-arm64-0.4.7-295.dmg'>
      <img src='./docs/static/img/mac.png' style="height:15px; width: 15px" />
      <b>M1/M2</b>
    </a>
  </td>
  <td style="text-align:center">
    <a href='https://delta.jan.ai/latest/jan-linux-amd64-0.4.7-295.deb'>
      <img src='./docs/static/img/linux.png' style="height:14px; width: 14px" />
      <b>jan.deb</b>
    </a>
  </td>
  <td style="text-align:center">
    <a href='https://delta.jan.ai/latest/jan-linux-x86_64-0.4.7-295.AppImage'>
      <img src='./docs/static/img/linux.png' style="height:14px; width: 14px" />
      <b>jan.AppImage</b>
    </a>
@@ -3,3 +3,4 @@ UMAMI_PROJECT_API_KEY=xxxx

UMAMI_APP_URL=xxxx
ALGOLIA_API_KEY=xxxx
ALGOLIA_APP_ID=xxxx
GITHUB_ACCESS_TOKEN=xxxx
@@ -24,7 +24,7 @@ keywords:

Ensure your system meets the following specifications to guarantee a smooth development experience:

- Hardware Requirements

### System Requirements
@@ -18,7 +18,7 @@ keywords:

The following docs are aimed at developers who want to build extensions on top of the Jan Framework.

:::tip
If you are interested in **contributing to the framework's Core SDK itself**, such as adding new drivers, runtimes, and infrastructure-level support, please refer to the [framework docs](/developer/framework) instead.
:::

## Extensions
@@ -1,6 +1,6 @@

---
title: Engineering Specs
slug: /developer/engineering
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
@@ -1,6 +1,6 @@

---
title: Product Specs
slug: /developer/product
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
@@ -1,6 +1,18 @@

---
title: Framework
slug: /developer/framework/
description: Jan Docs | Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
  ]
---

The following low-level docs are aimed at core contributors.
(Seven image assets moved; dimensions and sizes unchanged.)
@@ -1,61 +0,0 @@

---
title: Overview
slug: /guides
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
  ]
---

The following docs are aimed at end users who want to troubleshoot or learn how to use the **Jan Desktop** application better.

:::tip
If you are interested in building extensions, please refer to the [developer docs](/developer) instead (WIP).

If you are interested in contributing to the underlying framework, please refer to the [framework docs](/docs) instead.
:::

## Jan Desktop

The desktop client is a ChatGPT alternative that runs on your own computer, with a [local API server](/guides/using-server).

## Features

- Compatible with [open-source models](/guides/using-models) (GGUF via [llama.cpp](https://github.com/ggerganov/llama.cpp), TensorRT via [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM), and [remote APIs](https://platform.openai.com/docs/api-reference))
- Compatible with most OSes: [Windows](/install/windows/), [Mac](/install/mac), [Linux](/install/linux), with GPU acceleration through [llama.cpp](https://github.com/ggerganov/llama.cpp)
- Stores data in [open file formats](/developer/file-based)
- Local API [server mode](/guides/using-server)
- Customizable via [extensions](/developer/build-extension)
- And more in the [roadmap](https://github.com/orgs/janhq/projects/5/views/16). Join us on [Discord](https://discord.gg/5rQ2zTv3be) and tell us what you want to see!

## Why Jan?

We believe in the need for an open-source AI ecosystem.

We're focused on building infra, tooling, and [custom models](https://huggingface.co/janhq) to allow open-source AIs to compete on a level playing field with proprietary offerings.

Read more about our mission and culture [here](/about).

#### 💻 Own your AI

Jan runs 100% on your own machine, predictably, privately, and offline. No one else can see your conversations, not even us.

#### 🏗️ Extensions

Jan ships with a local-first, AI-native, and cross-platform [extensions framework](/developer/build-extension). Developers can extend and customize everything from functionality to UI to branding. In fact, Jan's current main features are actually built as extensions on top of this framework.

#### 🗂️ Open File Formats

Jan stores data in your [local filesystem](/developer/file-based). Your data never leaves your computer. You are free to delete, export, or migrate your data, even to a different platform.

#### 🌍 Open Source

Both Jan and [Nitro](https://nitro.jan.ai), our lightweight inference engine, are licensed under the open-source [AGPLv3 license](https://github.com/janhq/jan/blob/main/LICENSE).
@@ -1,98 +0,0 @@

---
title: Mac
slug: /install/mac
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    installation guide,
  ]
---

# Installing Jan on MacOS

## System Requirements

Ensure that your MacOS version is 13 or higher to run Jan.

## Installation

Jan is available for download via our homepage, [https://jan.ai/](https://jan.ai/).

For MacOS, the download should be available as a `.dmg` file in the following format.

```bash
# Intel Mac
jan-mac-x64-{version}.dmg

# Apple Silicon Mac
jan-mac-arm64-{version}.dmg
```

The typical installation process takes around a minute.

## GitHub Releases

Jan is also available from [Jan's GitHub Releases](https://github.com/janhq/jan/releases) page, with a recommended [latest stable release](https://github.com/janhq/jan/releases/latest).

Within the Releases' assets, you will find the following files for MacOS:

```bash
# Intel Mac (dmg file and zip file)
jan-mac-x64-{version}.dmg
jan-mac-x64-{version}.zip

# Apple Silicon Mac (dmg file and zip file)
jan-mac-arm64-{version}.dmg
jan-mac-arm64-{version}.zip
```

## Uninstall Jan

As Jan is in development mode, you might get stuck on a broken build. To reset your installation:

1. Delete Jan from your `/Applications` folder

2. Delete the application data:

```bash
# Newer versions
rm -rf ~/Library/Application\ Support/jan

# Versions 0.2.0 and older
rm -rf ~/Library/Application\ Support/jan-electron
```

3. Clear the application cache:

```bash
rm -rf ~/Library/Caches/jan*
```

4. Use the following command to find any dangling backend processes:

```bash
ps aux | grep nitro
```

Look for processes like "nitro" and "nitro_arm_64", and kill them one by one with:

```bash
kill -9 <PID>
```
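
If your system provides `pkill` (standard on MacOS), the two steps above can be combined into a single command. This is a convenience sketch, not part of the official steps; the `-f` flag matches against the full command line:

```bash
# Kill every process whose command line contains "nitro"
pkill -9 -f nitro
```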

## Common Questions

### Does Jan run on Apple Silicon machines?

Yes, Jan supports MacOS Arm64 builds that run on Macs with Apple Silicon chipsets. You can install Jan on your Apple Silicon Mac by downloading the `jan-mac-arm64-<version>.dmg` file from [Jan's homepage](https://jan.ai/).

### Which package should I download for my Mac?

Jan supports both Intel and Apple Silicon Macs. To find the appropriate package to download for your Mac, please follow this official guide from Apple: [Get system information about your Mac - Apple Support](https://support.apple.com/guide/mac-help/syspr35536/mac).
@@ -1,73 +0,0 @@

---
title: Windows
slug: /install/windows
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    installation guide,
  ]
---

# Installing Jan on Windows

## System Requirements

Ensure that your system meets the following requirements:

- Windows 10 or higher is required to run Jan.

To enable GPU support, you will need:

- NVIDIA GPU with CUDA Toolkit 11.7 or higher
- NVIDIA driver 470.63.01 or higher

## Installation

Jan is available for download via our homepage, [https://jan.ai](https://jan.ai/).

For Windows, the download should be available as a `.exe` file in the following format.

```bash
jan-win-x64-{version}.exe
```

The typical installation process takes around a minute.

### GitHub Releases

Jan is also available from [Jan's GitHub Releases](https://github.com/janhq/jan/releases) page, with a recommended [latest stable release](https://github.com/janhq/jan/releases/latest).

Within the Releases' assets, you will find the following files for Windows:

```bash
# Windows Installers
jan-win-x64-{version}.exe
```

### Default Installation Directory

By default, Jan is installed in the following directory:

```bash
# Default installation directory
C:\Users\{username}\AppData\Local\Programs\Jan
```

## Uninstalling Jan

To uninstall Jan on Windows, use the [Windows Control Panel](https://support.microsoft.com/en-us/windows/uninstall-or-remove-apps-and-programs-in-windows-4b55f974-2cc6-2d2b-d092-5905080eaf98).

To remove all user data associated with Jan, you can delete the `/jan` directory in Windows' [AppData directory](https://superuser.com/questions/632891/what-is-appdata).

```bash
cd C:\Users\%USERNAME%\AppData\Roaming
rmdir /S jan
```
@@ -1,94 +0,0 @@

---
title: Linux
slug: /install/linux
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    installation guide,
  ]
---

# Installing Jan on Linux

## System Requirements

Ensure that your system meets the following requirements:

- glibc 2.27 or higher (check with `ldd --version`)
- gcc 11, g++ 11, and cpp 11 or higher; refer to this [link](https://jan.ai/guides/troubleshooting/gpu-not-used/#specific-requirements-for-linux) for more information.

To enable GPU support, you will need:

- NVIDIA GPU with CUDA Toolkit 11.7 or higher
- NVIDIA driver 470.63.01 or higher

## Installation

Jan is available for download via our homepage, [https://jan.ai](https://jan.ai/).

For Linux, the download should be available as a `.AppImage` file or a `.deb` file in the following format.

```bash
# AppImage
jan-linux-x86_64-{version}.AppImage

# Debian Linux distribution
jan-linux-amd64-{version}.deb
```

To install Jan on Linux, you should use your package manager's install command or `dpkg`. For Debian/Ubuntu-based distributions, you can install Jan using the following command:

```bash
# Install Jan using dpkg
sudo dpkg -i jan-linux-amd64-{version}.deb

# Install Jan using apt-get
sudo apt-get install ./jan-linux-amd64-{version}.deb
# where jan-linux-amd64-{version}.deb is the path to the Jan package
```

For other Linux distributions, you can launch the AppImage file without installation. To do so, make the AppImage file executable and then run it, either through your file manager's properties dialog or with the following commands:

```bash
# Install Jan using AppImage
chmod +x jan-linux-x86_64-{version}.AppImage
./jan-linux-x86_64-{version}.AppImage
# where jan-linux-x86_64-{version}.AppImage is the path to the Jan package
```

The typical installation process takes around a minute.

### GitHub Releases

Jan is also available from [Jan's GitHub Releases](https://github.com/janhq/jan/releases) page, with a recommended [latest stable release](https://github.com/janhq/jan/releases/latest).

Within the Releases' assets, you will find the following files for Linux:

```bash
# Debian Linux distribution
jan-linux-amd64-{version}.deb

# AppImage
jan-linux-x86_64-{version}.AppImage
```

## Uninstall Jan

To uninstall Jan on Linux, you should use your package manager's uninstall or remove option. For Debian/Ubuntu-based distributions, if you installed Jan via the `.deb` package, you can uninstall Jan using the following command:

```bash
sudo apt-get remove jan
# where jan is the name of the Jan package
```

For other Linux distributions, if you installed Jan via the `.AppImage` file, you can uninstall Jan by deleting the `.AppImage` file.

In case you wish to completely remove all user data associated with Jan after uninstallation, you can delete the user data folders located at `~/jan`. This will return your system to its state prior to the installation of Jan. This method can also be used to reset all settings if you are experiencing any issues with Jan.
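
For example (a sketch; double-check the path before running, as this permanently deletes your downloaded models and threads):

```bash
# Remove all Jan user data (models, threads, settings)
rm -rf ~/jan
```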
@@ -1,91 +0,0 @@

---
title: From Source
slug: /install/from-source
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
  ]
---

# Installing Jan from Source

## Installation

### Pre-requisites

Before proceeding with the installation of Jan from source, ensure that the following software versions are installed on your system:

- Node.js version 20.0.0 or higher
- Yarn version 1.22.0 or higher

### Instructions

:::note

These instructions have been tested on MacOS only.

:::

1. Clone the Jan repository from GitHub and check out the desired branch

```bash
git clone https://github.com/janhq/jan
cd jan
git checkout DESIRED_BRANCH
```

2. Install the required dependencies using Yarn

```bash
yarn install

# Build core module
yarn build:core

# Package base plugins
yarn build:plugins

# Package uikit
yarn build:uikit
```

3. Run the development build and start using Jan

```bash
yarn dev
```

This will start the development server and open the desktop app. During this step, you may encounter notifications about installing base plugins. Simply click `OK` and `Next` to continue.

#### For production build

Build the app for macOS M1/M2 for production; the result is placed in the `dist` folder.

```bash
# Steps 1 and 2 from the previous section
git clone https://github.com/janhq/jan
cd jan
yarn install

# Build core module
yarn build:core

# Package base plugins
yarn build:plugins

# Package uikit
yarn build:uikit

# Build the app
yarn build
```

This completes the installation process for Jan from source. The production-ready app for macOS can be found in the `dist` folder.
@@ -1,123 +0,0 @@

---
title: Docker
slug: /install/docker
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    docker installation,
    cpu mode,
    gpu mode,
  ]
---

# Installing Jan using Docker

### Pre-requisites

:::note

**Supported OS**: Linux, WSL2 Docker

:::

- Docker Engine and Docker Compose are required to run Jan in Docker mode. Follow these [instructions](https://docs.docker.com/engine/install/ubuntu/) to get started with Docker Engine on Ubuntu.

```bash
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh ./get-docker.sh --dry-run
```

- If you intend to run Jan in GPU mode, you need to install `nvidia-driver` and `nvidia-docker2`. Follow the instructions [here](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) for installation.

### Run Jan in Docker Mode

| Docker Compose Profile | Description                                  |
| ---------------------- | -------------------------------------------- |
| `cpu-fs`               | Run Jan in CPU mode with default file system |
| `cpu-s3fs`             | Run Jan in CPU mode with S3 file system      |
| `gpu-fs`               | Run Jan in GPU mode with default file system |
| `gpu-s3fs`             | Run Jan in GPU mode with S3 file system      |

| Environment Variable    | Description                                                                                           |
| ----------------------- | ----------------------------------------------------------------------------------------------------- |
| `S3_BUCKET_NAME`        | S3 bucket name; leave blank for default file system                                                   |
| `AWS_ACCESS_KEY_ID`     | AWS access key ID; leave blank for default file system                                                |
| `AWS_SECRET_ACCESS_KEY` | AWS secret access key; leave blank for default file system                                            |
| `AWS_ENDPOINT`          | AWS endpoint URL; leave blank for default file system                                                 |
| `AWS_REGION`            | AWS region; leave blank for default file system                                                       |
| `API_BASE_URL`          | Jan Server URL; set it to your public IP address or domain name. Defaults to `http://localhost:1377`  |
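
For the `s3fs` profiles, these variables can be exported in the shell before invoking Docker Compose, which interpolates them into the compose file. The values below are hypothetical placeholders:

```bash
export S3_BUCKET_NAME=my-jan-bucket
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export AWS_ENDPOINT=https://s3.amazonaws.com
export AWS_REGION=us-east-1
export API_BASE_URL=http://localhost:1377

docker compose --profile cpu-s3fs up -d
```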

- **Option 1**: Run Jan in CPU mode

```bash
# CPU mode with default file system
docker compose --profile cpu-fs up -d

# CPU mode with S3 file system
docker compose --profile cpu-s3fs up -d
```

- **Option 2**: Run Jan in GPU mode

- **Step 1**: Check CUDA compatibility with your NVIDIA driver by running `nvidia-smi` and checking the CUDA version in the output

```bash
nvidia-smi

# Output
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 531.18                 Driver Version: 531.18       CUDA Version: 12.1     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                     TCC/WDDM  | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf        Pwr:Usage/Cap    |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 4070 Ti   WDDM  | 00000000:01:00.0  On |                  N/A |
|  0%   44C    P8             16W / 285W  |  1481MiB / 12282MiB  |      2%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce GTX 1660 Ti   WDDM  | 00000000:02:00.0 Off |                  N/A |
|  0%   49C    P8             14W / 120W  |     0MiB /  6144MiB  |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   2  NVIDIA GeForce GTX 1660 Ti   WDDM  | 00000000:05:00.0 Off |                  N/A |
| 29%   38C    P8             11W / 120W  |     0MiB /  6144MiB  |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
```

- **Step 2**: Visit the [NVIDIA NGC Catalog](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/cuda/tags) and find the smallest minor version of the image tag that matches your CUDA version (e.g., 12.1 -> 12.1.0)

- **Step 3**: Update line 5 of `Dockerfile.gpu` with the image tag from step 2 (e.g., change `FROM nvidia/cuda:12.2.0-runtime-ubuntu22.04 AS base` to `FROM nvidia/cuda:12.1.0-runtime-ubuntu22.04 AS base`)

- **Step 4**: Run the following command to start Jan in GPU mode

```bash
# GPU mode with default file system
docker compose --profile gpu-fs up -d

# GPU mode with S3 file system
docker compose --profile gpu-s3fs up -d
```

This will start the web server, and you can access Jan at `http://localhost:3000`.
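
To confirm the containers came up, a quick check (run from the directory containing the compose file):

```bash
docker compose ps        # list services and their state
docker compose logs -f   # follow the server logs
```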

:::warning

- The RAG feature is not yet supported in Docker mode with s3fs.

:::
@@ -1,56 +0,0 @@

---
title: Hardware Requirements
slug: /guides/install/hardware
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
  ]
---

Jan is designed to be lightweight and able to run Large Language Models (LLMs) out-of-the-box.

The current download size is less than 150 MB, and the installed app occupies ~300 MB of disk space.

To ensure optimal performance, please see the following system requirements:

## Disk Space

- Minimum requirement
  - At least 5 GB of free disk space is required to accommodate the download, storage, and management of open-source LLM models.
- Recommended
  - For an optimal experience and to run most available open-source LLM models on Jan, it is recommended to have 10 GB of free disk space.

## RAM and GPU VRAM

The amount of RAM on your system plays a crucial role in determining the size and complexity of LLM models you can effectively run. Jan can be utilized on traditional computers where RAM is a key resource. For enhanced performance, Jan also supports GPU acceleration, utilizing the VRAM of your graphics card.

## Best Models for your V/RAM

The RAM and GPU VRAM requirements depend on the size and complexity of the LLM models you intend to run. The following general guidelines, along with the rough estimate after this list, can help you determine the amount of RAM or VRAM you need:

- `8 GB of RAM`: Suitable for running smaller models, like 3B models or quantized 7B models
- `16 GB of RAM (recommended)`: This is considered the "minimum usable models" threshold, particularly for 7B models (e.g., Mistral 7B)
- `Beyond 16 GB of RAM`: Required for handling larger and more sophisticated models, such as 70B models.
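
As a back-of-the-envelope estimate (our sketch, not an official benchmark), the memory needed to load a model is roughly its parameter count times the bytes per quantized weight, plus runtime overhead:

```bash
# memory ≈ parameters × bytes_per_weight × ~1.2 (context cache and runtime overhead)
# e.g., a 7B model quantized to 4 bits (0.5 bytes per weight):
#   7e9 × 0.5 × 1.2 ≈ 4.2 GB, which fits within the 8 GB guideline above
```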

## Architecture

Jan is designed to run on multiple architectures, ensuring versatility and widespread usability. The supported architectures include:

### CPU Support

- `x86`: Jan is well-suited for systems with x86 architecture, which is commonly found in traditional desktops and laptops. It ensures smooth performance on a variety of devices using x86 processors.
- `ARM`: Jan is optimized to run efficiently on ARM-based systems, extending compatibility to a broad range of devices using ARM processors.

### GPU Support

- `NVIDIA`
- `AMD`
- `ARM64 Mac`
@@ -1,49 +0,0 @@

---
title: Nightly Release
slug: /install/nightly
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    nightly release,
  ]
---

:::warning

- Nightly releases are cutting-edge versions that include the latest features. However, they are highly unstable and may contain bugs.

:::

## Where to Find Nightly Releases

- **Jan's GitHub Repository**: Visit the [Download section](https://github.com/janhq/jan?tab=readme-ov-file#download) of Jan's GitHub repository for the latest nightly release.

- **Discord Channel**: Nightly releases are also announced in our [Discord channel](https://discord.com/channels/1107178041848909847/1191638499355537418).

## Automatic Updates

Once you install a nightly build, the application will automatically prompt you to update each time it is restarted, ensuring you always have the latest version.

## Running Stable and Nightly Versions Simultaneously

If you wish to use both the stable and nightly versions of Jan, follow these steps:

1. Install the stable version as usual.
2. For the nightly build, choose a different installation directory to avoid conflicts.
3. Ensure that you clearly label or create shortcuts for each version to avoid confusion.

<br></br>

:::tip

- Engage with [our community on Discord](https://discord.gg/Dt7MxDyNNZ) to share feedback or get support for any issues you encounter.

:::
@@ -1,31 +0,0 @@

---
title: Antivirus Testing
slug: /guides/install/antivirus-compatibility-testing
description: Antivirus compatibility testing documentation
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    antivirus compatibility,
  ]
---

As a part of our release process, we run antivirus compatibility tests for Jan v0.4.4 and onwards. This documentation includes a matrix that correlates the Jan App version with the tested antivirus versions.

## Antivirus Software Tested

The following summarizes ongoing testing targets:

| Antivirus          | Version      | Target Result                    |
| ------------------ | ------------ | -------------------------------- |
| Bitdefender        | 27.0.27.125  | Scanned and 0 threat(s) detected |
| McAfee             | 4.21.0.0     | Scanned and 0 threat(s) detected |
| Microsoft Defender | 1.403.2259.0 | Scanned and 0 threat(s) detected |

To report issues, false positives, or to request additional testing, please email devops@jan.ai.
@@ -1,51 +0,0 @@

---
title: Installation
slug: /install
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
  ]
---

import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";

In this quickstart, we'll show you how to:

- Download the Jan Desktop client: Mac, Windows, Linux (and toaster) compatible
- Download the Nightly (unstable) version
- Build the application from source

## Setup

### Installation

- To download the latest stable release, visit https://jan.ai/, or see the [GitHub Releases](https://github.com/janhq/jan/releases) to download any previous release.

- To download a nightly release (highly unstable but lots of new features), please check out the [Download section](https://github.com/janhq/jan?tab=readme-ov-file#download) on our repository.

- For a detailed installation guide for your operating system, see the following:

<Tabs groupId="operating-systems">
  <TabItem value="mac" label="macOS">
    [Mac installation guide](/install/mac)
  </TabItem>
  <TabItem value="win" label="Windows">
    [Windows installation guide](/install/windows)
  </TabItem>
  <TabItem value="linux" label="Linux">
    [Linux installation guide](/install/linux)
  </TabItem>
</Tabs>

- To build Jan Desktop from scratch (and have the right to tinker!)

See the [Build from Source](/install/from-source) guide.
@@ -1,56 +0,0 @@

---
title: Manage Chat History
slug: /guides/chatting/manage-history/
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    manage-chat-history,
  ]
---

Jan offers a convenient and private way to interact with a conversational AI locally on your computer. This guide will walk you through how to manage your chat history with Jan, ensuring your interactions remain private and organized.

## Viewing Chat History

1. Navigate to the main dashboard.
2. Locate the list of threads on the left side of the screen. This list shows all your conversations.
3. Select a thread to view the conversation in the main chat window.
4. Scroll up and down to view the entire chat history in the selected thread.

<br></br>


## Managing Threads via Folders

This feature allows you to directly manage your thread history and configurations.

1. Navigate to the thread that you want to manage via the list of threads on the left side of the dashboard.
2. Click on the three dots (⋮) on the `Thread` section on the right side of the dashboard. There are two options:

- `Reveal in Finder` will open the folder containing the thread history and configurations.
- `View as JSON` will open the `thread.json` file in your default browser.

<br></br>


## Clean Thread

To streamline your conversation view, click on the three dots (⋮) on the thread you want to clean, then select `Clean Thread`. This removes all messages from the thread. It is useful if you want to keep the thread settings but remove the messages from the chat window.

<br></br>


## Delete Thread

To delete a thread, click on the three dots (⋮) on the thread you want to delete, then select `Delete Thread`. This removes the thread from the list of threads.

<br></br>

@@ -1,23 +0,0 @@

---
title: Chatting
slug: /guides/chatting/
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    chatting,
  ]
---

This guide is designed to help you maximize your experience with Jan, covering everything from starting engaging threads to managing your chat history effectively.

import DocCardList from "@theme/DocCardList";

<DocCardList />
@@ -1,51 +0,0 @@

---
title: Install Models from the Hub
slug: /guides/using-models/install-from-hub
description: Guide to install models from the Hub.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    install model,
  ]
---

In this guide, we will walk through the process of installing a **Large Language Model (LLM)** from the Hub.

## Steps to Install Models from the Hub

### 1. Explore and Select a Model

Explore the available LLMs by scrolling through the Hub or using the **Search Bar**.



Use the **Filter Button** to choose a **recommended LLM**; recommendations are based on [RAM usage](https://github.com/janhq/jan/issues/1384).

| Name        | Description                           |
| ----------- | ------------------------------------- |
| All Models  | Show all LLMs available               |
| Recommended | Show the recommended LLMs             |
| Downloaded  | Show the LLMs that have been downloaded |



If you want to use a model that is not available in the Hub, you can also [import the model manually](./02-import-manually.mdx).

### 2. Download the Model

Once you've identified the desired LLM, simply click the **Download** button to initiate the download. A progress bar will appear to indicate the download progress.



### 3. Use the Model

Once the download completes, you can start using the model by clicking the **Use** button.


@@ -1,242 +0,0 @@

---
title: Import Models Manually
slug: /guides/using-models/import-manually
description: Guide to manually import a local model into Jan.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    import-models-manually,
    local model,
  ]
---

:::caution
This is currently under development.
:::

{/* Imports */}
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";

In this section, we will show you how to import a GGUF model from [HuggingFace](https://huggingface.co/), using our latest model, [Trinity](https://huggingface.co/janhq/trinity-v1-GGUF), as an example.

> We are fast shipping a UI to make this easier, but it's a bit manual for now. Apologies.

## Import Models Using Absolute Filepath (version 0.4.7)

Starting from version 0.4.7, Jan has introduced the capability to import models using an absolute file path. This allows you to import models from any directory on your computer. Please check the [import models using absolute filepath](../import-models-using-absolute-filepath) guide for more information.

## Manually Importing a Downloaded Model (nightly versions and v0.4.4+)

### 1. Create a Model Folder

Navigate to the `~/jan/models` folder. You can find this folder by going to `App Settings` > `Advanced` > `Open App Directory`.

<Tabs groupId="operating-systems">
  <TabItem value="mac" label="macOS">

```sh
cd ~/jan/models
```

  </TabItem>
  <TabItem value="win" label="Windows">

```sh
cd C:/Users/<your_user_name>/jan/models
```

  </TabItem>
  <TabItem value="linux" label="Linux">

```sh
cd ~/jan/models
```

  </TabItem>
</Tabs>

In the `models` folder, create a folder with the name of the model.

<Tabs groupId="operating-systems">
  <TabItem value="mac" label="macOS">

```sh
mkdir trinity-v1-7b
```

  </TabItem>
  <TabItem value="win" label="Windows">

```sh
mkdir trinity-v1-7b
```

  </TabItem>
  <TabItem value="linux" label="Linux">

```sh
mkdir trinity-v1-7b
```

  </TabItem>
</Tabs>

### 2. Drag & Drop the Model

Drag and drop your model binary into this folder, ensuring that the `.gguf` filename matches the folder name, e.g. `models/modelname/modelname.gguf`; the expected layout is sketched below.
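
For example, for a hypothetical model folder named `trinity-v1-7b`, the layout would be:

```
jan/
└── models/
    └── trinity-v1-7b/
        └── trinity-v1-7b.gguf
```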

### 3. Voila

If your model doesn't show up in the Model Selector in conversations, please restart the app.

If that doesn't work, please feel free to join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.

## Manually Importing a Downloaded Model (older versions)

### 1. Create a Model Folder

Navigate to the `~/jan/models` folder. You can find this folder by going to `App Settings` > `Advanced` > `Open App Directory`.

<Tabs groupId="operating-systems">
  <TabItem value="mac" label="macOS">

```sh
cd ~/jan/models
```

  </TabItem>
  <TabItem value="win" label="Windows">

```sh
cd C:/Users/<your_user_name>/jan/models
```

  </TabItem>
  <TabItem value="linux" label="Linux">

```sh
cd ~/jan/models
```

  </TabItem>
</Tabs>

In the `models` folder, create a folder with the name of the model.

<Tabs groupId="operating-systems">
  <TabItem value="mac" label="macOS">

```sh
mkdir trinity-v1-7b
```

  </TabItem>
  <TabItem value="win" label="Windows">

```sh
mkdir trinity-v1-7b
```

  </TabItem>
  <TabItem value="linux" label="Linux">

```sh
mkdir trinity-v1-7b
```

  </TabItem>
</Tabs>

### 2. Create a Model JSON

Jan follows a folder-based, [standard model template](/docs/engineering/models) called `model.json` to persist the model configurations on your local filesystem.

This means that you can easily reconfigure your models, export them, and share your preferences transparently.

<Tabs groupId="operating-systems">
  <TabItem value="mac" label="macOS">

```sh
cd trinity-v1-7b
touch model.json
```

  </TabItem>
  <TabItem value="win" label="Windows">

```sh
cd trinity-v1-7b
echo {} > model.json
```

  </TabItem>
  <TabItem value="linux" label="Linux">

```sh
cd trinity-v1-7b
touch model.json
```

  </TabItem>
</Tabs>

Edit `model.json` and include the following configurations:

- Ensure the `id` property matches the folder name you created.
- Ensure the GGUF filename matches the `id` property exactly.
- Ensure the `source.url` property is the direct binary download link ending in `.gguf`. In HuggingFace, you can find the direct links in the `Files and versions` tab.
- Ensure you are using the correct `prompt_template`. This is usually provided on the HuggingFace model's description page.

```json title="model.json"
{
  // highlight-start
  "sources": [
    {
      "filename": "trinity-v1.Q4_K_M.gguf",
      "url": "https://huggingface.co/janhq/trinity-v1-GGUF/resolve/main/trinity-v1.Q4_K_M.gguf"
    }
  ],
  "id": "trinity-v1-7b",
  // highlight-end
  "object": "model",
  "name": "Trinity-v1 7B Q4",
  "version": "1.0",
  "description": "Trinity is an experimental model merge of GreenNodeLM & LeoScorpius using the Slerp method. Recommended for daily assistance purposes.",
  "format": "gguf",
  "settings": {
    "ctx_len": 4096,
    // highlight-next-line
    "prompt_template": "{system_message}\n### Instruction:\n{prompt}\n### Response:",
    "llama_model_path": "trinity-v1.Q4_K_M.gguf"
  },
  "parameters": {
    "max_tokens": 4096
  },
  "metadata": {
    "author": "Jan",
    "tags": ["7B", "Merged"],
    "size": 4370000000
  },
  "engine": "nitro"
}
```

### 3. Download the Model

Restart Jan and navigate to the Hub. Locate your model and click the `Download` button to download the model binary.



Your model is now ready to use in Jan.

## Assistance and Support

If you have questions or are looking for more preconfigured GGUF models, please feel free to join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.
@@ -1,84 +0,0 @@

---
title: Import Models Using Absolute Filepath
slug: /guides/using-models/import-models-using-absolute-filepath
description: Guide to import model using absolute filepath in Jan.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    import-models-manually,
    absolute-filepath,
  ]
---

In this guide, we will walk you through the process of importing a model using an absolute filepath in Jan, using our latest model, [Trinity](https://huggingface.co/janhq/trinity-v1-GGUF), as an example.

### 1. Get the Absolute Filepath of the Model

After downloading a `.gguf` model, note the absolute filepath of the model file; a sketch of how to obtain it follows.
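
For example (a sketch; `realpath` is available on Linux and recent MacOS, and the filename is a placeholder for your own download):

```bash
# macOS / Linux: print the absolute path of the downloaded file
realpath ~/Downloads/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf

# On Windows, copy the path from File Explorer instead, and remember to
# escape backslashes as \\ when pasting it into model.json (see the warning below)
```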

### 2. Configure the Model JSON

1. Navigate to the `~/jan/models` folder.
2. Create a folder named `<modelname>`, for example, `tinyllama`.
3. Create a `model.json` file inside the folder, including the following configurations:

- Ensure the `id` property matches the folder name you created.
- Ensure the `url` property, which normally holds a direct binary download link ending in `.gguf`, is set to the absolute filepath of the model file.
- Ensure the `engine` property is set to `nitro`.

```json
{
  "sources": [
    {
      "filename": "tinyllama.gguf",
      // highlight-next-line
      "url": "<absolute-filepath-of-the-model-file>"
    }
  ],
  "id": "tinyllama-1.1b",
  "object": "model",
  "name": "(Absolute Path) TinyLlama Chat 1.1B Q4",
  "version": "1.0",
  "description": "TinyLlama is a tiny model with only 1.1B parameters. It's a good model for less powerful computers.",
  "format": "gguf",
  "settings": {
    "ctx_len": 4096,
    "prompt_template": "<|system|>\n{system_message}<|user|>\n{prompt}<|assistant|>",
    "llama_model_path": "tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf"
  },
  "parameters": {
    "temperature": 0.7,
    "top_p": 0.95,
    "stream": true,
    "max_tokens": 2048,
    "stop": [],
    "frequency_penalty": 0,
    "presence_penalty": 0
  },
  "metadata": {
    "author": "TinyLlama",
    "tags": ["Tiny", "Foundation Model"],
    "size": 669000000
  },
  "engine": "nitro"
}
```

:::warning

- If you are using Windows, you need to use double backslashes in the `url` property, for example: `C:\\Users\\username\\filename.gguf`.

:::

### 3. Start the Model

Restart Jan and navigate to the Hub. Locate your model and click the Use button.


@@ -1,166 +0,0 @@

---
title: Integrate With a Remote Server
slug: /guides/using-models/integrate-with-remote-server
description: Guide to integrate with a remote server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    import-models-manually,
    remote server,
    OAI compatible,
  ]
---

:::caution
This is currently under development.
:::

In this guide, we will show you how to configure Jan as a client and point it to any remote or local (self-hosted) API server.

## OpenAI Platform Configuration

In this section, we will show you how to configure Jan with the OpenAI Platform, using the OpenAI GPT 3.5 Turbo 16k model as an example.

### 1. Create a Model JSON

Navigate to the `~/jan/models` folder. Create a folder named `gpt-3.5-turbo-16k` and create a `model.json` file inside the folder, including the following configurations:

- Ensure the filename is `model.json`.
- Ensure the `id` property matches the folder name you created.
- Ensure the `format` property is set to `api`.
- Ensure the `engine` property is set to `openai`.
- Ensure the `state` property is set to `ready`.

```json title="~/jan/models/gpt-3.5-turbo-16k/model.json"
{
  "sources": [
    {
      "filename": "openai",
      "url": "https://openai.com"
    }
  ],
  // highlight-next-line
  "id": "gpt-3.5-turbo-16k",
  "object": "model",
  "name": "OpenAI GPT 3.5 Turbo 16k",
  "version": "1.0",
  "description": "OpenAI GPT 3.5 Turbo 16k model is extremely good",
  // highlight-start
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "OpenAI",
    "tags": ["General", "Big Context Length"]
  },
  "engine": "openai"
  // highlight-end
}
```

:::tip

- You can find the list of available models in the [OpenAI Platform](https://platform.openai.com/docs/models/overview).
- Please note that the `id` property needs to match the model name in the list. For example, if you want to use [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo), you need to set the `id` property to `gpt-4-1106-preview`.

:::

### 2. Configure OpenAI API Keys

You can find your API keys in the [OpenAI Platform](https://platform.openai.com/api-keys) and set the OpenAI API keys in the `~/jan/engines/openai.json` file.

```json title="~/jan/engines/openai.json"
{
  "full_url": "https://api.openai.com/v1/chat/completions",
  // highlight-next-line
  "api_key": "sk-<your key here>"
}
```

### 3. Start the Model

Restart Jan and navigate to the Hub. Then, select your configured model and start the model.



## Engines with OAI Compatible Configuration

In this section, we will show you how to configure a client connection to a remote or local server, using Jan's API server running the model `mistral-ins-7b-q4` as an example.

:::note

- Please note that at the moment, you can only connect to one OpenAI-compatible endpoint at a time.

:::

### 1. Configure a Client Connection

Navigate to the `~/jan/engines` folder and modify the `openai.json` file. Please note that at the moment, the code supporting any OpenAI-compatible endpoint only reads the `engines/openai.json` file; it will not pick up any other files in this directory.

Configure the `full_url` property with the endpoint of the server that you want to connect to. For example, if you want to connect to Jan's API server, you can configure it as follows:

```json title="~/jan/engines/openai.json"
{
  // highlight-start
  // "full_url": "https://<server-ip-address>:<port>/v1/chat/completions"
  "full_url": "https://<server-ip-address>:1337/v1/chat/completions"
  // highlight-end
  // Skip api_key if your local server does not require authentication
  // "api_key": "sk-<your key here>"
}
```
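
To sanity-check the connection before creating a model entry, you can query the endpoint directly. This is a sketch: it assumes the server follows OpenAI's route layout (as Jan's API server does) and speaks plain HTTP; adjust the scheme, host, and port to your setup:

```bash
curl -sS http://<server-ip-address>:1337/v1/models
```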

### 2. Create a Model JSON

Navigate to the `~/jan/models` folder. Create a folder named `mistral-ins-7b-q4` and create a `model.json` file inside the folder, including the following configurations:

- Ensure the filename is `model.json`.
- Ensure the `id` property matches the folder name you created.
- Ensure the `format` property is set to `api`.
- Ensure the `engine` property is set to `openai`.
- Ensure the `state` property is set to `ready`.

```json title="~/jan/models/mistral-ins-7b-q4/model.json"
{
  "sources": [
    {
      "filename": "janai",
      "url": "https://jan.ai"
    }
  ],
  // highlight-next-line
  "id": "mistral-ins-7b-q4",
  "object": "model",
  "name": "Mistral Instruct 7B Q4 on Jan API Server",
  "version": "1.0",
  "description": "Jan integration with remote Jan API server",
  // highlight-next-line
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "MistralAI, The Bloke",
    "tags": ["remote", "awesome"]
  },
  // highlight-start
  "engine": "openai"
  // highlight-end
}
```

### 3. Start the Model

Restart Jan and navigate to the Hub. Locate your model and click the Use button.



## Assistance and Support

If you have questions or are looking for more preconfigured GGUF models, please feel free to join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.
@ -1,80 +0,0 @@
---
title: Customize Engine Settings
slug: /guides/using-models/customize-engine-settings
description: Guide to customize engine settings.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    import-models-manually,
    customize-engine-settings,
  ]
---

{/* Imports */}
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";

In this guide, we will show you how to customize the engine settings.

1. Navigate to the `~/jan/engines` folder. You can find this folder by going to `App Settings` > `Advanced` > `Open App Directory`.

<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">

```sh
cd ~/jan/engines
```

</TabItem>
<TabItem value="win" label="Windows">

```sh
cd C:/Users/<your_user_name>/jan/engines
```

</TabItem>
<TabItem value="linux" label="Linux">

```sh
cd ~/jan/engines
```

</TabItem>
</Tabs>

2. Modify the `nitro.json` file based on your needs. The default settings are shown below.

```json title="~/jan/engines/nitro.json"
{
  "ctx_len": 2048,
  "ngl": 100,
  "cpu_threads": 1,
  "cont_batching": false,
  "embedding": false
}
```

The table below describes the parameters in the `nitro.json` file.

| Parameter       | Type    | Description                                                   |
| --------------- | ------- | ------------------------------------------------------------- |
| `ctx_len`       | Integer | The context length for the model operations.                  |
| `ngl`           | Integer | The number of GPU layers to use.                              |
| `cpu_threads`   | Integer | The number of threads to use for inferencing (CPU mode only). |
| `cont_batching` | Boolean | Whether to use continuous batching.                           |
| `embedding`     | Boolean | Whether to use embedding in the model.                        |

:::tip

- By default, `ngl` is set to 100, which offloads all layers to the GPU. Most Mistral or Llama models have around 30 layers, so to offload only about half of the model to the GPU, set `ngl` to 15.
- To use the embedding feature, include the JSON parameter `"embedding": true`. This enables Nitro to process inferences with embedding capabilities. For a more detailed explanation, please refer to [Embedding in the Nitro documentation](https://nitro.jan.ai/features/embed).
- To use the continuous batching feature to boost throughput and minimize latency in large language model (LLM) inference, please refer to [Continuous Batching in the Nitro documentation](https://nitro.jan.ai/features/cont-batch).

:::
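
For example (an illustrative sketch only; the right values depend on your hardware and model), a machine with eight physical cores and limited VRAM might halve the GPU offload and raise the context length like this:

```json title="~/jan/engines/nitro.json"
{
  "ctx_len": 4096,
  "ngl": 15,
  "cpu_threads": 8,
  "cont_batching": true,
  "embedding": false
}
```

As with the model configurations elsewhere in these guides, restart Jan after editing the file so the engine reloads the settings.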
@ -1,21 +0,0 @@
---
title: Using Models
slug: /guides/using-models/
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    using-models,
  ]
---

import DocCardList from "@theme/DocCardList";

<DocCardList className="DocCardList" />
Before Width: | Height: | Size: 1.5 MiB |
Before Width: | Height: | Size: 2.9 MiB |
Before Width: | Height: | Size: 11 MiB |
Before Width: | Height: | Size: 6.4 MiB |
Before Width: | Height: | Size: 378 KiB |
Before Width: | Height: | Size: 3.8 MiB |
Before Width: | Height: | Size: 348 KiB |
Before Width: | Height: | Size: 372 KiB |
@ -1,72 +0,0 @@
---
title: Start Local Server
slug: /guides/using-server/start-server
description: How to run Jan's built-in API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    local server,
    api server,
  ]
---

Jan ships with a built-in API server that can be used as a drop-in, local replacement for OpenAI's API. You can run your server by following these simple steps.

## Open Local API Server View

Navigate to the Local API Server view by clicking the corresponding icon on the left side of the screen.

<br></br>

![01-local-api-view](./assets/01-local-api-view.gif)

## Choosing a Model

On the top right of your screen under `Model Settings`, set the LLM that your local server will be running. You can choose from any of the models already installed, or pick a new model by clicking `Explore the Hub`.

<br></br>

![02-choose-model](./assets/02-choose-model.gif)

## Server Options

On the left side of your screen, you can set custom server options.

<br></br>

![03-server-options](./assets/03-server-options.gif)

### Local Server Address

By default, Jan is accessible only on localhost (`127.0.0.1`). This means the local server can only be accessed from the same machine the server is running on.

You can make the local server more widely accessible by clicking on the address and choosing `0.0.0.0` instead, which allows the server to be reached from other devices on the local network. This is less secure than localhost, so do it with caution.

### Port

Jan runs on port `1337` by default. You can change the port to any other port number if needed.

### Cross-Origin Resource Sharing (CORS)

Cross-Origin Resource Sharing (CORS) manages resource access on the local server from external domains. It is enabled by default for security, but can be disabled if needed.

### Verbose Server Logs

The center of the screen displays the server logs as the local server runs. This option provides extensive details about server activities.

## Start Server

Click the `Start Server` button on the top left of your screen. You will see the server log display a message such as `Server listening at http://127.0.0.1:1337`, and the `Start Server` button will change to a red `Stop Server` button.

<br></br>

![04-start-server](./assets/04-start-server.gif)

Your server is now running, and you can use the server address and port to make requests to the local server.
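
To verify from a terminal that the server is up, you can request the model list. This is a quick sketch assuming the default `127.0.0.1:1337` address and Jan's OpenAI-compatible `/v1/models` endpoint:

```bash
curl http://127.0.0.1:1337/v1/models
```

A JSON list of models in the response indicates the server is reachable.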
@ -1,102 +0,0 @@
---
title: Using Jan's Built-in API Server
description: How to use Jan's built-in API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    local server,
    api server,
  ]
---

Jan's built-in API server is compatible with [OpenAI's API](https://platform.openai.com/docs/api-reference) and can be used as a drop-in, local replacement. Follow these steps to use the API server.

## Open the API Reference

Jan contains a comprehensive API reference. This reference displays all the available API endpoints, gives you example requests and responses, and allows you to execute them in the browser.

On the top left of your screen, below the red `Stop Server` button, is the blue `API Reference` button. Clicking it will open the reference in your browser.

<br></br>

![01-api-reference](./assets/01-api-reference.gif)

Scroll through the available endpoints to learn what options exist, and try them out by executing the example requests. You can also use the [Jan API Reference](https://jan.ai/api-reference/) on the Jan website.

### Chat

In the Chat section of the API reference, you will see an example JSON request body.

<br></br>

![02-chat-example](./assets/02-chat-example.gif)

With your local server running, you can click the `Try it out` button on the top left, then the blue `Execute` button below the JSON. The browser will send the example request to your server and display the response body below.

Use the API endpoints and the example request and response bodies as templates for your own application.
### cURL Request Example

Here is an example curl request with a local server running `tinyllama-1.1b`:

<br></br>

```bash
curl http://127.0.0.1:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {
        "content": "You are a helpful assistant.",
        "role": "system"
      },
      {
        "content": "Hello!",
        "role": "user"
      }
    ],
    "model": "tinyllama-1.1b",
    "stream": true,
    "max_tokens": 2048,
    "stop": ["hello"],
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "temperature": 0.7,
    "top_p": 0.95
  }'
```

### Response Body Example

```json
{
  "choices": [
    {
      "finish_reason": null,
      "index": 0,
      "message": {
        "content": "Hello user. What can I help you with?",
        "role": "assistant"
      }
    }
  ],
  "created": 1700193928,
  "id": "ebwd2niJvJB1Q2Whyvkz",
  "model": "_",
  "object": "chat.completion",
  "system_fingerprint": "_",
  "usage": {
    "completion_tokens": 500,
    "prompt_tokens": 33,
    "total_tokens": 533
  }
}
```
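
Note that the curl example above sets `"stream": true`. With streaming enabled, an OpenAI-compatible server typically returns a sequence of `chat.completion.chunk` objects instead of the single response body shown above. The chunk below is a sketch of the standard OpenAI streaming shape, not captured Jan output:

```json
{
  "choices": [
    {
      "delta": { "content": "Hello" },
      "finish_reason": null,
      "index": 0
    }
  ],
  "created": 1700193928,
  "id": "ebwd2niJvJB1Q2Whyvkz",
  "model": "tinyllama-1.1b",
  "object": "chat.completion.chunk"
}
```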
Before Width: | Height: | Size: 328 KiB |
Before Width: | Height: | Size: 1.2 MiB |
Before Width: | Height: | Size: 3.7 MiB |
Before Width: | Height: | Size: 109 KiB |
Before Width: | Height: | Size: 90 KiB |
Before Width: | Height: | Size: 252 KiB |
@ -1,29 +0,0 @@
---
title: Import Extensions
slug: /guides/using-extensions/import-extensions/
description: Import extensions into Jan.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    import extensions,
  ]
---

Besides the default extensions, you can import extensions into Jan by navigating to `Settings` > `Extensions` > `Manual Installation`. The `~/jan/extensions/extensions.json` file will then be updated automatically.

:::caution

The extension must be packaged as a `.tgz` file before it can be installed.

:::
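
If your extension is an npm package, the standard `npm pack` command is one way to produce such an archive. A sketch, assuming an npm-based extension project:

```bash
# Run from the extension's project root; emits <name>-<version>.tgz
npm pack
```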

![01-import-extensions](./assets/01-import-extensions.gif)

If you want to build your own extension, please refer to [Build Your First Extension | Developer Documentation](/developer/build-extension/your-first-extension/).
Before Width: | Height: | Size: 429 KiB |
Before Width: | Height: | Size: 17 MiB |
@ -1,113 +0,0 @@
---
title: Integrate Continue with Jan and VS Code
slug: /guides/integrations/continue
description: Guide to integrate Continue with Jan and VS Code
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    Continue integration,
    VSCode integration,
  ]
---

{/* Imports */}
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";

## Quick Introduction

[Continue](https://continue.dev/docs/intro) is an open-source autopilot for VS Code and JetBrains, and the easiest way to code with any LLM.

In this guide, we will show you how to integrate Continue with Jan and VS Code, enhancing your coding experience with the power of a local AI language model.

## Steps to Integrate Continue with Jan and VS Code

### 1. Install Continue for VS Code

To get started with Continue in VS Code, please follow this [guide to install Continue for VS Code](https://continue.dev/docs/quickstart).

### 2. Enable Jan API Server

To configure Continue to use Jan's local server, you need to enable the Jan API server with your preferred model. Please follow this [guide to enable the Jan API Server](/guides/using-server/start-server).

### 3. Configure Continue to Use Jan's Local Server

Navigate to the `~/.continue` directory.

<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">

```sh
cd ~/.continue
```

</TabItem>
<TabItem value="win" label="Windows">

```sh
cd C:/Users/<your_user_name>/.continue
```

</TabItem>
<TabItem value="linux" label="Linux">

```sh
cd ~/.continue
```

</TabItem>
</Tabs>

Edit the `config.json` file and include the following configuration.

```json title="~/.continue/config.json"
{
  "models": [
    {
      // highlight-next-line
      "title": "Jan",
      "provider": "openai",
      // highlight-start
      "model": "mistral-ins-7b-q4",
      "apiKey": "EMPTY",
      "apiBase": "http://localhost:1337/v1"
      // highlight-end
    }
  ]
}
```

- Ensure that the `provider` is `openai`.
- Ensure that the `model` is the ID of the running model. You can check the respective ID in the System Monitor.
- Ensure that the `apiBase` is `http://localhost:1337/v1`.
- Ensure that the `apiKey` is `EMPTY`.

### 4. Double Check the Model is Running

Open up the `System Monitor` to check that your model is currently running.

If there are no active models, go to `Settings` > `My Models`, click on the **three dots (⋮)**, and select **start model**.

![01-start-model](./assets/01-start-model.gif)

### 5. Use Continue in VS Code

#### Asking questions about the code

- Highlight a code snippet and press `Command + M` to open the Continue extension in VS Code.
- Select Jan at the bottom and ask a question about the code, for example, `Explain this code`.

![02-ask-questions](./assets/02-ask-questions.gif)

#### Editing the code directly

- Highlight a code snippet, press `Command + Shift + L`, and input your edit request, for example, `Write comments for this code`.

![03-edit-code](./assets/03-edit-code.gif)
@ -1,84 +0,0 @@
---
title: Integrate OpenRouter with Jan
slug: /guides/integrations/openrouter
description: Guide to integrate OpenRouter with Jan
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    OpenRouter integration,
  ]
---

## Quick Introduction

[OpenRouter](https://openrouter.ai/docs#quick-start) is an AI model aggregator. Developers can use its API to interact with a variety of large language models, generative image models, and generative 3D object models.

In this guide, we will show you how to integrate OpenRouter with Jan, enabling you to use the remote Large Language Models (LLMs) available through OpenRouter.

## Steps to Integrate OpenRouter with Jan

### 1. Configure OpenRouter API key

You can find your API key on the [OpenRouter API Key](https://openrouter.ai/keys) page. Set the OpenRouter API key in the `~/jan/engines/openai.json` file.

```json title="~/jan/engines/openai.json"
{
  // highlight-start
  "full_url": "https://openrouter.ai/api/v1/chat/completions",
  "api_key": "sk-or-v1<your-openrouter-api-key-here>"
  // highlight-end
}
```
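
Before wiring the key into Jan, you may want to confirm that it works. The request below is a sketch against OpenRouter's OpenAI-compatible endpoint; replace the placeholder key and model as needed:

```bash
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-or-v1<your-openrouter-api-key-here>" \
  -d '{
    "model": "cognitivecomputations/dolphin-mixtral-8x7b",
    "messages": [{ "role": "user", "content": "Hello!" }]
  }'
```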

### 2. Modify a Model JSON

Navigate to the `~/jan/models` folder. Create a folder named `<openrouter-modelname>`, for example, `openrouter-dolphin-mixtral-8x7b`, and create a `model.json` file inside it with the following configurations:

- Ensure the filename is `model.json`.
- Ensure the `id` property is set to the model ID from OpenRouter.
- Ensure the `format` property is set to `api`.
- Ensure the `engine` property is set to `openai`.
- Ensure the `state` property is set to `ready`.

```json title="~/jan/models/openrouter-dolphin-mixtral-8x7b/model.json"
{
  "sources": [
    {
      "filename": "openrouter",
      "url": "https://openrouter.ai/"
    }
  ],
  "id": "cognitivecomputations/dolphin-mixtral-8x7b",
  "object": "model",
  "name": "Dolphin 2.6 Mixtral 8x7B",
  "version": "1.0",
  "description": "This is a 16k context fine-tune of Mixtral-8x7b. It excels in coding tasks due to extensive training with coding data and is known for its obedience, although it lacks DPO tuning. The model is uncensored and is stripped of alignment and bias. It requires an external alignment layer for ethical use. Users are cautioned to use this highly compliant model responsibly, as detailed in a blog post about uncensored models at erichartford.com/uncensored-models.",
  // highlight-next-line
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "tags": ["General", "Big Context Length"]
  },
  // highlight-start
  "engine": "openai"
  // highlight-end
}
```

### 3. Start the Model

Restart Jan and navigate to the Hub. Locate your model and click the Use button.

![01-use-openrouter-model](./assets/01-use-openrouter-model.png)

### 4. Try Out the Integration of Jan and OpenRouter

![02-openrouter-integration-demo](./assets/02-openrouter-integration-demo.gif)
@ -1,95 +0,0 @@
---
title: Integrate Azure OpenAI Service with Jan
slug: /guides/integrations/azure-openai-service
description: Guide to integrate Azure OpenAI Service with Jan
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    integration,
    Azure OpenAI Service,
  ]
---

## Quick Introduction

[Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview?source=docs) provides a set of powerful APIs that let you easily integrate OpenAI's language models.

In this guide, we will show you how to integrate Azure OpenAI Service with Jan.

## Steps to Integrate Azure OpenAI Service with Jan

### 1. Configure Azure OpenAI Service API key

Once you have finished setting up and deploying the Azure OpenAI Service, you can find the endpoint and API key in the [Azure OpenAI Studio](https://oai.azure.com/) by navigating to `Chat` > `View code`.

![01-azure-view-code](./assets/01-azure-view-code.png)

<br> </br>

![02-azure-endpoint-key](./assets/02-azure-endpoint-key.png)

Set the Azure OpenAI Service endpoint and API key in the `~/jan/engines/openai.json` file.

```json title="~/jan/engines/openai.json"
{
  // https://hieujan.openai.azure.com/openai/deployments/gpt-35-hieu-jan/chat/completions?api-version=2023-07-01-preview
  // highlight-start
  "full_url": "https://<your-resource-name>.openai.azure.com/openai/deployments/<your-deployment-name>/chat/completions?api-version=<api-version>",
  "api_key": "<your-api-key>"
  // highlight-end
}
```
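
To confirm the endpoint and key before restarting Jan, you can send a request directly. The call below is a sketch using the placeholders from your deployment; note that Azure OpenAI expects the key in an `api-key` header rather than a `Bearer` token:

```bash
curl "https://<your-resource-name>.openai.azure.com/openai/deployments/<your-deployment-name>/chat/completions?api-version=<api-version>" \
  -H "Content-Type: application/json" \
  -H "api-key: <your-api-key>" \
  -d '{
    "messages": [{ "role": "user", "content": "Hello!" }]
  }'
```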

### 2. Modify a Model JSON

Navigate to the `~/jan/models` folder. Create a folder named `<your-deployment-name>`, for example, `gpt-35-hieu-jan`, and create a `model.json` file inside it with the following configurations:

- Ensure the filename is `model.json`.
- Ensure the `id` property matches both the folder name and your deployment name.
- Ensure the `format` property is set to `api`.
- Ensure the `engine` property is set to `openai`.
- Ensure the `state` property is set to `ready`.

```json title="~/jan/models/gpt-35-hieu-jan/model.json"
{
  "sources": [
    {
      "filename": "azure_openai",
      "url": "https://hieujan.openai.azure.com"
    }
  ],
  // highlight-next-line
  "id": "gpt-35-hieu-jan",
  "object": "model",
  "name": "Azure OpenAI GPT 3.5",
  "version": "1.0",
  "description": "Azure OpenAI GPT 3.5 model",
  // highlight-next-line
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "OpenAI",
    "tags": ["General", "Big Context Length"]
  },
  // highlight-start
  "engine": "openai"
  // highlight-end
}
```

### 3. Start the Model

Restart Jan and navigate to the Hub. Locate your model and click the Use button.

![03-use-azure-model](./assets/03-use-azure-model.png)

### 4. Try Out the Integration of Jan and Azure OpenAI Service

![04-azure-integration-demo](./assets/04-azure-integration-demo.gif)
@ -1,89 +0,0 @@
---
title: Integrate Mistral AI with Jan
slug: /guides/integrations/mistral-ai
description: Guide to integrate Mistral AI with Jan
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    Mistral integration,
  ]
---

## Quick Introduction

[Mistral AI](https://docs.mistral.ai/) currently provides two ways of accessing their Large Language Models (LLMs): via their API, or via open-source models available on Hugging Face. In this guide, we will show you how to integrate Mistral AI with Jan using the API method.

## Steps to Integrate Mistral AI with Jan

### 1. Configure Mistral API key

You can find your API key on the [Mistral API Key](https://console.mistral.ai/user/api-keys/) page. Set the Mistral AI API key in the `~/jan/engines/openai.json` file.

```json title="~/jan/engines/openai.json"
{
  // highlight-start
  "full_url": "https://api.mistral.ai/v1/chat/completions",
  "api_key": "<your-mistral-ai-api-key>"
  // highlight-end
}
```
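
To check that the key is valid before restarting Jan, one option is to list the models available to your account. This is a sketch assuming Mistral's `/v1/models` endpoint and Bearer authentication:

```bash
curl https://api.mistral.ai/v1/models \
  -H "Authorization: Bearer <your-mistral-ai-api-key>"
```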

### 2. Modify a Model JSON

Navigate to the `~/jan/models` folder. Create a folder named `<mistral-modelname>`, for example, `mistral-tiny`, and create a `model.json` file inside it with the following configurations:

- Ensure the filename is `model.json`.
- Ensure the `id` property is set to the model ID from Mistral AI.
- Ensure the `format` property is set to `api`.
- Ensure the `engine` property is set to `openai`.
- Ensure the `state` property is set to `ready`.

```json title="~/jan/models/mistral-tiny/model.json"
{
  "sources": [
    {
      "filename": "mistral-tiny",
      "url": "https://mistral.ai/"
    }
  ],
  "id": "mistral-tiny",
  "object": "model",
  "name": "Mistral-7B-v0.2 (Tiny Endpoint)",
  "version": "1.0",
  "description": "Currently powered by Mistral-7B-v0.2, a better fine-tuning of the initial Mistral-7B released, inspired by the fantastic work of the community.",
  // highlight-next-line
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "Mistral AI",
    "tags": ["General", "Big Context Length"]
  },
  // highlight-start
  "engine": "openai"
  // highlight-end
}
```

:::tip

Mistral AI provides different endpoints. Please check out their [endpoint documentation](https://docs.mistral.ai/platform/endpoints/) to find the one that suits your needs. In this example, we use the `mistral-tiny` model.

:::

### 3. Start the Model

Restart Jan and navigate to the Hub. Locate your model and click the Use button.

![01-use-mistral-model](./assets/01-use-mistral-model.png)

### 4. Try Out the Integration of Jan and Mistral AI

![02-mistral-integration-demo](./assets/02-mistral-integration-demo.gif)
@ -1,184 +0,0 @@
---
title: Integrate LM Studio with Jan
slug: /guides/integrations/lmstudio
description: Guide to integrate LM Studio with Jan
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    LM Studio integration,
  ]
---

## Quick Introduction

With [LM Studio](https://lmstudio.ai/), you can discover, download, and run local Large Language Models (LLMs). In this guide, we will show you how to integrate and use your existing LM Studio models with Jan using two methods. The first method integrates the LM Studio server with the Jan UI. The second method migrates your downloaded model from LM Studio to Jan. We will use the [Phi 2 - GGUF](https://huggingface.co/TheBloke/phi-2-GGUF) model on Hugging Face as an example.

## Steps to Integrate LM Studio Server with Jan UI

### 1. Start the LM Studio Server

1. Navigate to the `Local Inference Server` on the LM Studio application.
2. Select the model you want to use.
3. Start the server after configuring the server port and options.

![01-lmstudio-server](./assets/01-lmstudio-server.png)

<br></br>

Modify the `openai.json` file in the `~/jan/engines` folder to include the full URL of the LM Studio server.

```json title="~/jan/engines/openai.json"
{
  "full_url": "http://localhost:<port>/v1/chat/completions"
}
```

:::tip

- Replace `<port>` with the port number you set in the LM Studio server. The default port is `1234`.

:::
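
You can confirm the LM Studio server is reachable before restarting Jan. The request below is a sketch assuming the default port `1234` and LM Studio's OpenAI-compatible chat completions endpoint:

```bash
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{ "role": "user", "content": "Hello!" }]
  }'
```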

### 2. Modify a Model JSON

Navigate to the `~/jan/models` folder. Create a folder named `<lmstudio-modelname>`, for example, `lmstudio-phi-2`, and create a `model.json` file inside it with the following configurations:

- Set the `format` property to `api`.
- Set the `engine` property to `openai`.
- Set the `state` property to `ready`.

```json title="~/jan/models/lmstudio-phi-2/model.json"
{
  "sources": [
    {
      "filename": "phi-2-GGUF",
      "url": "https://huggingface.co/TheBloke/phi-2-GGUF"
    }
  ],
  "id": "lmstudio-phi-2",
  "object": "model",
  "name": "LM Studio - Phi 2 - GGUF",
  "version": "1.0",
  "description": "TheBloke/phi-2-GGUF",
  // highlight-next-line
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "Microsoft",
    "tags": ["General", "Big Context Length"]
  },
  // highlight-start
  "engine": "openai"
  // highlight-end
}
```

### 3. Start the Model

1. Restart Jan and navigate to the **Hub**.
2. Locate your model and click the **Use** button.

![02-lmstudio-run-model](./assets/02-lmstudio-run-model.png)

### 4. Try Out the Integration of Jan and LM Studio

![03-lmstudio-integration-demo](./assets/03-lmstudio-integration-demo.gif)

## Steps to Migrate Your Downloaded Model from LM Studio to Jan (version 0.4.6 and older)

### 1. Migrate Your Downloaded Model

1. Navigate to `My Models` in the LM Studio application and reveal the model folder.

![04-lmstudio-reveal-model-folder](./assets/04-lmstudio-reveal-model-folder.png)

2. Copy the model folder that you want to migrate to the `~/jan/models` folder.

3. Ensure the folder name matches the model name of the `.gguf` file, renaming the folder if necessary. For example, in this case, we renamed the folder from `TheBloke` to `phi-2.Q4_K_S`.

### 2. Start the Model

1. Restart Jan and navigate to the **Hub**. Jan will automatically detect the model and display it in the **Hub**.
2. Locate your model and click the **Use** button to try the migrated model.

![05-lmstudio-migrated-model](./assets/05-lmstudio-migrated-model.png)

## Steps to Point Jan to a Model Downloaded by LM Studio (version 0.4.7+)

Starting from version 0.4.7, Jan supports importing models using an absolute file path, so you can use a model directly from the LM Studio folder.

### 1. Reveal the Model Absolute Path

Navigate to `My Models` in the LM Studio application and reveal the model folder. There, you can get the absolute path of your model.

![06-lmstudio-absolute-path](./assets/06-lmstudio-absolute-path.png)

### 2. Modify a Model JSON

Navigate to the `~/jan/models` folder. Create a folder named `<modelname>`, for example, `phi-2.Q4_K_S`, and create a `model.json` file inside it with the following configurations:

- Ensure the `id` property matches the folder name you created.
- Ensure the `url` property, which normally holds a direct binary download link ending in `.gguf`, is set to the absolute file path of the model file. In this example, the absolute file path is `/Users/<username>/.cache/lm-studio/models/TheBloke/phi-2-GGUF/phi-2.Q4_K_S.gguf`.
- Ensure the `engine` property is set to `nitro`.

```json
{
  "object": "model",
  "version": 1,
  "format": "gguf",
  "sources": [
    {
      "filename": "phi-2.Q4_K_S.gguf",
      "url": "<absolute-path-of-model-file>"
    }
  ],
  "id": "phi-2.Q4_K_S",
  "name": "phi-2.Q4_K_S",
  "created": 1708308111506,
  "description": "phi-2.Q4_K_S - user self-imported model",
  "settings": {
    "ctx_len": 4096,
    "embedding": false,
    "prompt_template": "{system_message}\n### Instruction: {prompt}\n### Response:",
    "llama_model_path": "phi-2.Q4_K_S.gguf"
  },
  "parameters": {
    "temperature": 0.7,
    "top_p": 0.95,
    "stream": true,
    "max_tokens": 2048,
    "stop": ["<endofstring>"],
    "frequency_penalty": 0,
    "presence_penalty": 0
  },
  "metadata": {
    "size": 1615568736,
    "author": "User",
    "tags": []
  },
  "engine": "nitro"
}
```

:::warning

- If you are using Windows, you need to use double backslashes in the `url` property, for example: `C:\\Users\\username\\filename.gguf`.

:::
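
For instance, a `sources` entry on Windows might look like the following sketch (the path is illustrative):

```json
"sources": [
  {
    "filename": "phi-2.Q4_K_S.gguf",
    "url": "C:\\Users\\<username>\\.cache\\lm-studio\\models\\TheBloke\\phi-2-GGUF\\phi-2.Q4_K_S.gguf"
  }
]
```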

### 3. Start the Model

1. Restart Jan and navigate to the **Hub**.
2. Jan will automatically detect the model and display it in the **Hub**.
3. Locate your model and click the **Use** button to try the migrated model.

![07-lmstudio-absolute-path-model](./assets/07-lmstudio-absolute-path-model.png)
@ -1,90 +0,0 @@
---
title: Integrate Ollama with Jan
slug: /guides/integrations/ollama
description: Guide to integrate Ollama with Jan
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    Ollama integration,
  ]
---

## Quick Introduction

With [Ollama](https://ollama.com/), you can run large language models locally. In this guide, we will show you how to integrate and use your existing Ollama models with Jan using two methods. The first method integrates the Ollama server with the Jan UI. The second method migrates your downloaded model from Ollama to Jan. We will use the [llama2](https://ollama.com/library/llama2) model as an example.

## Steps to Integrate Ollama Server with Jan UI

### 1. Start the Ollama Server

1. Select the model you want to use from the [Ollama library](https://ollama.com/library).
2. Run your model with the following command:

```bash
ollama run <model-name>
```

3. According to the [Ollama documentation on OpenAI compatibility](https://github.com/ollama/ollama/blob/main/docs/openai.md), you can use the `http://localhost:11434/v1/chat/completions` endpoint to interact with the Ollama server. Thus, modify the `openai.json` file in the `~/jan/engines` folder to include the full URL of the Ollama server.

```json title="~/jan/engines/openai.json"
{
  "full_url": "http://localhost:11434/v1/chat/completions"
}
```
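
To double-check the endpoint before restarting Jan, you can send a request directly to Ollama's OpenAI-compatible API. This is a sketch assuming the `llama2` model has already been pulled locally:

```bash
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama2",
    "messages": [{ "role": "user", "content": "Hello!" }]
  }'
```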

### 2. Modify a Model JSON

1. Navigate to the `~/jan/models` folder.
2. Create a folder named `<ollama-modelname>`, for example, `llama2`.
3. Create a `model.json` file inside the folder with the following configurations:

- Set the `id` property to the Ollama model name.
- Set the `format` property to `api`.
- Set the `engine` property to `openai`.
- Set the `state` property to `ready`.

```json title="~/jan/models/llama2/model.json"
{
  "sources": [
    {
      "filename": "llama2",
      "url": "https://ollama.com/library/llama2"
    }
  ],
  // highlight-next-line
  "id": "llama2",
  "object": "model",
  "name": "Ollama - Llama2",
  "version": "1.0",
  "description": "Llama 2 is a collection of foundation language models ranging from 7B to 70B parameters.",
  // highlight-next-line
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "Meta",
    "tags": ["General", "Big Context Length"]
  },
  // highlight-next-line
  "engine": "openai"
}
```

### 3. Start the Model

1. Restart Jan and navigate to the **Hub**.
2. Locate your model and click the **Use** button.

![01-ollama-run-model](./assets/01-ollama-run-model.png)

### 4. Try Out the Integration of Jan and Ollama

![02-ollama-integration-demo](./assets/02-ollama-integration-demo.gif)
Before Width: | Height: | Size: 85 KiB |
Before Width: | Height: | Size: 622 KiB |
Before Width: | Height: | Size: 13 MiB |
Before Width: | Height: | Size: 88 KiB |
Before Width: | Height: | Size: 14 MiB |
Before Width: | Height: | Size: 1.3 MiB |
Before Width: | Height: | Size: 827 KiB |
Before Width: | Height: | Size: 9.9 MiB |
Before Width: | Height: | Size: 1.3 MiB |
Before Width: | Height: | Size: 567 KiB |
Before Width: | Height: | Size: 8.3 MiB |
Before Width: | Height: | Size: 1.5 MiB |
Before Width: | Height: | Size: 5.3 MiB |
Before Width: | Height: | Size: 5.7 MiB |
Before Width: | Height: | Size: 6.6 MiB |
Before Width: | Height: | Size: 1.2 MiB |
Before Width: | Height: | Size: 3.3 MiB |
Before Width: | Height: | Size: 11 MiB |
Before Width: | Height: | Size: 5.0 MiB |
Before Width: | Height: | Size: 1.2 MiB |