Merge branch 'dev' into chore/update-extensions-and-engines-docs
@ -23,6 +23,7 @@ keywords:
import FAQBox from '@/components/FaqBox'
import { Tabs, Callout, Steps } from 'nextra/components'
import { Settings } from 'lucide-react'
@ -68,14 +69,14 @@ Please check whether your Linux distribution supports desktop, server, or both e
- Excavator processors (Q2 2015) and newer

<Callout type="info">
Jan requires a processor with **AVX2 or newer** for optimal performance. See [full list of supported processors](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2). While Jan may run on processors with only AVX support, performance will be significantly reduced.
</Callout>
</Tabs.Tab>

<Tabs.Tab>
- 8GB → up to 3B parameter models (int4)
- 16GB → up to 7B parameter models (int4)
- 32GB → up to 13B parameter models (int4)
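These tiers follow a rough rule of thumb rather than a hard limit: int4 quantization stores roughly 0.5 bytes per parameter, and the OS, the app, and the model's KV cache need headroom on top. A quick sketch of the weight footprint (our approximation, not an official Jan formula):

```shell
# Approximate weight footprint of an int4-quantized model, in GB,
# for a size given in billions of parameters (int4 ≈ 0.5 bytes/parameter).
# Real RAM needs are higher: add headroom for the OS, the app, and the KV cache.
estimate_gb() {
  awk -v params_b="$1" 'BEGIN { printf "%.1f\n", params_b * 0.5 }'
}
estimate_gb 7   # a 7B int4 model needs roughly 3.5 GB for weights alone
```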

<Callout type="info">
DDR2 RAM is the minimum supported; newer generations are recommended for better performance.
@ -83,9 +84,9 @@ DDR2 RAM minimum supported, newer generations recommended for better performance
</Tabs.Tab>

<Tabs.Tab>
- 6GB → up to 3B parameter models (int4)
- 8GB → up to 7B parameter models (int4)
- 12GB → up to 13B parameter models (int4)

<Callout type="info">
Minimum 6GB VRAM recommended for NVIDIA, AMD, or Intel Arc GPUs.
@ -93,7 +94,7 @@ Minimum 6GB VRAM recommended for NVIDIA, AMD, or Intel Arc GPUs.
</Tabs.Tab>

<Tabs.Tab>
At least **10GB** for app installation and model downloads.
</Tabs.Tab>

</Tabs>
@ -179,7 +180,7 @@ chmod +x jan-linux-x86_64-{version}.AppImage
By default, Jan is installed in the following directory:

```bash
# Custom installation directory
export XDG_CONFIG_HOME=/home/username/custom_config
```
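If you relocate the folder this way, the standard XDG fallback decides which directory applies when the variable is unset; a quick way to check on your machine (assuming plain XDG Base Directory semantics):

```shell
# Print the config directory the XDG convention resolves to:
# $XDG_CONFIG_HOME when set, otherwise ~/.config
config_home="${XDG_CONFIG_HOME:-$HOME/.config}"
echo "$config_home"
```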
@ -238,9 +239,9 @@ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
See [detailed instructions](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions).
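A quick sanity check that the `LD_LIBRARY_PATH` export above took effect in your current shell (`/usr/local/cuda/lib64` is the default toolkit path; adjust if yours differs):

```shell
# Report whether the CUDA library directory is on the loader path
if echo "$LD_LIBRARY_PATH" | tr ':' '\n' | grep -qx '/usr/local/cuda/lib64'; then
  echo "cuda lib dir present"
else
  echo "cuda lib dir missing from LD_LIBRARY_PATH"
fi
```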
### Step 2: Enable GPU Acceleration
1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**
2. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
3. App reload is required after the selection

<Callout type="info">
@ -252,23 +253,32 @@ While **Vulkan** can enable Nvidia GPU acceleration in the Jan app, **CUDA** is
</Tabs.Tab>

<Tabs.Tab>
AMD GPUs require **Vulkan** support.

<Callout type="warning">
This feature is currently experimental and must be enabled through [Experimental Mode](/docs/settings#experimental-mode) in **Advanced Settings**.
</Callout>

1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**
2. Enable **Experimental Mode**
3. Under **GPU Acceleration**, enable **Vulkan Support**
4. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
5. App reload is required after the selection
</Tabs.Tab>
<Tabs.Tab>
Intel Arc GPUs require **Vulkan** support.

<Callout type="warning">
This feature is currently experimental and must be enabled through [Experimental Mode](/docs/settings#experimental-mode) in **Advanced Settings**.
</Callout>

1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**
2. Enable **Experimental Mode**
3. Under **GPU Acceleration**, enable **Vulkan Support**
4. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
5. App reload is required after the selection
</Tabs.Tab>

</Tabs>
@ -132,14 +132,7 @@ See [Jan Data Folder](/docs/data-folder) for more details about the data folder
Open **Terminal** and run these commands to remove all Jan-related data:

```bash
# Remove all user data
rm -rf ~/jan

# Delete application data
rm -rf ~/Library/Application\ Support/Jan/data

# Delete application cache
rm -rf ~/Library/Application\ Support/Jan/cache
```
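If you want to verify the cleanup, a small sketch that reports anything left behind (paths taken from the commands above):

```shell
# Report any Jan directories that survived the uninstall commands above
for d in "$HOME/jan" "$HOME/Library/Application Support/Jan"; do
  if [ -e "$d" ]; then
    echo "still present: $d"
  fi
done
echo "uninstall check complete"
```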
</Steps>
@ -22,6 +22,7 @@ keywords:
import { Tabs, Callout, Steps } from 'nextra/components'
import FAQBox from '@/components/FaqBox'
import { Settings } from 'lucide-react'

# Windows Installation
@ -39,23 +40,29 @@ Ensure that your system meets the following requirements to use Jan effectively:
- Excavator processors (Q2 2015) and newer.
</Tabs.Tab>
</Tabs>

<Callout type="info">
Jan requires a processor with **AVX2 or newer** for optimal performance. See [full list of supported processors](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2). While Jan may run on processors with only AVX support, performance will be significantly reduced.
</Callout>

- **Memory (RAM)**
  - 8GB → up to 3B parameter models (int4)
  - 16GB → up to 7B parameter models (int4)
  - 32GB → up to 13B parameter models (int4)

<Callout type="info">
DDR2 RAM is supported but newer RAM generations are recommended for better performance.
</Callout>

- **GPU**:
  - 6GB → up to 3B parameter models
  - 8GB → up to 7B parameter models
  - 12GB → up to 13B parameter models

<Callout type="info">
Minimum 6GB VRAM recommended for NVIDIA, AMD, or Intel Arc GPUs.
</Callout>

- **Storage:** Minimum 10GB free space for application and model downloads
@ -153,9 +160,9 @@ Expected output should show your GPU model and driver version.
nvcc --version
```
### Step 2: Enable GPU Acceleration
1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**
2. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
3. App reload is required after the selection

<Callout type="info">
While Jan supports both CUDA and Vulkan for NVIDIA GPUs, we strongly recommend using CUDA for optimal performance.
@ -166,23 +173,32 @@ While Jan supports both CUDA and Vulkan for NVIDIA GPUs, we strongly recommend u
</Tabs.Tab>

<Tabs.Tab>
AMD GPUs require **Vulkan** support.

<Callout type="warning">
This feature is currently experimental and must be enabled through [Experimental Mode](/docs/settings#experimental-mode) in **Advanced Settings**.
</Callout>

1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**
2. Enable **Experimental Mode**
3. Under **GPU Acceleration**, enable **Vulkan Support**
4. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
5. App reload is required after the selection
</Tabs.Tab>
<Tabs.Tab>
Intel Arc GPUs require **Vulkan** support.

<Callout type="warning">
This feature is currently experimental and must be enabled through [Experimental Mode](/docs/settings#experimental-mode) in **Advanced Settings**.
</Callout>

1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**
2. Enable **Experimental Mode**
3. Under **GPU Acceleration**, enable **Vulkan Support**
4. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
5. App reload is required after the selection
</Tabs.Tab>

</Tabs>
@ -215,6 +231,11 @@ To ensure a complete uninstallation, remove the app cache:
1. Navigate to `C:\Users\[username]\AppData\Roaming`
2. Delete the Jan folder

or through **Terminal**:

```batch
cd C:\Users\%USERNAME%\AppData\Roaming
rmdir /S jan
```
</Steps>

<Callout type="warning">
@ -25,7 +25,7 @@ import FAQBox from '@/components/FaqBox'
``

Jan is a ChatGPT alternative that runs 100% offline on your desktop & mobile (*coming soon*). Our goal is to make it easy for a layperson[^1] to download and run LLMs and use AI with full control and [privacy](https://www.reuters.com/legal/legalindustry/privacy-paradox-with-ai-2023-10-31/).

Jan is powered by [Cortex](https://cortex.so/), our embeddable local AI engine.
@ -38,10 +38,10 @@ You'll be able to use it with [Continue.dev](https://jan.ai/integrations/coding/
### Features

- Download popular open-source LLMs (Llama 3, Gemma, Mistral, ...) from [Model Hub](./docs/models/manage-models.mdx) or import any GGUF models
- Connect to [cloud model services](/docs/remote-models/openai) (OpenAI, Anthropic, Mistral, Groq, ...)
- [Chat](./docs/threads.mdx) with AI models & [customize their parameters](./docs/models/model-parameters.mdx) in an intuitive interface
- Use the [local API server](https://jan.ai/api-reference) with an OpenAI-equivalent API
- Customize Jan with [extensions](/docs/extensions)
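As a sketch of the local API server feature, here is the shape of an OpenAI-style chat request you could send to Jan. The address `localhost:1337` and the model id are illustrative assumptions, not guarantees; check your own server settings:

```shell
# Shape of a request to Jan's OpenAI-equivalent local API.
# Assumptions: server address and model id are placeholders for illustration.
JAN_URL="http://localhost:1337/v1/chat/completions"
PAYLOAD='{"model": "llama3.2-3b-instruct", "messages": [{"role": "user", "content": "Hello from Jan"}]}'
# With the API server running in Jan, send it like this:
#   curl -s "$JAN_URL" -H "Content-Type: application/json" -d "$PAYLOAD"
echo "$PAYLOAD"
```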
### Philosophy
@ -50,7 +50,7 @@ Jan is built to be [user-owned](about#-user-owned):
- [Local-first](https://www.inkandswitch.com/local-first/), with all data stored locally
- Runs 100% offline, with privacy by default
- Free choice of AI models, both local and cloud-based
- We do not collect or sell user data. See our [Privacy](/privacy) policy.

<Callout>
You can read more about our [philosophy](/about#philosophy) here.
@ -77,25 +77,27 @@ Jan is built on the shoulders of many upstream open-source projects:
</FAQBox>

<FAQBox title="How do I use Jan?">
Download Jan on your computer, download a compatible model or connect to a cloud AI, and start chatting. See details in our [Quick Start](/docs/quickstart) guide.
</FAQBox>

<FAQBox title="Is Jan compatible with my operating system?">
See our compatibility guide for [Mac](/docs/desktop/mac#compatibility), [Windows](/docs/desktop/windows#compatibility), and [Linux](/docs/desktop/linux).

GPU-wise, Jan supports:
- NVIDIA GPUs (CUDA)
- AMD GPUs (Vulkan)
- Intel Arc GPUs (Vulkan)
- Other GPUs with Vulkan support
</FAQBox>

<FAQBox title="Does Jan use my data?">
No data is collected. Everything stays local on your device.
<Callout type="warning">
When using cloud AI services (like GPT-4 or Claude) through Jan, their data collection is outside our control. Please check their privacy policies.
</Callout>
You can help improve Jan by choosing to opt in to anonymous basic usage data. Even so, your chats and personal information are never collected. Read more about what data you can contribute at [Privacy](./docs/privacy.mdx).
</FAQBox>
@ -104,35 +106,31 @@ When using cloud AI services (like GPT-4 or Claude) through Jan, their data coll
</FAQBox>

<FAQBox title="How does Jan ensure my data remains private?">
Jan prioritizes your privacy by running open-source AI models 100% offline on your computer. Conversations, documents, and files stay on your device in the [Jan Data Folder](/docs/data-folder), located at:
- Windows: `%APPDATA%/Jan/data`
- Linux: `$XDG_CONFIG_HOME/Jan/data` or `~/.config/Jan/data`
- macOS: `~/Library/Application Support/Jan/data`
</FAQBox>
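The paths above can be resolved programmatically; a small sketch for Unix-like systems (the Windows path is printed as-is, since `%APPDATA%` is a cmd-style variable this shell cannot expand):

```shell
# Resolve the Jan data folder for the current OS, using the paths listed above
jan_data_dir() {
  case "$(uname -s)" in
    Darwin) printf '%s\n' "$HOME/Library/Application Support/Jan/data" ;;
    Linux)  printf '%s\n' "${XDG_CONFIG_HOME:-$HOME/.config}/Jan/data" ;;
    *)      printf '%s\n' '%APPDATA%/Jan/data' ;;
  esac
}
jan_data_dir
```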

<FAQBox title="Can I use Jan without an internet connection?">
Yes, Jan can run without an internet connection, but you'll need to [download a local model](/docs/models/manage-models#1-download-from-jan-hub-recommended) first. Once you've downloaded your preferred models, Jan will work entirely offline by default.
</FAQBox>

<FAQBox title="Are there any costs associated with using Jan?">
Jan is free and open-source. There are no subscription fees or hidden costs for local models and features.

To use [cloud AI models](/docs/models/manage-models#cloud-model) (like GPT-4 or Claude):
- You'll need your own API keys and will pay the standard rates charged by those providers.
- Jan doesn't add any markup.
</FAQBox>

<FAQBox title="What types of AI models can I download or import with Jan?">
- Models from [Jan Hub](/docs/models/manage-models#1-download-from-jan-hub-recommended) are recommended for best compatibility.
- You can also [import GGUF models](/docs/models/manage-models#2-import-from-hugging-face) from Hugging Face or from your local files.
</FAQBox>

<FAQBox title="How do I customize Jan using the programmable API?">
Jan has an extensible architecture like VSCode and Obsidian - you can build custom features using our extensions API. Most of Jan's features are actually built as [extensions](/docs/extensions).
</FAQBox>

<FAQBox title="How can I contribute to Jan's development or suggest features?">
@ -145,6 +143,7 @@ Jan has an extensible architecture like VSCode and Obsidian - you can build cust

<FAQBox title="How do I troubleshoot issues with installing or using Jan?">
For troubleshooting, please visit [Troubleshooting](./docs/troubleshooting.mdx).

If you can't find what you need in our troubleshooting guide, please reach out for extra help on our [Discord](https://discord.com/invite/FTk2MvZwJH) in the **#🆘|get-help** channel.
</FAQBox>
@ -154,6 +153,10 @@ Jan has an extensible architecture like VSCode and Obsidian - you can build cust
- Fork and build from our [GitHub](https://github.com/janhq/jan) repository.
</FAQBox>

<FAQBox title="What does Jan stand for?">
Jan stands for "Just a Name". We are, admittedly, bad at marketing 😂.
</FAQBox>

<FAQBox title="Are you hiring?">
Yes! We love hiring from our community. Check out our open positions at [Careers](https://homebrew.bamboohr.com/careers).
</FAQBox>
@ -18,6 +18,8 @@ keywords:
]
---
import { Callout, Steps } from 'nextra/components'
import { Settings, EllipsisVertical, Plus, FolderOpen, Pencil } from 'lucide-react'

# Model Management
This guide provides comprehensive instructions on adding, customizing, and deleting models within Jan.
@ -33,13 +35,13 @@ Local models run directly on your computer, which means they use your computer's
#### 1. Download from Jan Hub (Recommended)
The easiest way to get started is using Jan's built-in model hub:
1. Go to **Hub**
2. Browse available models and click on any model to see details about it
3. Choose a model that fits your needs & hardware specifications
4. Click **Download** on your chosen model

<Callout type="info">
Jan will indicate if a model might be **Slow on your device** or **Not enough RAM** based on your system specifications.
</Callout>

<br/>
@ -53,25 +55,25 @@ You can import GGUF models directly from [Hugging Face](https://huggingface.co/)
1. Visit [Hugging Face Models](https://huggingface.co/models)
2. Find a GGUF model you want to use
3. Copy the **model ID** (e.g., TheBloke/Mistral-7B-v0.1-GGUF) or its **URL**
4. In Jan, paste the model ID/URL into the **Search** bar in **Hub** or in **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **My Models**
5. Select your preferred quantized version to download

<br/>
![Import model](./_assets/model-management-02.png)
<br/>
##### Option B: Use Deep Link
You can use Jan's deep link feature to quickly import models:
1. Visit [Hugging Face Models](https://huggingface.co/models)
2. Find the GGUF model you want to use
3. Copy the **model ID**, for example: `bartowski/Llama-3.2-3B-Instruct-GGUF`
4. Create a **deep link URL** in this format:
```
jan://models/huggingface/<model_id>
```
5. Enter the URL in your browser and press **Enter**, for example:
```
jan://models/huggingface/bartowski/Llama-3.2-3B-Instruct-GGUF
```
6. When a prompt appears saying `This site is trying to open Jan`, click **Open** to open the Jan app.
7. Select your preferred quantized version to download
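The URL construction in steps 3-4 can be sketched in two lines, using the example model ID from above:

```shell
# Build the deep-link URL from a Hugging Face model ID (format from step 4)
model_id="bartowski/Llama-3.2-3B-Instruct-GGUF"
deep_link="jan://models/huggingface/${model_id}"
echo "$deep_link"   # jan://models/huggingface/bartowski/Llama-3.2-3B-Instruct-GGUF
```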
@ -81,16 +83,16 @@ Deep linking won't work for models requiring API tokens or usage agreements. You
</Callout>

<br/>
![Import model](./_assets/model-management-03.png)
<br/>
#### 3. Import Local Files
If you already have GGUF model files on your computer:
1. In Jan, go to **Hub** or **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **My Models**
2. Click **Import Model**
3. Select your **GGUF** file(s)
4. Choose how you want to import:
   - **Link Files:** Creates symbolic links to your model files (saves space)
   - **Duplicate:** Makes a copy of model files in Jan's directory
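The difference between the two import options can be sketched with plain filesystem commands (illustrative temp paths; Jan manages its own directory layout):

```shell
# Sketch of the two import modes on disk
demo=$(mktemp -d)
mkdir -p "$demo/downloads" "$demo/jan-models"
printf 'GGUF' > "$demo/downloads/model.gguf"

# "Link Files": a symbolic link into Jan's folder; the weights stay where they are
ln -s "$demo/downloads/model.gguf" "$demo/jan-models/model-linked.gguf"

# "Duplicate": a full copy inside Jan's folder, using extra disk space
cp "$demo/downloads/model.gguf" "$demo/jan-models/model-copied.gguf"

ls -l "$demo/jan-models"
```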
@ -143,9 +145,9 @@ Key fields to configure:
### Delete Models
1. Go to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **My Models**
2. Find the model you want to remove
3. Click the three dots <EllipsisVertical width={16} height={16} style={{display:"inline"}}/> icon next to it and select **Delete Model**

<br/>
![Delete model](./_assets/model-management-04.png)
@ -22,6 +22,7 @@ keywords:
import { Tabs, Callout, Steps } from 'nextra/components'
import { Settings } from 'lucide-react'

# Quickstart
@ -32,7 +33,7 @@ import { Callout, Steps } from 'nextra/components'
2. Install the application on your system ([Mac](/docs/desktop/mac), [Windows](/docs/desktop/windows), [Linux](/docs/desktop/linux))
3. Launch Jan

Once installed, you'll see the Jan application interface with no models pre-installed yet. You'll be able to:
- Download and run local AI models
- Connect to cloud AI providers if desired
<br/>
@ -48,7 +49,7 @@ Jan offers various local AI models, from smaller efficient models to larger more
3. Choose a model that fits your needs & hardware specifications
4. Click **Download** to begin installation
<Callout type="info">
Local models run directly on your computer, which means they use your computer's memory (RAM) and processing power. Please choose models carefully based on your hardware specifications ([Mac](/docs/desktop/mac#minimum-requirements), [Windows](/docs/desktop/windows#compatibility), [Linux](/docs/desktop/linux#compatibility)).
</Callout>

For more model installation methods, please visit [Model Management](/docs/models/manage-models).
@ -58,9 +59,9 @@ For more model installation methods, please visit [Model Management](/docs/model
<br/>

### Step 3: Turn on GPU Acceleration (Optional)
While the model downloads, let's optimize your hardware setup. If you're on **Windows** or **Linux** and have a compatible graphics card, you can significantly boost model performance by enabling GPU acceleration.
1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**
2. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
3. App reload is required after the selection

<Callout type="info">
@ -71,11 +72,11 @@ Ensure you have installed all required dependencies and drivers before enabling
![Turn on GPU acceleration](./_assets/quick-start-03.png)

### Step 4: Customize Assistant Instructions
Once your model has been downloaded and you're ready to start your first conversation, you can customize how it responds by setting specific instructions:
1. In any **Thread**, click the **Assistant** tab in the **right sidebar**
2. Enter your instructions in the **Instructions** field to define how Jan should respond

You can modify these instructions at any time during your conversation to adjust Jan's behavior for that specific thread. See the detailed guide at [Assistant](/docs/assistants).
<br/>

![Customize assistant instructions](./_assets/quick-start-04.png)
@ -85,9 +86,9 @@ You can modify these instructions at any time during your conversation to adjust
Now that your model is downloaded and instructions are set, you can begin chatting with Jan. Type your message in the **input field** at the bottom of the thread to start the conversation.

You can further customize your experience by:
- Adjusting [model parameters](/docs/models/model-parameters) in the **Model** tab in the **right sidebar**
- Trying different models for different tasks by clicking the **model selector** in the **Model** tab or **input field**
- [Creating new threads](/docs/threads#creating-new-thread) with different instructions and model configurations
@ -99,20 +100,19 @@ You can further customize your experience by:
### Step 6: Connect to cloud models (Optional)
Jan supports both local and cloud AI models. You can connect to cloud AI services that are OpenAI API-compatible, including OpenAI (GPT-4, o1, ...), Anthropic (Claude), Groq, Mistral, and more.
1. Open any **Thread**
2. Click the **Model** tab in the **right sidebar** or the **model selector** in the input field
3. Once the selector pops up, choose the **Cloud** tab
4. Select your preferred provider (Anthropic, OpenAI, etc.) and click the **Add (➕)** icon next to it
5. Obtain a valid API key from your chosen provider; ensure the key has sufficient credits & appropriate permissions
6. Copy & insert your **API Key** in Jan
See [Remote APIs](/docs/remote-models/openai) for detailed configuration.

<br/>

![Connect Remote APIs](./_assets/quick-start-05.png)

<br/>
</Steps>
@ -30,7 +30,7 @@ Chat with your documents and images using Jan's RAG (Retrieval-Augmented Generat
## Enable File Search & Vision

To chat with PDFs & images in Jan, follow these steps:

1. In any **Thread**, click the **Tools** tab in the right sidebar
2. Enable **Retrieval**
@ -39,7 +39,7 @@ To chat with PDFs using RAG in Jan, follow these steps:
![Enable Retrieval](./_assets/tools.png)
<br/>

3. Once enabled, you should be able to **upload file(s) & image(s)** from the thread input field
<Callout type="info">
Ensure that you are using a multimodal model.
- File Search: Jan currently supports PDF format