chore: update docs for tauri

David 2025-06-02 11:26:36 +07:00 committed by Ramon Perez
parent e586f2387e
commit 051d6d3727
26 changed files with 211 additions and 138 deletions


@ -77,7 +77,7 @@ const menus = [
},
{
menu: 'LinkedIn',
path: 'https://www.linkedin.com/company/homebrewltd',
path: 'https://www.linkedin.com/company/menloresearch',
external: true,
},
],


@ -57,17 +57,16 @@ We have a thriving community built around [Jan](../docs), where we also discuss
- [Discord](https://discord.gg/AAGQNpJQtH)
- [Twitter](https://twitter.com/jandotai)
- [LinkedIn](https://www.linkedin.com/company/homebrewltd)
- [HuggingFace](https://huggingface.co/janhq)
- [LinkedIn](https://www.linkedin.com/company/menloresearch)
- Email: hello@jan.ai
## Philosophy
Homebrew is an opinionated company with a clear philosophy for the products we build:
[Menlo](https://menlo.ai/handbook/about) is an open R&D lab in pursuit of General Intelligence, that achieves real-world impact through agents and robots.
### 🔑 User Owned
We build tools that are user-owned. Our products are [open-source](https://en.wikipedia.org/wiki/Open_source), designed to run offline or be [self-hosted](https://www.reddit.com/r/selfhosted/). We make no attempt to lock you in, and our tools are free of [user-hostile dark patterns](https://twitter.com/karpathy/status/1761467904737067456?t=yGoUuKC9LsNGJxSAKv3Ubg) [^1].
We build tools that are user-owned. Our products are [open-source](https://en.wikipedia.org/wiki/Open_source), designed to run offline or be [self-hosted.](https://www.reddit.com/r/selfhosted/) We make no attempt to lock you in, and our tools are free of [user-hostile dark patterns](https://twitter.com/karpathy/status/1761467904737067456?t=yGoUuKC9LsNGJxSAKv3Ubg) [^1].
We adopt [Local-first](https://www.inkandswitch.com/local-first/) principles and store data locally in [universal file formats](https://stephango.com/file-over-app). We build for privacy by default, and we do not [collect or sell your data](/privacy).



@ -26,7 +26,7 @@
"models": "Models",
"tools": "Tools",
"assistants": "Assistants",
"threads": "Threads",
"threads": "Chats",
"settings": "Settings",
"api-server": "Local API Server",
"inference-engines": {


@ -1,6 +1,6 @@
---
title: Assistants
description: A step-by-step guide on customizing your assistant.
description: A step-by-step guide on customizing and managing your assistants.
keywords:
[
Jan,
@ -20,47 +20,72 @@ keywords:
import { Callout, Steps } from 'nextra/components'
# Assistants
Assistant is a configuration profile that determines how the AI should behave and respond to your inputs. It consists of:
- A set of instructions that guide the AI's behavior
- Model settings for AI responses
- Tool configurations (like [knowledge retrieval](/docs/tools/retrieval) settings)
Currently, Jan comes with a single default Assistant named **Jan**, which is used across all your threads. We're working on the ability to create and switch between multiple assistants.
Jan allows you to manage multiple Assistants, each with its own configuration profile that determines how the AI should behave and respond to your inputs. You can add, edit, or delete assistants, and customize their instructions and settings.
## Set Assistant Instructions
By modifying assistant instructions, you can customize how Jan understands and responds to your queries, what context it should consider, and how it should format its responses.
![Assistants UI Overview](./_assets/assistants-ui-overview.png)
1. In any **Thread**, click the **Assistant** tab in the **right sidebar**
2. Enter your custom instructions in **Instructions** input field
3. Your instructions will be applied to the current thread right after you click out of the instruction field.
*Screenshot: The Assistants management page, where you can view, add, edit, or delete assistants. Each assistant has a name, description, and can be customized for different tasks.*
![Set Instructions](./_assets/quick-start-02.png)
## Accessing the Assistants Page
**Best Practices for Instructions:**
- Be clear and specific about the desired behavior
- Include any consistent preferences for formatting, tone, or style
1. Open Jan and look at the left sidebar.
2. Click on the **Assistants** tab (see highlighted section in the screenshot above).
3. The main panel will display all your current assistants.
## Managing Assistants
- **Add a New Assistant**: Click the `+` button in the Assistants panel to create a new assistant profile.
- **Edit an Assistant**: Click the pencil (✏️) icon on any assistant card to update its name, description, or instructions.
- **Delete an Assistant**: Click the trash (🗑️) icon to remove an assistant you no longer need.
## Customizing Assistant Instructions
Each assistant can have its own set of instructions to guide its behavior. For example:
**Examples:**
```
Act as a software development mentor focused on Python and JavaScript.
Provide detailed explanations with code examples when relevant.
Use markdown formatting for code blocks.
```
Or:
```
Respond in a casual, friendly tone. Keep explanations brief and use simple language.
Provide examples when explaining complex topics.
```
## Best Practices
- Be clear and specific about the desired behavior for each assistant.
- Include preferences for formatting, tone, or style.
- Use different assistants for different tasks (e.g., translation, travel planning, financial advice).
## Apply Instructions to New Threads
You can save Assistant instructions to be automatically applied to all new threads:
---
1. In any **Thread**, click the **Assistant** tab in the **right sidebar**
2. Toggle the **Save instructions for new threads** slider
3. When enabled, all **new threads** will use these instructions as their default, old threads are not affected
*Note: The ability to create, edit, and delete assistants is available in the Assistants tab. Each assistant can be tailored for a specific use case, making Jan a flexible and powerful tool for your needs.*
<br/>
## Switching and Managing Assistants in Chat
![Assistant Slider](./_assets/assistant-01.png)
You can quickly switch between assistants, or create and edit them, directly from the Chat screen using the assistant dropdown menu at the top:
<br/>
![Assistant Dropdown](./_assets/assistant-dropdown.png)
- Click the assistant name (e.g., "Travel Planner") at the top of the Chat screen to open the dropdown menu.
- The dropdown lists all your assistants. Click any assistant to switch to it for the current chat session.
- To create a new assistant, select **Create Assistant** at the bottom of the dropdown. This opens the Add Assistant dialog:
![Add Assistant Dialog](./_assets/assistant-add-dialog.png)
- To edit an existing assistant, click the gear (⚙️) icon next to its name in the dropdown. This opens the Edit Assistant dialog:
![Edit Assistant Dialog](./_assets/assistant-edit-dialog.png)
### Add/Edit Assistant Dialogs
- Set an emoji and name for your assistant.
- Optionally add a description.
- Enter detailed instructions to guide the assistant's behavior.
- Adjust predefined parameters (like Temperature, Top P, etc.) or add custom parameters as needed.
- Click **Save** to apply your changes.
This workflow allows you to seamlessly manage and switch between assistants while chatting, making it easy to tailor Jan to your needs in real time.


@ -31,8 +31,7 @@ This extension configures how Jan handles model downloads and management:
Access tokens authenticate your identity to Hugging Face Hub for model downloads.
1. Get your token from [Hugging Face Tokens](https://huggingface.co/docs/hub/en/security-tokens)
2. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Core Extensions** > **Model Management**
2. Enter your token in Jan: `hf_************************`
2. Enter your token in **Settings > Model Providers > Llama.cpp > Hugging Face Access Token**
<Callout type="warning">
Keep your access tokens secure and never share them.


@ -40,9 +40,9 @@ and add it to Jan via the configuration's page and start talking to your favorit
### Features
- Download popular open-source LLMs (Llama3, Gemma3, Mistral,and more) from the HugggingFace [Model Hub](./docs/models/manage-models.mdx)
- Download popular open-source LLMs (Llama3, Gemma3, Mistral, and more) from the HuggingFace [Model Hub](./docs/models/manage-models.mdx)
or import any GGUF models available locally
- Connect to [cloud model services](/docs/remote-models/openai) (OpenAI, Anthropic, Mistral, Groq,...)
- Connect to [cloud model services](/docs/remote-models/openai) (OpenAI, Anthropic, Mistral, Groq, etc.)
- [Chat](./docs/threads.mdx) with AI models & [customize their parameters](./docs/models/model-parameters.mdx) via our
intuitive interface
- Use our [local API server](https://jan.ai/api-reference) with an OpenAI-equivalent API
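Calling the local API server can be sketched in a few lines of Python. This is a minimal illustration, not Jan's own code: the port (`1337`) and model ID are assumptions here, so check your Local API Server settings for the actual values.

```python
import json
import urllib.request

def build_chat_request(model, prompt, temperature=0.7):
    """Build an OpenAI-style chat completion payload for Jan's local API server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

# Hypothetical model ID; use one you have installed in Jan
payload = build_chat_request("llama3.2-3b-instruct", "Hello, Jan!")

# Uncomment to send the request to a running Jan local API server:
# req = urllib.request.Request(
#     "http://localhost:1337/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
print(payload["messages"][0]["content"])
```

Because the payload follows the OpenAI chat format, existing OpenAI client libraries can generally be pointed at the local server by overriding their base URL.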
@ -72,7 +72,6 @@ Jan is built on the shoulders of many upstream open-source projects:
- [Llama.cpp](https://github.com/ggerganov/llama.cpp/blob/master/LICENSE)
- [LangChain.js](https://github.com/langchain-ai/langchainjs/blob/main/LICENSE)
- [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM/blob/main/LICENSE)
- [TheBloke/GGUF](https://huggingface.co/TheBloke)
- [Scalar](https://github.com/scalar/scalar)
## FAQs
@ -136,7 +135,7 @@ Jan is built on the shoulders of many upstream open-source projects:
<FAQBox title="How can I contribute or get community help?">
- Join our [Discord community](https://discord.gg/qSwXFx6Krr) to connect with other users
- Contribute through [GitHub](https://github.com/menloresearch/jan) (no permission needed!)
- Get troubleshooting help in our [Discord](https://discord.com/invite/FTk2MvZwJH) #🆘|get-help channel
- Get troubleshooting help in our [Discord](https://discord.com/invite/FTk2MvZwJH) [#🆘|jan-help](https://discord.com/channels/1107178041848909847/1192090449725358130) channel
- Check our [Troubleshooting](./docs/troubleshooting.mdx) guide for common issues
</FAQBox>


@ -49,13 +49,15 @@ Jan will indicate if a model might be **Slow on your device** or **Not enough RA
<br/>
#### 2. Import from [Hugging Face](https://huggingface.co/)
You can import GGUF models directly from [Hugging Face](https://huggingface.co/):
You can import GGUF models directly from Hugging Face:
**Note:** Some models require a Hugging Face Access Token. Enter your token in **Settings > Model Providers > Hugging Face** before importing.
##### Option A: Import in Jan
1. Visit [Hugging Face Models](https://huggingface.co/models).
2. Find a GGUF model you want to use
3. Copy the **model ID** (e.g., TheBloke/Mistral-7B-v0.1-GGUF) or its **URL**
4. In Jan, paste the model ID/URL to **Search** bar in **Hub** or in **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **My Models**
4. In Jan, paste the model ID/URL to the **Search** bar in **Hub**
5. Select your preferred quantized version to download
<br/>
@ -90,13 +92,12 @@ Deep linking won't work for models requiring API tokens or usage agreements. You
#### 3. Import Local Files
If you already have GGUF model files on your computer:
1. In Jan, go to **Hub** or **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **My Models**
2. Click **Import Model**
3. Select your **GGUF** file(s)
4. Choose how you want to import:
1. In Jan, go to **Settings > Model Providers > Llama.cpp**
2. Click **Import** and select your GGUF file(s)
3. Choose how you want to import:
- **Link Files:** Creates symbolic links to your model files (saves space)
- **Duplicate:** Makes a copy of model files in Jan's directory
5. Click **Import** to complete
4. Click **Import** to complete
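The difference between the two import modes can be sketched with Python's standard library. The file names below are hypothetical, and on Windows creating symlinks may require extra privileges:

```python
import os
import shutil
import tempfile

# Hypothetical paths for illustration; Jan manages its own data folder
src = os.path.join(tempfile.gettempdir(), "mistral-7b.Q4_K_M.gguf")
dst_dir = os.path.join(tempfile.gettempdir(), "jan-models")
os.makedirs(dst_dir, exist_ok=True)
with open(src, "wb") as f:
    f.write(b"stub")  # stand-in for a multi-GB GGUF file

# "Link Files": a symbolic link points back at the original, so no extra disk space is used
linked = os.path.join(dst_dir, "linked.gguf")
if os.path.lexists(linked):
    os.remove(linked)
os.symlink(src, linked)

# "Duplicate": a full copy lives in Jan's directory, doubling disk usage
copied = os.path.join(dst_dir, "copied.gguf")
shutil.copy(src, copied)

print(os.path.islink(linked), os.path.islink(copied))  # → True False
```

A linked model breaks if you move or delete the original file, while a duplicated model keeps working at the cost of disk space.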
<Callout type="warning">
You need to own your **model configurations**; use them at your own risk. Misconfigurations may result in lower quality or unexpected outputs.
@ -190,7 +191,7 @@ Modify model parameters under the settings array. Key fields to configure:
</Steps>
### Delete Models
1. Go to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **My Models**
1. Go to **Settings > Model Providers > Llama.cpp**
2. Find the model you want to remove
3. Select the three dots <EllipsisVertical width={16} height={16} style={{display:"inline"}}/> icon next to it and select **Delete Model**


@ -20,32 +20,40 @@ keywords:
import { Callout, Steps } from 'nextra/components'
# Model Parameters
To customize model settings for a conversation:
1. In any **Threads**, click **Model** tab in the **right sidebar**
2. You can customize the following parameter types:
- **Inference Parameters:** Control how the model generates responses
- **Model Parameters:** Define the model's core properties and capabilities
- **Engine Parameters:** Configure how the model runs on your hardware
To customize model settings for a conversation or a model:
- In **Threads**, click the **Gear icon** next to selected model
- Or, in **Settings > Model Providers > Llama.cpp**, click the **gear icon** next to a model for advanced settings
- Click the **edit button** next to a model to configure capabilities
## Inference & Engine Parameters (Gear Icon)
These settings are available in the model settings modal:
| Parameter | Description |
|---------------------|-------------|
| **Context Size** | Maximum prompt context length (how much text the model can consider at once). |
| **GPU Layers** | Number of model layers to offload to GPU. More layers = faster, but uses more VRAM. |
| **Temperature** | Controls response randomness. Lower = more focused, higher = more creative. |
| **Top K** | Top-K sampling. Limits next token selection to the K most likely. |
| **Top P** | Top-P (nucleus) sampling. Limits next token selection to a cumulative probability. |
| **Min P** | Minimum probability for token selection. |
| **Repeat Last N** | Number of tokens to consider for repeat penalty. |
| **Repeat Penalty** | Penalize repeating token sequences. |
| **Presence Penalty**| Penalize alpha presence (encourages new topics). |
| **Max Tokens** | Maximum length of the model's response. |
| **Stop Sequences** | Tokens or phrases that will end the model's response. |
| **Frequency Penalty** | Reduces word repetition. |
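To build intuition for how the sampling parameters in the table interact, here is a standalone sketch (not Jan's actual implementation) of Top K, Top P, and Min P filtering applied to a toy candidate distribution:

```python
import math

def filter_candidates(logits, top_k=3, top_p=0.9, min_p=0.0):
    """Illustrative sketch: narrow next-token candidates the way
    Top K, Top P, and Min P do. `logits` maps tokens to raw scores."""
    # Softmax over the raw scores to get probabilities
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    probs = {t: e / z for t, e in exps.items()}

    # Top K: keep only the K most likely tokens
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

    # Top P: keep the smallest prefix whose cumulative probability reaches top_p
    kept, cum = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break

    # Min P: drop any remaining token below the minimum probability
    return [tok for tok, p in kept if p >= min_p]

print(filter_candidates({"the": 2.0, "a": 1.5, "cat": 0.5, "zebra": -1.0}, top_k=2))
# → ['the', 'a']
```

Lowering Top K or Top P shrinks the candidate pool toward the most likely tokens, which is why low values read as "focused" and high values as "diverse".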
## Model Capabilities (Edit Button)
These toggles are available when you click the edit button next to a model:
- **Vision**: Enable image input/output
- **Tools**: Enable advanced tools (web search, file ops, code)
- **Embeddings**: Enable embedding generation
- **Web Search**: Allow model to search the web
- **Reasoning**: Enable advanced reasoning features
<br/>
![Download Model](../_assets/model-parameters.png)
<br/>
### Inference Parameters
These settings determine how the model generates and formats its outputs.
| Parameter | Description |
|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **Temperature** | - Controls response randomness.<br></br>- Lower values (0.0-0.5) give focused, deterministic outputs. Higher values (0.8-2.0) produce more creative, varied responses. |
| **Top P** | - Sets the cumulative probability threshold for token selection.<br></br>- Lower values (0.1-0.7) make responses more focused and conservative. Higher values (0.8-1.0) allow more diverse word choices.|
| **Stream** | - Enables real-time response streaming. |
| **Max Tokens** | - Limits the length of the model's response.<br></br>- A higher limit benefits detailed and complex responses, while a lower limit helps maintain conciseness.|
| **Stop Sequences** | - Defines tokens or phrases that will end the model's response.<br></br>- Use common concluding phrases or tokens specific to your application's domain to ensure outputs terminate appropriately. |
| **Frequency Penalty** | - Reduces word repetition.<br></br>- Higher values (0.5-2.0) encourage more varied language. Useful for creative writing and content generation.|
| **Presence Penalty** | - Encourages the model to explore new topics.<br></br>- Higher values (0.5-2.0) help prevent the model from fixating on already-discussed subjects.|
### Model Parameters
This setting defines and configures the model's behavior.
| Parameter | Description |


@ -52,6 +52,8 @@ Jan offers various local AI models, from smaller efficient models to larger more
Local models run directly on your computer, which means they use your computer's memory (RAM) and processing power. Please choose models carefully based on your hardware specifications ([Mac](/docs/desktop/mac#minimum-requirements), [Windows](/docs/desktop/windows#compatibility), [Linux](/docs/desktop/linux#compatibility)).
</Callout>
**Note:** Some models from Hugging Face require an access token. Enter your token in **Settings > Model Providers > Llama.cpp > Hugging Face Access Token** before importing.
For more model installation methods, please visit [Model Management](/docs/models/manage-models).
<br/>
@ -61,8 +63,8 @@ For more model installation methods, please visit [Model Management](/docs/model
### Step 3: Turn on GPU Acceleration (Optional)
While the model downloads, let's optimize your hardware setup. If you're on **Windows** or **Linux** and have a
compatible graphics card, you can significantly boost model performance by enabling GPU acceleration.
1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Local Engine** > **Llama.cpp**
2. At **llama-cpp Backend**, select backend. For example `windows-amd64-vulkan` if you have and AMD gaphic card. For
1. Navigate to **(<Settings width={16} height={16} style={{display:"inline"}}/>) Settings** > **Model Providers** > **Llama.cpp**
2. At **llama-cpp Backend**, select a backend. For example, `windows-amd64-vulkan` if you have an AMD graphics card. For
more info, see [our guide](/docs/local-engines/llama-cpp).
<Callout type="info">
@ -75,18 +77,11 @@ on [Windows](/docs/desktop/windows#gpu-acceleration) & [Linux](/docs/desktop/lin
### Step 4: Customize Assistant Instructions
Once your model has been downloaded and you're ready to start your first conversation, you can customize how the model
should respond by setting specific instructions:
1. In any **Thread**, click the **Assistant** tab in the **right sidebar**
2. Enter your instructions in **Instructions** field to define how the model should respond. For example, "You are an
expert storyteller who writes engaging and imaginative stories for marketing campaigns. You don't follow the herd and
rather think outside the box when putting your copywriting skills to the test."
You can modify these instructions at any time during your conversation to adjust a model's behavior for that specific
thread. See detailed guide at [Assistant](/docs/assistants).
should respond by modifying specific instructions or model configurations in [Assistant](/docs/assistants).
<br/>
![Assistant Instruction](./_assets/quick-start-02.png)
![Assistant Instruction](./_assets/assistant-dropdown.png)
<br/>
@ -96,7 +91,7 @@ Now that your model is downloaded and instructions are set, you can begin chatti
the **input field** at the bottom of the thread to start the conversation.
You can further customize your experience by:
- Adjusting the [model parameters](/docs/models/model-parameters) in the **Model** tab in the **right sidebar**
- Adjusting the [model parameters](/docs/models/model-parameters) in the **Model Configurations** by clicking on the **Gear icon** next to the selected model or in the **Assistant Settings**
- Try different models for different tasks by clicking the **model selector** in **Model** tab or **input field**
- [Create new threads](/docs/threads#creating-new-thread) with different instructions and model configurations
@ -112,11 +107,10 @@ You can further customize your experience by:
Jan supports both open source and cloud-based models. You can connect to cloud model providers, including OpenAI
(GPT-4o, o1, ...), Anthropic (Claude), Groq, Mistral, and more.
1. Open any **Thread**
2. Click **Model** tab in the **right sidebar** or **model selector** in input field
3. Once the selector is poped up, choose the **Cloud** tab
4. Select your preferred provider (Anthropic, OpenAI, etc.), click **Add ()** icon next to the provider
5. Obtain a valid API key from your chosen provider, ensure the key has sufficient credits & appropriate permissions
6. Copy & insert your **API Key** in Jan
2. Select a model from the **model selector** dropdown in the input field
3. Select your preferred provider (Anthropic, OpenAI, etc.), and click the **Gear icon** next to the provider
4. Obtain a valid API key from your chosen provider, and ensure the key has sufficient credits & appropriate permissions
5. Copy & insert your **API Key** in Jan
See [Remote APIs](/docs/remote-models/openai) for detailed configuration.


@ -26,49 +26,106 @@ import { Settings, EllipsisVertical, Plus, FolderOpen, Pencil } from 'lucide-rea
# Settings
This guide explains how to customize your Jan application settings.
To access **Settings**, click <Settings width={16} height={16} style={{display:"inline"}}/> icon in the bottom left corner of Jan.
This guide explains how to customize your Jan application settings. To access **Settings**, click <Settings width={16} height={16} style={{display:"inline"}}/> icon in the bottom left corner of Jan.
## My models
## Navigation Overview
Here's at **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **My Models** you can manage all your installed AI models:
The Settings sidebar includes:
- **General**
- **Appearance**
- **Privacy**
- **Model Providers**
- **Shortcuts**
- **Hardware**
- **MCP Servers**
- **Local API Server**
- **HTTPS Proxy**
- **Extensions**
### Manage Downloaded Models
## Appearance
**1. Import Models:** You can import models here just as you can in **Hub**
- Option 1: Import from [Hugging Face](/docs/models/manage-models#option-a-import-in-jan) by entering model Hugging Face URL in **Search** bar
- Option 2: [Import local files](/docs/models/manage-models#option-a-import-in-jan)
Customize Jan's look and feel:
- **Theme:**
- Dark
- Light
- System (follows OS preference)
- **Font Size:**
- Small
- Medium
- Large
- Extra Large
- **Color Customization:**
- Window background, main view, primary, accent, destructive (choose from palette)
<br/>
![Import from HF](./_assets/model-management-04.png)
<br/>
## Model Management
Manage your installed AI models in **Settings** > **Model Providers** or **My Models**:
### Import Models
- **From Hugging Face:**
- Enter a model's Hugging Face URL or ID in the search bar.
- **Note:** Some models require a Hugging Face Access Token. Enter your token in **Settings > Model Providers > Hugging Face**.
- **From Local Files:**
- Click **Import Model** and select your GGUF files.
### Remove Models
- Click the three dots <EllipsisVertical width={16} height={16} style={{display:"inline"}}/> next to a model and select **Delete**.
### Start Models
- Click the three dots <EllipsisVertical width={16} height={16} style={{display:"inline"}}/> next to a model and select **Start**.
### Hugging Face Access Token
To download models from Hugging Face that require authentication:
1. Get your token from [Hugging Face Tokens](https://huggingface.co/docs/hub/en/security-tokens)
2. Enter it in **Settings > Model Providers > Hugging Face**
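Outside the app, the same token authenticates direct downloads over HTTP. A minimal Python sketch that attaches the token as a Bearer header without actually sending the request (the repo path is illustrative and the token shown is a placeholder):

```python
import urllib.request

# Illustrative gated-model URL; substitute the repo and file you actually need
url = ("https://huggingface.co/TheBloke/Mistral-7B-v0.1-GGUF"
       "/resolve/main/mistral-7b-v0.1.Q4_K_M.gguf")
token = "hf_XXXXXXXXXXXXXXXX"  # placeholder; keep real tokens out of code

# Hugging Face checks the Authorization header before serving gated files
req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
# urllib.request.urlopen(req)  # uncomment to perform the authenticated download

print(req.get_header("Authorization"))
```

Treat the token like a password: store it in an environment variable or secret manager rather than in scripts or shell history.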
## Model Settings (Gear Icon)
Click the gear icon next to a model to configure advanced settings:
- **Context Size**: Maximum prompt context length
- **GPU Layers**: Number of model layers to offload to GPU
- **Temperature**: Controls randomness (higher = more random)
- **Top K**: Top-K sampling
- **Top P**: Top-P sampling
- **Min P**: Minimum probability for token selection
- **Repeat Last N**: Number of tokens to consider for repeat penalty
- **Repeat Penalty**: Penalize repeating token sequences
- **Presence Penalty**: Penalize alpha presence
_See [Model Parameters](/docs/models/model-parameters) for detailed explanations._
## Model Capabilities (Edit Button)
Click the edit button next to a model to toggle capabilities:
- **Vision**: Enable image input/output
- **Tools**: Enable advanced tools (web search, file ops, code)
- **Embeddings**: Enable embedding generation
- **Web Search**: Allow model to search the web
- **Reasoning**: Enable advanced reasoning features
**2. Remove Models**: Use the same instructions in [Delete Local Models](/docs/models/manage-models#delete-models)
<br/>
![Remove Model](./_assets/model-management-05.png)
<br/>
## Shortcuts
See and customize keyboard shortcuts for navigation, chat, and thread management.
**3. Start Models**
1. Choose the model you want to start
2. Click **three dots** <EllipsisVertical width={16} height={16} style={{display:"inline"}}/> icon next to the model
3. Select **Start Model**
## Hardware
<br/>
![Start Model](./_assets/settings-02.png)
<br/>
Monitor and manage system resources:
- **CPU, RAM, GPU**: View usage and specs
- **GPU Acceleration**: Enable/disable and configure GPU settings
## MCP Servers, Local API Server, HTTPS Proxy, Extensions
### Manage Cloud Models
- **MCP Servers**: Add/edit servers for advanced integrations
- **Local API Server**: Configure OpenAI-compatible local HTTP server
- **HTTPS Proxy**: Set up proxy for secure connections
- **Extensions**: Manage and configure Jan extensions
1. To install cloud models, click the **Add** (<Plus width={16} height={16} style={{display:"inline"}}/>) icon next to your preferred provider (e.g., Anthropic, OpenAI, Groq), and add an **API Key** to use it. See [detailed instructions](/docs/remote-models/openai) for each provider.
2. Once a provider is installed, you can use its models & manage its settings by clicking on the **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) icon next to it.
## Factory Reset
<br/>
![Manage Cloud Provider](./_assets/settings-03.png)
<br/>
Restore Jan to its initial state (erases all data and models). Use only if necessary.
## Preferences


@ -27,11 +27,11 @@ import { SquarePen, Pencil, Ellipsis, Paintbrush, Trash2 } from 'lucide-react'
Jan organizes your AI conversations into threads, making it easy to track and revisit your interactions. This guide will help you effectively manage your chat history.
## Creating New Thread
1. Click **New Thread** (<SquarePen width={16} height={16} style={{display:"inline"}}/>) icon at the left of Jan top navigation
1. Click **New Chat** (<SquarePen width={16} height={16} style={{display:"inline"}}/>) icon on the bottom left of Jan
2. Select your preferred model in the **Model Selector** in the input field & start chatting
<br/>
![Create New Thread](./_assets/threads-02.png)
![Create New Thread](./_assets/threads-new-chat.png)
## View Threads History
@ -40,34 +40,26 @@ Jan organizes your AI conversations into threads, making it easy to track and re
- View **Thread List**, scroll through your threads history
- Click any thread to open the full conversation
## Favorites and Recents
Jan helps you quickly access important and recent conversations with **Favorites** and **Recents** in the left sidebar:
- **Favorites**: Pin threads you use often for instant access. Click the star icon in the context menu next to any thread to add or remove it from Favorites.
- **Recents**: See your most recently accessed threads for quick navigation.
<br/>
![View Threads](./_assets/threads-01.png)
![Favorites and Recents](./_assets/threads-favorites-and-recents.png)
*Screenshot: The left sidebar showing Favorites and Recents sections for easy thread management.*
## Edit Thread Title
1. Navigate to the **Thread** that you want to edit title in left sidebar
2. Hover on the thread and click on **three dots** (<Ellipsis width={16} height={16} style={{display:"inline"}}/>) icon
3. Select <Pencil width={16} height={16} style={{display:"inline"}}/> **Edit Title**
3. Select <Pencil width={16} height={16} style={{display:"inline"}}/> **Rename**
4. Add new title & save
<br/>
![Edit Thread](./_assets/threads-03.png)
## Clean Thread
To remove all messages while keeping the thread & its settings:
1. Navigate to the **Thread** that you want to clean in left sidebar
2. Hover on the thread and click on **three dots** (<Ellipsis width={16} height={16} style={{display:"inline"}}/>) icon
3. Select <Paintbrush width={16} height={16} style={{display:"inline"}}/> **Clean Thread**
<Callout type="info">
This will delete all messages in the thread while preserving thread settings
</Callout>
<br/>
![Clean Thread](./_assets/threads-04.png)
![Context Menu](./_assets/threads-context-menu.png)
## Delete Thread
@ -79,15 +71,14 @@ When you want to completely remove a thread:
1. Navigate to the **Thread** that you want to delete in left sidebar
2. Hover on the thread and click on **three dots** (<Ellipsis width={16} height={16} style={{display:"inline"}}/>) icon
3. Select <Trash2 width={16} height={16} style={{display:"inline"}}/> **Delete Thread**
3. Select <Trash2 width={16} height={16} style={{display:"inline"}}/> **Delete**
<br/>
![Delete Thread](./_assets/threads-05.png)
![Delete Thread](./_assets/threads-context-menu.png)
### Delete all threads at once
In case you need to remove all threads at once, you'll need to manually delete the `threads` folder:
1. Open [Jan Data Folder](docs/settings#access-the-jan-data-folder)
2. Delete the `threads` folder
3. Restart Jan
In case you need to remove all threads at once:
1. Hover on the `Recents` category and click on **three dots** (<Ellipsis width={16} height={16} style={{display:"inline"}}/>) icon
2. Select <Trash2 width={16} height={16} style={{display:"inline"}}/> **Delete All**