fix: missing url on article

Author: Faisal Amir
Date: 2025-09-24 09:45:27 +07:00
Parent: 4d43841ae3
Commit: dc097eaef9
24 changed files with 56 additions and 56 deletions
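A symmetric 56-additions/56-deletions change across 24 files is the shape of a scripted path rewrite. A minimal sketch of one such rewrite, assuming a hypothetical page file and using one old-to-new mapping from this commit for illustration:

```shell
# Hedged sketch: /tmp/page.mdx is an illustrative file, not part of this commit.
printf 'Full parameters: [here](/docs/model-parameters)\n' > /tmp/page.mdx

# Rewrite the stale docs path to the new nested route, keeping a .bak backup.
# '#' is used as the sed delimiter so the '/' in the paths needs no escaping.
sed -i.bak 's#](/docs/model-parameters)#](/docs/desktop/desktop/model-parameters)#g' /tmp/page.mdx

cat /tmp/page.mdx
```

Running the same substitution over every affected page (e.g. via `find ... -exec sed`) would produce exactly the kind of paired delete/add lines seen in the hunks below.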


@@ -155,7 +155,7 @@ Debugging headquarters (`/logs/app.txt`):
 The silicon brain collection. Each model has its own `model.json`.
 <Callout type="info">
-Full parameters: [here](/docs/model-parameters)
+Full parameters: [here](/docs/desktop/desktop/model-parameters)
 </Callout>
 ### `threads/`
@@ -216,5 +216,5 @@ Chat archive. Each thread (`/threads/jan_unixstamp/`) contains:
 ## Delete Jan Data
-Uninstall guides: [Mac](/docs/desktop/mac#step-2-clean-up-data-optional),
-[Windows](/docs/desktop/windows#step-2-handle-jan-data), or [Linux](docs/desktop/linux#uninstall-jan).
+Uninstall guides: [Mac](/docs/desktop/desktop/install/mac#step-2-clean-up-data-optional),
+[Windows](/docs/desktop/desktop/install/windows#step-2-handle-jan-data), or [Linux](docs/desktop/install/linux#uninstall-jan).


@@ -184,9 +184,9 @@ Jan is built on the shoulders of giants:
 <FAQBox title="Is Jan compatible with my system?">
 **Supported OS**:
-- [Windows 10+](/docs/desktop/windows#compatibility)
-- [macOS 12+](/docs/desktop/mac#compatibility)
-- [Linux (Ubuntu 20.04+)](/docs/desktop/linux)
+- [Windows 10+](/docs/desktop/desktop/install/windows#compatibility)
+- [macOS 12+](/docs/desktop/desktop/install/mac#compatibility)
+- [Linux (Ubuntu 20.04+)](/docs/desktop/desktop/install/linux)
 **Hardware**:
 - Minimum: 8GB RAM, 10GB storage
@@ -216,7 +216,7 @@ Jan is built on the shoulders of giants:
 <FAQBox title="How does Jan protect privacy?">
 - Runs 100% offline once models are downloaded
-- All data stored locally in [Jan Data Folder](/docs/data-folder)
+- All data stored locally in [Jan Data Folder](/docs/desktop/desktop/data-folder)
 - No telemetry without explicit consent
 - Open source code you can audit


@@ -193,7 +193,7 @@ $XDG_CONFIG_HOME = /home/username/custom_config
 ~/.config/Jan/data
 ```
-See [Jan Data Folder](/docs/data-folder) for details.
+See [Jan Data Folder](/docs/desktop/data-folder) for details.
 ## GPU Acceleration
@@ -244,7 +244,7 @@ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
 ### Step 2: Enable GPU Acceleration
 1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Local Engine** > **Llama.cpp**
-2. Select appropriate backend in **llama-cpp Backend**. Details in our [guide](/docs/local-engines/llama-cpp).
+2. Select appropriate backend in **llama-cpp Backend**. Details in our [guide](/docs/desktop/local-engines/llama-cpp).
 <Callout type="info">
 CUDA offers better performance than Vulkan.
@@ -258,7 +258,7 @@ CUDA offers better performance than Vulkan.
 Requires Vulkan support.
 1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Hardware** > **GPUs**
-2. Select appropriate backend in **llama-cpp Backend**. Details in our [guide](/docs/local-engines/llama-cpp).
+2. Select appropriate backend in **llama-cpp Backend**. Details in our [guide](/docs/desktop/local-engines/llama-cpp).
 </Tabs.Tab>
@@ -266,7 +266,7 @@ Requires Vulkan support.
 Requires Vulkan support.
 1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Hardware** > **GPUs**
-2. Select appropriate backend in **llama-cpp Backend**. Details in our [guide](/docs/local-engines/llama-cpp).
+2. Select appropriate backend in **llama-cpp Backend**. Details in our [guide](/docs/desktop/local-engines/llama-cpp).
 </Tabs.Tab>
 </Tabs>


@@ -111,7 +111,7 @@ Default location:
 # Default installation directory
 ~/Library/Application\ Support/Jan/data
 ```
-See [Jan Data Folder](/docs/data-folder) for details.
+See [Jan Data Folder](/docs/desktop/data-folder) for details.
 ## Uninstall Jan
@@ -158,7 +158,7 @@ No, it cannot be restored once you delete the Jan data folder during uninstallat
 </FAQBox>
 <Callout type="info">
-💡 Warning: If you have any trouble during installation, please see our [Troubleshooting](/docs/troubleshooting)
+💡 Warning: If you have any trouble during installation, please see our [Troubleshooting](/docs/desktop/troubleshooting)
 guide to resolve your problem.
 </Callout>


@@ -119,7 +119,7 @@ Default installation path:
 ~\Users\<YourUsername>\AppData\Roaming\Jan\data
 ```
-See [Jan Data Folder](/docs/data-folder) for complete folder structure details.
+See [Jan Data Folder](/docs/desktop/data-folder) for complete folder structure details.
 ## GPU Acceleration


@@ -24,7 +24,7 @@ import { Settings } from 'lucide-react'
 `llama.cpp` is the core **inference engine** Jan uses to run AI models locally on your computer. This section covers the settings for the engine itself, which control *how* a model processes information on your hardware.
 <Callout>
-Looking for API server settings (like port, host, CORS)? They have been moved to the dedicated [**Local API Server**](/docs/api-server) page.
+Looking for API server settings (like port, host, CORS)? They have been moved to the dedicated [**Local API Server**](/docs/desktop/desktop/api-server) page.
 </Callout>
 ## Accessing Engine Settings


@@ -30,9 +30,9 @@ This guide shows you how to add, customize, and delete models within Jan.
 Local models are managed through [Llama.cpp](https://github.com/ggerganov/llama.cpp), and these models are in a
 format called GGUF. When you run them locally, they will use your computer's memory (RAM) and processing power, so
 please make sure that you download models that match the hardware specifications for your operating system:
-- [Mac](/docs/desktop/mac#compatibility)
-- [Windows](/docs/desktop/windows#compatibility)
-- [Linux](/docs/desktop/linux#compatibility).
+- [Mac](/docs/desktop/desktop/install/mac#compatibility)
+- [Windows](/docs/desktop/desktop/install/windows#compatibility)
+- [Linux](/docs/desktop/desktop/install/linux#compatibility).
 ### Adding Models
@@ -156,7 +156,7 @@ For advanced users who want to add a specific model that is not available within
 Key fields to configure:
 1. The **Settings** array is where you can set the path or location of your model in your computer, the context
 length allowed, and the chat template expected by your model.
-2. The [**Parameters**](/docs/model-parameters) are the adjustable settings that affect how your model operates or
+2. The [**Parameters**](/docs/desktop/desktop/model-parameters) are the adjustable settings that affect how your model operates or
 processes the data. The fields in the parameters array are typically general and can be used across different
 models. Here is an example of model parameters:
@@ -186,7 +186,7 @@ models. Here is an example of model parameters:
 <Callout type="info">
 When using cloud models, be aware of any associated costs and rate limits from the providers. See detailed guide for
-each cloud model provider [here](/docs/remote-models/anthropic).
+each cloud model provider [here](/docs/desktop/desktop/remote-models/anthropic).
 </Callout>
 Jan supports connecting to various AI cloud providers that are OpenAI API-compatible, including: OpenAI (GPT-4o, o3,...),


@@ -100,7 +100,7 @@ making your workflows more modular and adaptable over time.
 <Callout type="info">
 To use MCP effectively, ensure your AI model supports tool calling capabilities:
 - For cloud models (like Claude or GPT-4): Verify tool calling is enabled in your API settings
-- For local models: Enable tool calling in the model parameters [click the edit button in Model Capabilities](/docs/model-parameters#model-capabilities-edit-button)
+- For local models: Enable tool calling in the model parameters [click the edit button in Model Capabilities](/docs/desktop/desktop/model-parameters#model-capabilities-edit-button)
 - Check the model's documentation to confirm MCP compatibility
 </Callout>


@@ -26,7 +26,7 @@ import { Callout } from 'nextra/components'
 Jan is your AI. Period. Here's what we do with data.
 <Callout>
-Full privacy policy lives [here](/docs/privacy-policy), if you're into that sort of thing.
+Full privacy policy lives [here](/docs/desktop/desktop/privacy-policy), if you're into that sort of thing.
 </Callout>
 <Callout type="info">


@@ -27,7 +27,7 @@ Get up and running with Jan in minutes. This guide will help you install Jan, do
 ### Step 1: Install Jan
 1. [Download Jan](/download)
-2. Install the app ([Mac](/docs/desktop/mac), [Windows](/docs/desktop/windows), [Linux](/docs/desktop/linux))
+2. Install the app ([Mac](/docs/desktop/desktop/install/mac), [Windows](/docs/desktop/desktop/install/windows), [Linux](/docs/desktop/desktop/install/linux))
 3. Launch Jan
 ### Step 2: Download Jan v1
@@ -61,7 +61,7 @@ Try asking Jan v1 questions like:
 - "What are the pros and cons of electric vehicles?"
 <Callout type="tip">
-**Want to give Jan v1 access to current web information?** Check out our [Serper MCP tutorial](/docs/mcp-examples/search/serper) to enable real-time web search with 2,500 free searches!
+**Want to give Jan v1 access to current web information?** Check out our [Serper MCP tutorial](/docs/desktop/desktop/mcp-examples/search/serper) to enable real-time web search with 2,500 free searches!
 </Callout>
 </Steps>
@@ -138,4 +138,4 @@ Connect to OpenAI, Anthropic, Groq, Mistral, and others:
 ![Connect Remote APIs](./_assets/quick-start-03.png)
-For detailed setup, see [Remote APIs](/docs/remote-models/openai).
+For detailed setup, see [Remote APIs](/docs/desktop/desktop/remote-models/openai).


@@ -56,7 +56,7 @@ Ensure your API key has sufficient credits
 ## Available Anthropic Models
 Jan automatically includes Anthropic's available models. In case you want to use a specific Anthropic model
-that you cannot find in **Jan**, follow instructions in [Add Cloud Models](/docs/manage-models#add-models-1):
+that you cannot find in **Jan**, follow instructions in [Add Cloud Models](/docs/desktop/manage-models#add-models-1):
 - See list of available models in [Anthropic Models](https://docs.anthropic.com/claude/docs/models-overview).
 - The `id` property must match the model name in the list. For example, `claude-opus-4@20250514`, `claude-sonnet-4@20250514`, or `claude-3-5-haiku@20241022`.
@@ -72,7 +72,7 @@ Common issues and solutions:
 **2. Connection Problems**
 - Check your internet connection
 - Verify Anthropic's system status
-- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs)
+- Look for error messages in [Jan's logs](/docs/desktop/troubleshooting#how-to-get-error-logs)
 **3. Model Unavailable**
 - Confirm your API key has access to the model


@@ -55,7 +55,7 @@ Ensure your API key has sufficient credits.
 ## Available Cohere Models
 Jan automatically includes Cohere's available models. In case you want to use a specific
-Cohere model that you cannot find in **Jan**, follow instructions in [Add Cloud Models](/docs/manage-models):
+Cohere model that you cannot find in **Jan**, follow instructions in [Add Cloud Models](/docs/desktop/manage-models):
 - See list of available models in [Cohere Documentation](https://docs.cohere.com/v2/docs/models).
 - The `id` property must match the model name in the list. For example, `command-nightly` or `command-light`.
@@ -71,7 +71,7 @@ Common issues and solutions:
 **2. Connection Problems**
 - Check your internet connection
 - Verify Cohere's [system status](https://status.cohere.com/)
-- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs)
+- Look for error messages in [Jan's logs](/docs/desktop/troubleshooting#how-to-get-error-logs)
 **3. Model Unavailable**
 - Confirm your API key has access to the model


@@ -53,7 +53,7 @@ Ensure your API key has sufficient credits
 ## Available Google Models
 Jan automatically includes Google's available models like Gemini series. In case you want to use a specific
-Gemini model that you cannot find in **Jan**, follow instructions in [Add Cloud Models](/docs/manage-models#add-models-1):
+Gemini model that you cannot find in **Jan**, follow instructions in [Add Cloud Models](/docs/desktop/manage-models#add-models-1):
 - See list of available models in [Google Models](https://ai.google.dev/gemini-api/docs/models/gemini).
 - The `id` property must match the model name in the list. For example, `gemini-1.5-pro` or `gemini-2.0-flash-lite-preview`.
@@ -69,7 +69,7 @@ Common issues and solutions:
 **2. Connection Problems**
 - Check your internet connection
 - Verify [Gemini's system status](https://www.google.com/appsstatus/dashboard/)
-- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs)
+- Look for error messages in [Jan's logs](/docs/desktop/troubleshooting#how-to-get-error-logs)
 **3. Model Unavailable**
 - Confirm your API key has access to the model


@@ -54,7 +54,7 @@ Ensure your API key has sufficient credits
 ## Available Models Through Groq
 Jan automatically includes Groq's available models. In case you want to use a specific Groq model that
-you cannot find in **Jan**, follow the instructions in the [Add Cloud Models](/docs/manage-models#add-models-1):
+you cannot find in **Jan**, follow the instructions in the [Add Cloud Models](/docs/desktop/manage-models#add-models-1):
 - See list of available models in [Groq Documentation](https://console.groq.com/docs/models).
 - The `id` property must match the model name in the list. For example, if you want to use Llama3.3 70B, you must set the `id` property to `llama-3.3-70b-versatile`.
@@ -70,7 +70,7 @@ Common issues and solutions:
 **2. Connection Problems**
 - Check your internet connection
 - Verify Groq's system status
-- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs)
+- Look for error messages in [Jan's logs](/docs/desktop/troubleshooting#how-to-get-error-logs)
 **3. Model Unavailable**
 - Confirm your API key has access to the model


@@ -141,7 +141,7 @@ Common issues and solutions:
 **2. Connection Problems**
 - Check your internet connection
 - Verify Hugging Face's system status
-- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs)
+- Look for error messages in [Jan's logs](/docs/desktop/troubleshooting#how-to-get-error-logs)
 **3. Model Unavailable**
 - Confirm your API key has access to the model


@@ -56,7 +56,7 @@ Ensure your API key has sufficient credits
 ## Available Mistral Models
 Jan automatically includes Mistral's available models. In case you want to use a specific Mistral model
-that you cannot find in **Jan**, follow the instructions in [Add Cloud Models](/docs/manage-models#add-models-1):
+that you cannot find in **Jan**, follow the instructions in [Add Cloud Models](/docs/desktop/manage-models#add-models-1):
 - See list of available models in [Mistral AI Documentation](https://docs.mistral.ai/platform/endpoints).
 - The `id` property must match the model name in the list. For example, if you want to use
 Mistral Large, you must set the `id` property to `mistral-large-latest`
@@ -73,7 +73,7 @@ Common issues and solutions:
 **2. Connection Problems**
 - Check your internet connection
 - Verify Mistral AI's system status
-- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs)
+- Look for error messages in [Jan's logs](/docs/desktop/troubleshooting#how-to-get-error-logs)
 **3. Model Unavailable**
 - Confirm your API key has access to the model


@@ -58,7 +58,7 @@ Start chatting
 ## Available OpenAI Models
 Jan automatically includes popular OpenAI models. In case you want to use a specific model that you
-cannot find in Jan, follow instructions in [Add Cloud Models](/docs/manage-models#add-models-1):
+cannot find in Jan, follow instructions in [Add Cloud Models](/docs/desktop/manage-models#add-models-1):
 - See list of available models in [OpenAI Platform](https://platform.openai.com/docs/models/overview).
 - The id property must match the model name in the list. For example, if you want to use the
 [GPT-4.5](https://platform.openai.com/docs/models/), you must set the id property
@@ -76,7 +76,7 @@ Common issues and solutions:
 2. Connection Problems
 - Check your internet connection
 - Verify OpenAI's [system status](https://status.openai.com)
-- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs)
+- Look for error messages in [Jan's logs](/docs/desktop/troubleshooting#how-to-get-error-logs)
 3. Model Unavailable
 - Confirm your API key has access to the model


@@ -88,7 +88,7 @@ Common issues and solutions:
 **2. Connection Problems**
 - Check your internet connection
 - Verify OpenRouter's [system status](https://status.openrouter.ai)
-- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs)
+- Look for error messages in [Jan's logs](/docs/desktop/troubleshooting#how-to-get-error-logs)
 **3. Model Unavailable**
 - Confirm the model is currently available on OpenRouter


@@ -69,7 +69,7 @@ Click the gear icon next to any model to adjust how it behaves:
 - **Presence Penalty**: Encourages the model to use varied vocabulary
 <Callout type="info">
-For detailed explanations of these parameters, see our [Model Parameters Guide](/docs/model-parameters).
+For detailed explanations of these parameters, see our [Model Parameters Guide](/docs/desktop/desktop/model-parameters).
 </Callout>
 ## Hardware Monitoring
@@ -117,7 +117,7 @@ Access privacy settings at **Settings** > **Privacy**:
 - Change this setting anytime
 <Callout type="info">
-See exactly what we collect (with your permission) in our [Privacy Policy](/docs/privacy).
+See exactly what we collect (with your permission) in our [Privacy Policy](/docs/desktop/desktop/privacy).
 </Callout>
 ![Analytics](./_assets/settings-07.png)
@@ -174,7 +174,7 @@ This includes configuration for:
 - CORS (Cross-Origin Resource Sharing)
 - Verbose Logging
-[**Go to Local API Server Settings &rarr;**](/docs/api-server)
+[**Go to Local API Server Settings &rarr;**](/docs/desktop/desktop/api-server)


@@ -226,7 +226,7 @@ When models won't respond or show these errors:
 - **RAM:** Use models under 80% of available memory
 - 8GB system: Use models under 6GB
 - 16GB system: Use models under 13GB
-- **Hardware:** Verify your system meets [minimum requirements](/docs/troubleshooting#step-1-verify-hardware-and-system-requirements)
+- **Hardware:** Verify your system meets [minimum requirements](/docs/desktop/desktop/troubleshooting#step-1-verify-hardware-and-system-requirements)
 **2. Adjust Model Settings**
 - Open model settings in the chat sidebar
@@ -318,9 +318,9 @@ If these solutions don't work:
 - Include your logs and system info
 **3. Check Resources:**
-- [System requirements](/docs/troubleshooting#step-1-verify-hardware-and-system-requirements)
-- [Model compatibility guides](/docs/manage-models)
-- [Hardware setup guides](/docs/desktop/)
+- [System requirements](/docs/desktop/desktop/troubleshooting#step-1-verify-hardware-and-system-requirements)
+- [Model compatibility guides](/docs/desktop/desktop/manage-models)
+- [Hardware setup guides](/docs/desktop/desktop/)
 <Callout type="info">
 When sharing logs, remove personal information first. We only keep logs for 24 hours, so report issues promptly.


@@ -68,7 +68,7 @@ Click the gear icon next to a model to configure advanced settings:
 - **Repeat Penalty**: Controls how strongly the model avoids repeating phrases (higher values reduce repetition)
 - **Presence Penalty**: Discourages reusing words that already appeared in the text (helps with variety)
-_See [Model Parameters](/docs/model-parameters) for a more detailed explanation._
+_See [Model Parameters](/docs/desktop/desktop/model-parameters) for a more detailed explanation._
 ## Hardware
@@ -108,7 +108,7 @@ You can help improve Jan by sharing anonymous usage data:
 2. You can change this setting at any time
 <Callout type="info">
-Read more about that we collect with opt-in users at [Privacy](/docs/privacy).
+Read more about that we collect with opt-in users at [Privacy](/docs/desktop/desktop/privacy).
 </Callout>
 <br/>


@@ -328,19 +328,19 @@ This command ensures that the necessary permissions are granted for Jan's instal
 When you start a chat with a model and encounter a **Failed to Fetch** or **Something's Amiss** error, here are some possible solutions to resolve it:
 **1. Check System & Hardware Requirements**
-- Hardware dependencies: Ensure your device meets all [hardware requirements](docs/troubleshooting#step-1-verify-hardware-and-system-requirements)
-- OS: Ensure your operating system meets the minimum requirements ([Mac](/docs/desktop/mac#minimum-requirements), [Windows](/docs/desktop/windows#compatibility), [Linux](docs/desktop/linux#compatibility))
+- Hardware dependencies: Ensure your device meets all [hardware requirements](docs/desktop/troubleshooting#step-1-verify-hardware-and-system-requirements)
+- OS: Ensure your operating system meets the minimum requirements ([Mac](/docs/desktop/desktop/install/mac#minimum-requirements), [Windows](/docs/desktop/desktop/install/windows#compatibility), [Linux](/docs/desktop/desktop/install/linux#compatibility))
 - RAM: Choose models that use less than 80% of your available RAM
 - For 8GB systems: Use models under 6GB
 - For 16GB systems: Use models under 13GB
 **2. Check Model Parameters**
-- In **Engine Settings** in right sidebar, check your `ngl` ([number of GPU layers](/docs/models/model-parameters#engine-parameters)) setting to see if it's too high
+- In **Engine Settings** in right sidebar, check your `ngl` ([number of GPU layers](/docs/desktop/desktop/models/model-parameters#engine-parameters)) setting to see if it's too high
 - Start with a lower NGL value and increase gradually based on your GPU memory
 **3. Port Conflicts**
-If you check your [app logs](/docs/troubleshooting#how-to-get-error-logs) & see "Bind address failed at 127.0.0.1:39291", check port availability:
+If you check your [app logs](/docs/desktop/desktop/troubleshooting#how-to-get-error-logs) & see "Bind address failed at 127.0.0.1:39291", check port availability:
 ```
 # Mac
 netstat -an | grep 39291
@@ -371,7 +371,7 @@ This will delete all chat history, models, and settings.
 </Callout>
 **5. Try a clean installation**
-- Uninstall Jan & clean Jan data folders ([Mac](/docs/desktop/mac#uninstall-jan), [Windows](/docs/desktop/windows#uninstall-jan), [Linux](docs/desktop/linux#uninstall-jan))
+- Uninstall Jan & clean Jan data folders ([Mac](/docs/desktop/desktop/install/mac#uninstall-jan), [Windows](/docs/desktop/desktop/install/windows#uninstall-jan), [Linux](/docs/desktop/desktop/install/linux#uninstall-jan))
 - Install the latest [stable release](/download)
 <Callout type="warning">
@@ -392,7 +392,7 @@ The "Unexpected token" error usually relates to OpenAI API authentication or reg
 ## Need Further Support?
 If you can't find what you need in our troubleshooting guide, feel free reach out to us for extra help:
-- **Copy** your [app logs](/docs/troubleshooting#how-to-get-error-logs)
+- **Copy** your [app logs](/docs/desktop/desktop/troubleshooting#how-to-get-error-logs)
 - Go to our [Discord](https://discord.com/invite/FTk2MvZwJH) & send it to **#🆘|jan-help** channel for further support.


@@ -17,7 +17,7 @@ Jan now supports [NVIDIA TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) i
 We've been excited for TensorRT-LLM for a while, and [had a lot of fun implementing it](https://github.com/menloresearch/nitro-tensorrt-llm). As part of the process, we've run some benchmarks, to see how TensorRT-LLM fares on consumer hardware (e.g. [4090s](https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/), [3090s](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/)) we commonly see in the [Jan's hardware community](https://discord.com/channels/1107178041848909847/1201834752206974996).
 <Callout type="info" >
-**Give it a try!** Jan's [TensorRT-LLM extension](/docs/built-in/tensorrt-llm) is available in Jan v0.4.9 and up ([see more](/docs/built-in/tensorrt-llm)). We precompiled some TensorRT-LLM models for you to try: `Mistral 7b`, `TinyLlama-1.1b`, `TinyJensen-1.1b` 😂
+**Give it a try!** Jan's [TensorRT-LLM extension](/docs/desktop/built-in/tensorrt-llm) is available in Jan v0.4.9 and up ([see more](/docs/desktop/built-in/tensorrt-llm)). We precompiled some TensorRT-LLM models for you to try: `Mistral 7b`, `TinyLlama-1.1b`, `TinyJensen-1.1b` 😂
 Bugs or feedback? Let us know on [GitHub](https://github.com/menloresearch/jan) or via [Discord](https://discord.com/channels/1107178041848909847/1201832734704795688).
 </Callout>


@@ -126,7 +126,7 @@ any version with Model Context Protocol in it (>`v0.6.3`).
 **The Key: Assistants + Tools**
 Running deep research in Jan can be accomplished by combining [custom assistants](https://jan.ai/docs/assistants)
-with [MCP search tools](https://jan.ai/docs/mcp-examples/search/exa). This pairing allows any model—local or
+with [MCP search tools](https://jan.ai/docs/desktop/mcp-examples/search/exa). This pairing allows any model—local or
 cloud—to follow a systematic research workflow, to create a report similar to that of other providers, with some
 visible limitations (for now).
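A regression like the one this commit fixes is easy to lint for after the fact. A minimal sketch of such a check; the `/docs/desktop` prefix rule, the `.mdx` file layout, and the `stale_doc_links` helper name are assumptions for illustration, not part of this commit:

```python
import re
from pathlib import Path

# Internal markdown links of the form "](/docs/..." up to a ")" or "#".
LINK = re.compile(r"\]\((/docs/[^)#]*)")

def stale_doc_links(root: Path) -> list[str]:
    """Return internal /docs/... links that lack the new /docs/desktop prefix."""
    stale = []
    for page in sorted(root.rglob("*.mdx")):
        for match in LINK.finditer(page.read_text(encoding="utf-8")):
            path = match.group(1)
            if not path.startswith("/docs/desktop"):
                stale.append(f"{page.name}: {path}")
    return stale
```

Run over the docs tree before publishing; a non-empty result means a link escaped the rewrite.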