diff --git a/docs/src/pages/docs/desktop/data-folder.mdx b/docs/src/pages/docs/desktop/data-folder.mdx index 4c582c801..9db2402f4 100644 --- a/docs/src/pages/docs/desktop/data-folder.mdx +++ b/docs/src/pages/docs/desktop/data-folder.mdx @@ -155,7 +155,7 @@ Debugging headquarters (`/logs/app.txt`): The silicon brain collection. Each model has its own `model.json`. -Full parameters: [here](/docs/model-parameters) +Full parameters: [here](/docs/desktop/model-parameters) ### `threads/` @@ -216,5 +216,5 @@ Chat archive. Each thread (`/threads/jan_unixstamp/`) contains: ## Delete Jan Data -Uninstall guides: [Mac](/docs/desktop/mac#step-2-clean-up-data-optional), -[Windows](/docs/desktop/windows#step-2-handle-jan-data), or [Linux](docs/desktop/linux#uninstall-jan). +Uninstall guides: [Mac](/docs/desktop/install/mac#step-2-clean-up-data-optional), +[Windows](/docs/desktop/install/windows#step-2-handle-jan-data), or [Linux](/docs/desktop/install/linux#uninstall-jan). diff --git a/docs/src/pages/docs/desktop/index.mdx b/docs/src/pages/docs/desktop/index.mdx index 5e37e76b3..c46ddfeda 100644 --- a/docs/src/pages/docs/desktop/index.mdx +++ b/docs/src/pages/docs/desktop/index.mdx @@ -184,9 +184,9 @@ Jan is built on the shoulders of giants: **Supported OS**: - - [Windows 10+](/docs/desktop/windows#compatibility) - - [macOS 12+](/docs/desktop/mac#compatibility) - - [Linux (Ubuntu 20.04+)](/docs/desktop/linux) + - [Windows 10+](/docs/desktop/install/windows#compatibility) + - [macOS 12+](/docs/desktop/install/mac#compatibility) + - [Linux (Ubuntu 20.04+)](/docs/desktop/install/linux) **Hardware**: - Minimum: 8GB RAM, 10GB storage @@ -216,7 +216,7 @@ Jan is built on the shoulders of giants: - Runs 100% offline once models are downloaded - - All data stored locally in [Jan Data Folder](/docs/data-folder) + - All data stored locally in [Jan Data Folder](/docs/desktop/data-folder) - No telemetry without explicit consent - Open 
source code you can audit diff --git a/docs/src/pages/docs/desktop/install/linux.mdx b/docs/src/pages/docs/desktop/install/linux.mdx index 7eacd67ce..2d42a59f1 100644 --- a/docs/src/pages/docs/desktop/install/linux.mdx +++ b/docs/src/pages/docs/desktop/install/linux.mdx @@ -193,7 +193,7 @@ $XDG_CONFIG_HOME = /home/username/custom_config ~/.config/Jan/data ``` -See [Jan Data Folder](/docs/data-folder) for details. +See [Jan Data Folder](/docs/desktop/data-folder) for details. ## GPU Acceleration @@ -244,7 +244,7 @@ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64 ### Step 2: Enable GPU Acceleration 1. Navigate to **Settings** () > **Local Engine** > **Llama.cpp** -2. Select appropriate backend in **llama-cpp Backend**. Details in our [guide](/docs/local-engines/llama-cpp). +2. Select appropriate backend in **llama-cpp Backend**. Details in our [guide](/docs/desktop/local-engines/llama-cpp). CUDA offers better performance than Vulkan. @@ -258,7 +258,7 @@ CUDA offers better performance than Vulkan. Requires Vulkan support. 1. Navigate to **Settings** () > **Hardware** > **GPUs** -2. Select appropriate backend in **llama-cpp Backend**. Details in our [guide](/docs/local-engines/llama-cpp). +2. Select appropriate backend in **llama-cpp Backend**. Details in our [guide](/docs/desktop/local-engines/llama-cpp). @@ -266,7 +266,7 @@ Requires Vulkan support. Requires Vulkan support. 1. Navigate to **Settings** () > **Hardware** > **GPUs** -2. Select appropriate backend in **llama-cpp Backend**. Details in our [guide](/docs/local-engines/llama-cpp). +2. Select appropriate backend in **llama-cpp Backend**. Details in our [guide](/docs/desktop/local-engines/llama-cpp). 
diff --git a/docs/src/pages/docs/desktop/install/mac.mdx b/docs/src/pages/docs/desktop/install/mac.mdx index d62c67878..827329d6e 100644 --- a/docs/src/pages/docs/desktop/install/mac.mdx +++ b/docs/src/pages/docs/desktop/install/mac.mdx @@ -111,7 +111,7 @@ Default location: # Default installation directory ~/Library/Application\ Support/Jan/data ``` -See [Jan Data Folder](/docs/data-folder) for details. +See [Jan Data Folder](/docs/desktop/data-folder) for details. ## Uninstall Jan @@ -158,7 +158,7 @@ No, it cannot be restored once you delete the Jan data folder during uninstallat -💡 Warning: If you have any trouble during installation, please see our [Troubleshooting](/docs/troubleshooting) +💡 Warning: If you have any trouble during installation, please see our [Troubleshooting](/docs/desktop/troubleshooting) guide to resolve your problem. diff --git a/docs/src/pages/docs/desktop/install/windows.mdx b/docs/src/pages/docs/desktop/install/windows.mdx index 7cda7c8a3..2c56e2319 100644 --- a/docs/src/pages/docs/desktop/install/windows.mdx +++ b/docs/src/pages/docs/desktop/install/windows.mdx @@ -119,7 +119,7 @@ Default installation path: ~\Users\\AppData\Roaming\Jan\data ``` -See [Jan Data Folder](/docs/data-folder) for complete folder structure details. +See [Jan Data Folder](/docs/desktop/data-folder) for complete folder structure details. ## GPU Acceleration diff --git a/docs/src/pages/docs/desktop/llama-cpp-server.mdx b/docs/src/pages/docs/desktop/llama-cpp-server.mdx index 3a3d24c46..54efcdd20 100644 --- a/docs/src/pages/docs/desktop/llama-cpp-server.mdx +++ b/docs/src/pages/docs/desktop/llama-cpp-server.mdx @@ -24,7 +24,7 @@ import { Settings } from 'lucide-react' `llama.cpp` is the core **inference engine** Jan uses to run AI models locally on your computer. This section covers the settings for the engine itself, which control *how* a model processes information on your hardware. -Looking for API server settings (like port, host, CORS)? 
They have been moved to the dedicated [**Local API Server**](/docs/api-server) page. +Looking for API server settings (like port, host, CORS)? They have been moved to the dedicated [**Local API Server**](/docs/desktop/api-server) page. ## Accessing Engine Settings diff --git a/docs/src/pages/docs/desktop/manage-models.mdx b/docs/src/pages/docs/desktop/manage-models.mdx index 645c36fe7..08781f47f 100644 --- a/docs/src/pages/docs/desktop/manage-models.mdx +++ b/docs/src/pages/docs/desktop/manage-models.mdx @@ -30,9 +30,9 @@ This guide shows you how to add, customize, and delete models within Jan. Local models are managed through [Llama.cpp](https://github.com/ggerganov/llama.cpp), and these models are in a format called GGUF. When you run them locally, they will use your computer's memory (RAM) and processing power, so please make sure that you download models that match the hardware specifications for your operating system: -- [Mac](/docs/desktop/mac#compatibility) -- [Windows](/docs/desktop/windows#compatibility) -- [Linux](/docs/desktop/linux#compatibility). +- [Mac](/docs/desktop/install/mac#compatibility) +- [Windows](/docs/desktop/install/windows#compatibility) +- [Linux](/docs/desktop/install/linux#compatibility). ### Adding Models @@ -156,7 +156,7 @@ For advanced users who want to add a specific model that is not available within Key fields to configure: 1. The **Settings** array is where you can set the path or location of your model in your computer, the context length allowed, and the chat template expected by your model. -2. The [**Parameters**](/docs/model-parameters) are the adjustable settings that affect how your model operates or +2. The [**Parameters**](/docs/desktop/model-parameters) are the adjustable settings that affect how your model operates or processes the data. The fields in the parameters array are typically general and can be used across different models. 
Here is an example of model parameters: @@ -186,7 +186,7 @@ models. Here is an example of model parameters: When using cloud models, be aware of any associated costs and rate limits from the providers. See detailed guide for -each cloud model provider [here](/docs/remote-models/anthropic). +each cloud model provider [here](/docs/desktop/remote-models/anthropic). Jan supports connecting to various AI cloud providers that are OpenAI API-compatible, including: OpenAI (GPT-4o, o3,...), diff --git a/docs/src/pages/docs/desktop/mcp.mdx b/docs/src/pages/docs/desktop/mcp.mdx index 03eaa0556..0c3fcfa1f 100644 --- a/docs/src/pages/docs/desktop/mcp.mdx +++ b/docs/src/pages/docs/desktop/mcp.mdx @@ -100,7 +100,7 @@ making your workflows more modular and adaptable over time. To use MCP effectively, ensure your AI model supports tool calling capabilities: - For cloud models (like Claude or GPT-4): Verify tool calling is enabled in your API settings - - For local models: Enable tool calling in the model parameters [click the edit button in Model Capabilities](/docs/model-parameters#model-capabilities-edit-button) + - For local models: Enable tool calling in the model parameters [click the edit button in Model Capabilities](/docs/desktop/model-parameters#model-capabilities-edit-button) - Check the model's documentation to confirm MCP compatibility diff --git a/docs/src/pages/docs/desktop/privacy.mdx b/docs/src/pages/docs/desktop/privacy.mdx index 4fd0a1830..5f9d0d3dd 100644 --- a/docs/src/pages/docs/desktop/privacy.mdx +++ b/docs/src/pages/docs/desktop/privacy.mdx @@ -26,7 +26,7 @@ import { Callout } from 'nextra/components' Jan is your AI. Period. Here's what we do with data. -Full privacy policy lives [here](/docs/privacy-policy), if you're into that sort of thing. +Full privacy policy lives [here](/docs/desktop/privacy-policy), if you're into that sort of thing. 
diff --git a/docs/src/pages/docs/desktop/quickstart.mdx b/docs/src/pages/docs/desktop/quickstart.mdx index b9a923b57..9999f1644 100644 --- a/docs/src/pages/docs/desktop/quickstart.mdx +++ b/docs/src/pages/docs/desktop/quickstart.mdx @@ -27,7 +27,7 @@ Get up and running with Jan in minutes. This guide will help you install Jan, do ### Step 1: Install Jan 1. [Download Jan](/download) -2. Install the app ([Mac](/docs/desktop/mac), [Windows](/docs/desktop/windows), [Linux](/docs/desktop/linux)) +2. Install the app ([Mac](/docs/desktop/install/mac), [Windows](/docs/desktop/install/windows), [Linux](/docs/desktop/install/linux)) 3. Launch Jan ### Step 2: Download Jan v1 @@ -61,7 +61,7 @@ Try asking Jan v1 questions like: - "What are the pros and cons of electric vehicles?" -**Want to give Jan v1 access to current web information?** Check out our [Serper MCP tutorial](/docs/mcp-examples/search/serper) to enable real-time web search with 2,500 free searches! +**Want to give Jan v1 access to current web information?** Check out our [Serper MCP tutorial](/docs/desktop/mcp-examples/search/serper) to enable real-time web search with 2,500 free searches! @@ -138,4 +138,4 @@ Connect to OpenAI, Anthropic, Groq, Mistral, and others: ![Connect Remote APIs](./_assets/quick-start-03.png) -For detailed setup, see [Remote APIs](/docs/remote-models/openai). +For detailed setup, see [Remote APIs](/docs/desktop/remote-models/openai). diff --git a/docs/src/pages/docs/desktop/remote-models/anthropic.mdx b/docs/src/pages/docs/desktop/remote-models/anthropic.mdx index 6662ecbb1..09418aad0 100644 --- a/docs/src/pages/docs/desktop/remote-models/anthropic.mdx +++ b/docs/src/pages/docs/desktop/remote-models/anthropic.mdx @@ -56,7 +56,7 @@ Ensure your API key has sufficient credits ## Available Anthropic Models Jan automatically includes Anthropic's available models. 
In case you want to use a specific Anthropic model -that you cannot find in **Jan**, follow instructions in [Add Cloud Models](/docs/manage-models#add-models-1): +that you cannot find in **Jan**, follow instructions in [Add Cloud Models](/docs/desktop/manage-models#add-models-1): - See list of available models in [Anthropic Models](https://docs.anthropic.com/claude/docs/models-overview). - The `id` property must match the model name in the list. For example, `claude-opus-4@20250514`, `claude-sonnet-4@20250514`, or `claude-3-5-haiku@20241022`. @@ -72,7 +72,7 @@ Common issues and solutions: **2. Connection Problems** - Check your internet connection - Verify Anthropic's system status -- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs) +- Look for error messages in [Jan's logs](/docs/desktop/troubleshooting#how-to-get-error-logs) **3. Model Unavailable** - Confirm your API key has access to the model diff --git a/docs/src/pages/docs/desktop/remote-models/cohere.mdx b/docs/src/pages/docs/desktop/remote-models/cohere.mdx index af9098480..05d2a4c74 100644 --- a/docs/src/pages/docs/desktop/remote-models/cohere.mdx +++ b/docs/src/pages/docs/desktop/remote-models/cohere.mdx @@ -55,7 +55,7 @@ Ensure your API key has sufficient credits. ## Available Cohere Models Jan automatically includes Cohere's available models. In case you want to use a specific -Cohere model that you cannot find in **Jan**, follow instructions in [Add Cloud Models](/docs/manage-models): +Cohere model that you cannot find in **Jan**, follow instructions in [Add Cloud Models](/docs/desktop/manage-models): - See list of available models in [Cohere Documentation](https://docs.cohere.com/v2/docs/models). - The `id` property must match the model name in the list. For example, `command-nightly` or `command-light`. @@ -71,7 +71,7 @@ Common issues and solutions: **2. 
Connection Problems** - Check your internet connection - Verify Cohere's [system status](https://status.cohere.com/) -- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs) +- Look for error messages in [Jan's logs](/docs/desktop/troubleshooting#how-to-get-error-logs) **3. Model Unavailable** - Confirm your API key has access to the model diff --git a/docs/src/pages/docs/desktop/remote-models/google.mdx b/docs/src/pages/docs/desktop/remote-models/google.mdx index d29f1290b..3984e429a 100644 --- a/docs/src/pages/docs/desktop/remote-models/google.mdx +++ b/docs/src/pages/docs/desktop/remote-models/google.mdx @@ -53,7 +53,7 @@ Ensure your API key has sufficient credits ## Available Google Models Jan automatically includes Google's available models like Gemini series. In case you want to use a specific -Gemini model that you cannot find in **Jan**, follow instructions in [Add Cloud Models](/docs/manage-models#add-models-1): +Gemini model that you cannot find in **Jan**, follow instructions in [Add Cloud Models](/docs/desktop/manage-models#add-models-1): - See list of available models in [Google Models](https://ai.google.dev/gemini-api/docs/models/gemini). - The `id` property must match the model name in the list. For example, `gemini-1.5-pro` or `gemini-2.0-flash-lite-preview`. @@ -69,7 +69,7 @@ Common issues and solutions: **2. Connection Problems** - Check your internet connection - Verify [Gemini's system status](https://www.google.com/appsstatus/dashboard/) -- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs) +- Look for error messages in [Jan's logs](/docs/desktop/troubleshooting#how-to-get-error-logs) **3. 
Model Unavailable** - Confirm your API key has access to the model diff --git a/docs/src/pages/docs/desktop/remote-models/groq.mdx b/docs/src/pages/docs/desktop/remote-models/groq.mdx index 7db6a97b2..95feb1d6e 100644 --- a/docs/src/pages/docs/desktop/remote-models/groq.mdx +++ b/docs/src/pages/docs/desktop/remote-models/groq.mdx @@ -54,7 +54,7 @@ Ensure your API key has sufficient credits ## Available Models Through Groq Jan automatically includes Groq's available models. In case you want to use a specific Groq model that -you cannot find in **Jan**, follow the instructions in the [Add Cloud Models](/docs/manage-models#add-models-1): +you cannot find in **Jan**, follow the instructions in the [Add Cloud Models](/docs/desktop/manage-models#add-models-1): - See list of available models in [Groq Documentation](https://console.groq.com/docs/models). - The `id` property must match the model name in the list. For example, if you want to use Llama3.3 70B, you must set the `id` property to `llama-3.3-70b-versatile`. @@ -70,7 +70,7 @@ Common issues and solutions: **2. Connection Problems** - Check your internet connection - Verify Groq's system status -- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs) +- Look for error messages in [Jan's logs](/docs/desktop/troubleshooting#how-to-get-error-logs) **3. Model Unavailable** - Confirm your API key has access to the model diff --git a/docs/src/pages/docs/desktop/remote-models/huggingface.mdx b/docs/src/pages/docs/desktop/remote-models/huggingface.mdx index 07f2103d2..4a7891586 100644 --- a/docs/src/pages/docs/desktop/remote-models/huggingface.mdx +++ b/docs/src/pages/docs/desktop/remote-models/huggingface.mdx @@ -141,7 +141,7 @@ Common issues and solutions: **2. 
Connection Problems** - Check your internet connection - Verify Hugging Face's system status -- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs) +- Look for error messages in [Jan's logs](/docs/desktop/troubleshooting#how-to-get-error-logs) **3. Model Unavailable** - Confirm your API key has access to the model diff --git a/docs/src/pages/docs/desktop/remote-models/mistralai.mdx b/docs/src/pages/docs/desktop/remote-models/mistralai.mdx index ea403a701..52271cd68 100644 --- a/docs/src/pages/docs/desktop/remote-models/mistralai.mdx +++ b/docs/src/pages/docs/desktop/remote-models/mistralai.mdx @@ -56,7 +56,7 @@ Ensure your API key has sufficient credits ## Available Mistral Models Jan automatically includes Mistral's available models. In case you want to use a specific Mistral model -that you cannot find in **Jan**, follow the instructions in [Add Cloud Models](/docs/manage-models#add-models-1): +that you cannot find in **Jan**, follow the instructions in [Add Cloud Models](/docs/desktop/manage-models#add-models-1): - See list of available models in [Mistral AI Documentation](https://docs.mistral.ai/platform/endpoints). - The `id` property must match the model name in the list. For example, if you want to use Mistral Large, you must set the `id` property to `mistral-large-latest` @@ -73,7 +73,7 @@ Common issues and solutions: **2. Connection Problems** - Check your internet connection - Verify Mistral AI's system status -- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs) +- Look for error messages in [Jan's logs](/docs/desktop/troubleshooting#how-to-get-error-logs) **3. 
Model Unavailable** - Confirm your API key has access to the model diff --git a/docs/src/pages/docs/desktop/remote-models/openai.mdx b/docs/src/pages/docs/desktop/remote-models/openai.mdx index 92be21a29..6c78f31c3 100644 --- a/docs/src/pages/docs/desktop/remote-models/openai.mdx +++ b/docs/src/pages/docs/desktop/remote-models/openai.mdx @@ -58,7 +58,7 @@ Start chatting ## Available OpenAI Models Jan automatically includes popular OpenAI models. In case you want to use a specific model that you -cannot find in Jan, follow instructions in [Add Cloud Models](/docs/manage-models#add-models-1): +cannot find in Jan, follow instructions in [Add Cloud Models](/docs/desktop/manage-models#add-models-1): - See list of available models in [OpenAI Platform](https://platform.openai.com/docs/models/overview). - The id property must match the model name in the list. For example, if you want to use the [GPT-4.5](https://platform.openai.com/docs/models/), you must set the id property @@ -76,7 +76,7 @@ Common issues and solutions: 2. Connection Problems - Check your internet connection - Verify OpenAI's [system status](https://status.openai.com) -- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs) +- Look for error messages in [Jan's logs](/docs/desktop/troubleshooting#how-to-get-error-logs) 3. Model Unavailable - Confirm your API key has access to the model diff --git a/docs/src/pages/docs/desktop/remote-models/openrouter.mdx b/docs/src/pages/docs/desktop/remote-models/openrouter.mdx index 186a504b9..0faf68dac 100644 --- a/docs/src/pages/docs/desktop/remote-models/openrouter.mdx +++ b/docs/src/pages/docs/desktop/remote-models/openrouter.mdx @@ -88,7 +88,7 @@ Common issues and solutions: **2. 
Connection Problems** - Check your internet connection - Verify OpenRouter's [system status](https://status.openrouter.ai) -- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs) +- Look for error messages in [Jan's logs](/docs/desktop/troubleshooting#how-to-get-error-logs) **3. Model Unavailable** - Confirm the model is currently available on OpenRouter diff --git a/docs/src/pages/docs/desktop/server-settings.mdx b/docs/src/pages/docs/desktop/server-settings.mdx index b352293e5..8e0a7bde9 100644 --- a/docs/src/pages/docs/desktop/server-settings.mdx +++ b/docs/src/pages/docs/desktop/server-settings.mdx @@ -69,7 +69,7 @@ Click the gear icon next to any model to adjust how it behaves: - **Presence Penalty**: Encourages the model to use varied vocabulary -For detailed explanations of these parameters, see our [Model Parameters Guide](/docs/model-parameters). +For detailed explanations of these parameters, see our [Model Parameters Guide](/docs/desktop/model-parameters). ## Hardware Monitoring @@ -117,7 +117,7 @@ Access privacy settings at **Settings** > **Privacy**: - Change this setting anytime -See exactly what we collect (with your permission) in our [Privacy Policy](/docs/privacy). +See exactly what we collect (with your permission) in our [Privacy Policy](/docs/desktop/privacy). 
![Analytics](./_assets/settings-07.png) @@ -174,7 +174,7 @@ This includes configuration for: - CORS (Cross-Origin Resource Sharing) - Verbose Logging -[**Go to Local API Server Settings →**](/docs/api-server) +[**Go to Local API Server Settings →**](/docs/desktop/api-server) ## Emergency Options diff --git a/docs/src/pages/docs/desktop/server-troubleshooting.mdx b/docs/src/pages/docs/desktop/server-troubleshooting.mdx index 2bd8f649a..4f5c1e983 100644 --- a/docs/src/pages/docs/desktop/server-troubleshooting.mdx +++ b/docs/src/pages/docs/desktop/server-troubleshooting.mdx @@ -226,7 +226,7 @@ When models won't respond or show these errors: - **RAM:** Use models under 80% of available memory - 8GB system: Use models under 6GB - 16GB system: Use models under 13GB -- **Hardware:** Verify your system meets [minimum requirements](/docs/troubleshooting#step-1-verify-hardware-and-system-requirements) +- **Hardware:** Verify your system meets [minimum requirements](/docs/desktop/troubleshooting#step-1-verify-hardware-and-system-requirements) **2. Adjust Model Settings** - Open model settings in the chat sidebar @@ -318,9 +318,9 @@ If these solutions don't work: - Include your logs and system info **3. Check Resources:** -- [System requirements](/docs/troubleshooting#step-1-verify-hardware-and-system-requirements) -- [Model compatibility guides](/docs/manage-models) -- [Hardware setup guides](/docs/desktop/) +- [System requirements](/docs/desktop/troubleshooting#step-1-verify-hardware-and-system-requirements) +- [Model compatibility guides](/docs/desktop/manage-models) +- [Hardware setup guides](/docs/desktop/) When sharing logs, remove personal information first. We only keep logs for 24 hours, so report issues promptly. 
diff --git a/docs/src/pages/docs/desktop/settings.mdx b/docs/src/pages/docs/desktop/settings.mdx index def78e867..d910ec875 100644 --- a/docs/src/pages/docs/desktop/settings.mdx +++ b/docs/src/pages/docs/desktop/settings.mdx @@ -68,7 +68,7 @@ Click the gear icon next to a model to configure advanced settings: - **Repeat Penalty**: Controls how strongly the model avoids repeating phrases (higher values reduce repetition) - **Presence Penalty**: Discourages reusing words that already appeared in the text (helps with variety) -_See [Model Parameters](/docs/model-parameters) for a more detailed explanation._ +_See [Model Parameters](/docs/desktop/model-parameters) for a more detailed explanation._ ## Hardware @@ -108,7 +108,7 @@ You can help improve Jan by sharing anonymous usage data: 2. You can change this setting at any time -Read more about that we collect with opt-in users at [Privacy](/docs/privacy). +Read more about what we collect from opt-in users at [Privacy](/docs/desktop/privacy).
diff --git a/docs/src/pages/docs/desktop/troubleshooting.mdx b/docs/src/pages/docs/desktop/troubleshooting.mdx index 0a905e9b4..420cd17b3 100644 --- a/docs/src/pages/docs/desktop/troubleshooting.mdx +++ b/docs/src/pages/docs/desktop/troubleshooting.mdx @@ -328,19 +328,19 @@ This command ensures that the necessary permissions are granted for Jan's instal When you start a chat with a model and encounter a **Failed to Fetch** or **Something's Amiss** error, here are some possible solutions to resolve it: **1. Check System & Hardware Requirements** -- Hardware dependencies: Ensure your device meets all [hardware requirements](docs/troubleshooting#step-1-verify-hardware-and-system-requirements) -- OS: Ensure your operating system meets the minimum requirements ([Mac](/docs/desktop/mac#minimum-requirements), [Windows](/docs/desktop/windows#compatibility), [Linux](docs/desktop/linux#compatibility)) +- Hardware dependencies: Ensure your device meets all [hardware requirements](/docs/desktop/troubleshooting#step-1-verify-hardware-and-system-requirements) +- OS: Ensure your operating system meets the minimum requirements ([Mac](/docs/desktop/install/mac#minimum-requirements), [Windows](/docs/desktop/install/windows#compatibility), [Linux](/docs/desktop/install/linux#compatibility)) - RAM: Choose models that use less than 80% of your available RAM - For 8GB systems: Use models under 6GB - For 16GB systems: Use models under 13GB **2. Check Model Parameters** -- In **Engine Settings** in right sidebar, check your `ngl` ([number of GPU layers](/docs/models/model-parameters#engine-parameters)) setting to see if it's too high +- In **Engine Settings** in right sidebar, check your `ngl` ([number of GPU layers](/docs/desktop/model-parameters#engine-parameters)) setting to see if it's too high - Start with a lower NGL value and increase gradually based on your GPU memory **3. 
Port Conflicts** -If you check your [app logs](/docs/troubleshooting#how-to-get-error-logs) & see "Bind address failed at 127.0.0.1:39291", check port availability: +If you check your [app logs](/docs/desktop/troubleshooting#how-to-get-error-logs) & see "Bind address failed at 127.0.0.1:39291", check port availability: ``` # Mac netstat -an | grep 39291 @@ -371,7 +371,7 @@ This will delete all chat history, models, and settings.
**5. Try a clean installation** -- Uninstall Jan & clean Jan data folders ([Mac](/docs/desktop/mac#uninstall-jan), [Windows](/docs/desktop/windows#uninstall-jan), [Linux](docs/desktop/linux#uninstall-jan)) +- Uninstall Jan & clean Jan data folders ([Mac](/docs/desktop/install/mac#uninstall-jan), [Windows](/docs/desktop/install/windows#uninstall-jan), [Linux](/docs/desktop/install/linux#uninstall-jan)) - Install the latest [stable release](/download) @@ -392,7 +392,7 @@ The "Unexpected token" error usually relates to OpenAI API authentication or reg ## Need Further Support? If you can't find what you need in our troubleshooting guide, feel free reach out to us for extra help: -- **Copy** your [app logs](/docs/troubleshooting#how-to-get-error-logs) +- **Copy** your [app logs](/docs/desktop/troubleshooting#how-to-get-error-logs) - Go to our [Discord](https://discord.com/invite/FTk2MvZwJH) & send it to **#🆘|jan-help** channel for further support. diff --git a/docs/src/pages/post/benchmarking-nvidia-tensorrt-llm.mdx b/docs/src/pages/post/benchmarking-nvidia-tensorrt-llm.mdx index 4d0df7cc5..0d4bc9aa2 100644 --- a/docs/src/pages/post/benchmarking-nvidia-tensorrt-llm.mdx +++ b/docs/src/pages/post/benchmarking-nvidia-tensorrt-llm.mdx @@ -17,7 +17,7 @@ Jan now supports [NVIDIA TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) i We've been excited for TensorRT-LLM for a while, and [had a lot of fun implementing it](https://github.com/menloresearch/nitro-tensorrt-llm). As part of the process, we've run some benchmarks, to see how TensorRT-LLM fares on consumer hardware (e.g. [4090s](https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/), [3090s](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/)) we commonly see in the [Jan's hardware community](https://discord.com/channels/1107178041848909847/1201834752206974996). 
- **Give it a try!** Jan's [TensorRT-LLM extension](/docs/built-in/tensorrt-llm) is available in Jan v0.4.9 and up ([see more](/docs/built-in/tensorrt-llm)). We precompiled some TensorRT-LLM models for you to try: `Mistral 7b`, `TinyLlama-1.1b`, `TinyJensen-1.1b` 😂 + **Give it a try!** Jan's [TensorRT-LLM extension](/docs/desktop/built-in/tensorrt-llm) is available in Jan v0.4.9 and up ([see more](/docs/desktop/built-in/tensorrt-llm)). We precompiled some TensorRT-LLM models for you to try: `Mistral 7b`, `TinyLlama-1.1b`, `TinyJensen-1.1b` 😂 Bugs or feedback? Let us know on [GitHub](https://github.com/menloresearch/jan) or via [Discord](https://discord.com/channels/1107178041848909847/1201832734704795688). diff --git a/docs/src/pages/post/deepresearch.mdx b/docs/src/pages/post/deepresearch.mdx index 62e584082..11edd4f04 100644 --- a/docs/src/pages/post/deepresearch.mdx +++ b/docs/src/pages/post/deepresearch.mdx @@ -126,7 +126,7 @@ any version with Model Context Protocol in it (>`v0.6.3`). **The Key: Assistants + Tools** Running deep research in Jan can be accomplished by combining [custom assistants](https://jan.ai/docs/assistants) -with [MCP search tools](https://jan.ai/docs/mcp-examples/search/exa). This pairing allows any model—local or +with [MCP search tools](https://jan.ai/docs/desktop/mcp-examples/search/exa). This pairing allows any model—local or cloud—to follow a systematic research workflow, to create a report similar to that of other providers, with some visible limitations (for now).
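A bulk link rewrite like this one is easy to botch with a repeated search-and-replace, which is how doubled route segments such as `/docs/desktop/desktop/…` creep in. A quick audit can catch any that remain — a sketch assuming a POSIX shell; it demos on a throwaway file, but in the repo you would point the scan at `docs/src/pages` from the diff headers:

```shell
# Demo tree standing in for the docs source (hypothetical file name).
tmp=$(mktemp -d)
printf '[ok](/docs/desktop/data-folder)\n[bad](/docs/desktop/desktop/data-folder)\n' > "$tmp/page.mdx"

# Flag every line that still links to a doubled /docs/desktop/desktop/ route.
grep -rn '/docs/desktop/desktop/' "$tmp"

# To normalize in place (GNU sed), one could run:
#   sed -i 's#/docs/desktop/desktop/#/docs/desktop/#g' page.mdx

rm -rf "$tmp"
```

Running the same `grep` after the `sed` pass should return nothing, which makes a convenient CI check for future link migrations.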