diff --git a/docs/src/pages/docs/desktop/_meta.json b/docs/src/pages/docs/desktop/_meta.json
index 36c70cf27..1745297cb 100644
--- a/docs/src/pages/docs/desktop/_meta.json
+++ b/docs/src/pages/docs/desktop/_meta.json
@@ -42,6 +42,5 @@
   },
   "settings": "Settings",
   "data-folder": "Jan Data Folder",
-  "troubleshooting": "Troubleshooting",
-  "privacy": "Privacy"
+  "troubleshooting": "Troubleshooting"
 }
diff --git a/docs/src/pages/docs/desktop/install/linux.mdx b/docs/src/pages/docs/desktop/install/linux.mdx
index 2d42a59f1..442c005db 100644
--- a/docs/src/pages/docs/desktop/install/linux.mdx
+++ b/docs/src/pages/docs/desktop/install/linux.mdx
@@ -244,7 +244,7 @@ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64

 ### Step 2: Enable GPU Acceleration

 1. Navigate to **Settings** () > **Local Engine** > **Llama.cpp**
-2. Select appropriate backend in **llama-cpp Backend**. Details in our [guide](/docs/desktop/local-engines/llama-cpp).
+2. Select the appropriate backend in **llama-cpp Backend**. Details in our [llama.cpp guide](/docs/desktop/llama-cpp).

 CUDA offers better performance than Vulkan.
diff --git a/docs/src/pages/docs/desktop/jan-models/jan-nano-32.mdx b/docs/src/pages/docs/desktop/jan-models/jan-nano-32.mdx
index b216f3b96..5f1446e42 100644
--- a/docs/src/pages/docs/desktop/jan-models/jan-nano-32.mdx
+++ b/docs/src/pages/docs/desktop/jan-models/jan-nano-32.mdx
@@ -59,7 +59,7 @@ The model and its different model variants are fully supported by Jan.
 ## Using Jan-Nano-32k

 **Step 1**
-Download Jan from [here](https://jan.ai/docs/desktop/).
+Download Jan from [here](https://jan.ai/download/).

 **Step 2**
 Go to the Hub Tab, search for Jan-Nano-Gguf, and click on the download button to the best model size for your system.
@@ -118,8 +118,8 @@ Here are some example queries to showcase Jan-Nano's web search capabilities:
   - 4xA6000 for vllm server (inferencing)

 - What frontend should I use?
-  - Jan Beta (recommended) - Minimalistic and polished interface
-  - Download link: https://jan.ai/docs/desktop/beta
+  - Jan (recommended)
+  - Download link: https://jan.ai/download

 - Getting Jinja errors in LM Studio?
   - Use Qwen3 template from other LM Studio compatible models
diff --git a/docs/src/pages/docs/desktop/settings.mdx b/docs/src/pages/docs/desktop/settings.mdx
index 6bc750f43..cd4d01ede 100644
--- a/docs/src/pages/docs/desktop/settings.mdx
+++ b/docs/src/pages/docs/desktop/settings.mdx
@@ -108,7 +108,7 @@ You can help improve Jan by sharing anonymous usage data:

 2. You can change this setting at any time

-Read more about that we collect with opt-in users at [Privacy](/docs/desktop/privacy).
+Read more about what we collect from opt-in users at [Privacy](/privacy).
@@ -141,7 +141,7 @@ This action cannot be undone.

 ### Jan Data Folder

-Jan stores your data locally in your own filesystem in a universal file format. See detailed [Jan Folder Structure](docs/data-folder#folder-structure).
+Jan stores your data locally on your own filesystem in a universal file format. See the detailed [Jan Folder Structure](/docs/desktop/data-folder#directory-structure).

 **1. Open Jan Data Folder**
diff --git a/docs/src/pages/docs/desktop/troubleshooting.mdx b/docs/src/pages/docs/desktop/troubleshooting.mdx
index 16bbdfa9a..c2b84c03a 100644
--- a/docs/src/pages/docs/desktop/troubleshooting.mdx
+++ b/docs/src/pages/docs/desktop/troubleshooting.mdx
@@ -328,14 +328,14 @@ This command ensures that the necessary permissions are granted for Jan's instal
 When you start a chat with a model and encounter a **Failed to Fetch** or **Something's Amiss** error, here are some possible solutions to resolve it:

 **1. Check System & Hardware Requirements**
-- Hardware dependencies: Ensure your device meets all [hardware requirements](docs/desktop/troubleshooting#step-1-verify-hardware-and-system-requirements)
-- OS: Ensure your operating system meets the minimum requirements ([Mac](/docs/desktop/install/mac#minimum-requirements), [Windows](/docs/desktop/install/windows#compatibility), [Linux](/docs/desktop/install/linux#compatibility))
+- Hardware dependencies: Ensure your device meets all [hardware requirements](#step-1-verify-hardware-and-system-requirements)
+- OS: Ensure your operating system meets the minimum requirements ([Mac](https://www.jan.ai/docs/desktop/install/mac#minimum-requirements), [Windows](https://www.jan.ai/docs/desktop/install/windows#compatibility), [Linux](https://www.jan.ai/docs/desktop/install/linux#compatibility))
 - RAM: Choose models that use less than 80% of your available RAM
   - For 8GB systems: Use models under 6GB
   - For 16GB systems: Use models under 13GB

 **2. Check Model Parameters**
-- In **Engine Settings** in right sidebar, check your `ngl` ([number of GPU layers](/docs/desktop/models/model-parameters#engine-parameters)) setting to see if it's too high
+- In **Engine Settings** in the right sidebar, check your `ngl` ([number of GPU layers](/docs/desktop/model-parameters)) setting to see if it's too high
 - Start with a lower NGL value and increase gradually based on your GPU memory

 **3. Port Conflicts**
diff --git a/docs/src/pages/post/benchmarking-nvidia-tensorrt-llm.mdx b/docs/src/pages/post/benchmarking-nvidia-tensorrt-llm.mdx
index 0d4bc9aa2..3f5d376cb 100644
--- a/docs/src/pages/post/benchmarking-nvidia-tensorrt-llm.mdx
+++ b/docs/src/pages/post/benchmarking-nvidia-tensorrt-llm.mdx
@@ -17,7 +17,7 @@ Jan now supports [NVIDIA TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) i

 We've been excited for TensorRT-LLM for a while, and [had a lot of fun implementing it](https://github.com/menloresearch/nitro-tensorrt-llm). As part of the process, we've run some benchmarks, to see how TensorRT-LLM fares on consumer hardware (e.g. [4090s](https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/), [3090s](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/)) we commonly see in the [Jan's hardware community](https://discord.com/channels/1107178041848909847/1201834752206974996).

- **Give it a try!** Jan's [TensorRT-LLM extension](/docs/desktop/built-in/tensorrt-llm) is available in Jan v0.4.9 and up ([see more](/docs/desktop/built-in/tensorrt-llm)). We precompiled some TensorRT-LLM models for you to try: `Mistral 7b`, `TinyLlama-1.1b`, `TinyJensen-1.1b` 😂
+ **Give it a try!** Jan's TensorRT-LLM extension is available in Jan v0.4.9 and up ([see more](/docs/built-in/tensorrt-llm)). We precompiled some TensorRT-LLM models for you to try: `Mistral 7b`, `TinyLlama-1.1b`, `TinyJensen-1.1b` 😂

 Bugs or feedback? Let us know on [GitHub](https://github.com/menloresearch/jan) or via [Discord](https://discord.com/channels/1107178041848909847/1201832734704795688).
diff --git a/docs/src/pages/post/deepresearch.mdx b/docs/src/pages/post/deepresearch.mdx
index 11edd4f04..f3f1c0ee7 100644
--- a/docs/src/pages/post/deepresearch.mdx
+++ b/docs/src/pages/post/deepresearch.mdx
@@ -125,8 +125,8 @@ any version with Model Context Protocol in it (>`v0.6.3`).

 **The Key: Assistants + Tools**

-Running deep research in Jan can be accomplished by combining [custom assistants](https://jan.ai/docs/assistants)
-with [MCP search tools](https://jan.ai/docs/desktop/mcp-examples/search/exa). This pairing allows any model—local or
+Running deep research in Jan can be accomplished by combining [custom assistants](https://jan.ai/docs/desktop/assistants)
+with [MCP search tools](https://jan.ai/docs/mcp-examples/search/exa). This pairing allows any model—local or
 cloud—to follow a systematic research workflow, to create a report similar to that of other providers, with some
 visible limitations (for now).