docs: fix broken internal links and remove privacy page
- Fix broken links in troubleshooting.mdx pointing to install pages
- Remove privacy.mdx page and update _meta.json navigation
- Update various documentation links for consistency
- Ensure all internal links use proper absolute paths
parent da38384be2
commit ae171574e8
```diff
@@ -42,6 +42,5 @@
   },
   "settings": "Settings",
   "data-folder": "Jan Data Folder",
-  "troubleshooting": "Troubleshooting",
-  "privacy": "Privacy"
+  "troubleshooting": "Troubleshooting"
 }
```
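After this change, the relevant navigation fragment of `_meta.json` (reconstructed from the hunk above; surrounding keys elided) reads:

```json
{
  "settings": "Settings",
  "data-folder": "Jan Data Folder",
  "troubleshooting": "Troubleshooting"
}
```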
```diff
@@ -244,7 +244,7 @@ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
 ### Step 2: Enable GPU Acceleration
 
 1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Local Engine** > **Llama.cpp**
-2. Select appropriate backend in **llama-cpp Backend**. Details in our [guide](/docs/desktop/local-engines/llama-cpp).
+2. Select appropriate backend in **llama-cpp Backend**. Details in our [llama.cpp guide](/docs/desktop/llama-cpp).
 
 <Callout type="info">
 CUDA offers better performance than Vulkan.
```
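The hunk header above carries the `LD_LIBRARY_PATH` export from the surrounding guide. A minimal sketch of how that setup is typically verified before enabling GPU acceleration, assuming a standard `/usr/local/cuda` install:

```bash
# Append the CUDA libraries to the loader path, as the doc above does
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64

# Confirm the driver and runtime are actually resolvable
nvidia-smi                    # should list your GPU and driver version
ldconfig -p | grep libcudart  # should print the CUDA runtime library path
```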
```diff
@@ -59,7 +59,7 @@ The model and its different model variants are fully supported by Jan.
 ## Using Jan-Nano-32k
 
 **Step 1**
-Download Jan from [here](https://jan.ai/docs/desktop/).
+Download Jan from [here](https://jan.ai/download/).
 
 **Step 2**
 Go to the Hub Tab, search for Jan-Nano-Gguf, and click on the download button to the best model size for your system.
```
```diff
@@ -118,8 +118,8 @@ Here are some example queries to showcase Jan-Nano's web search capabilities:
 - 4xA6000 for vllm server (inferencing)
 
 - What frontend should I use?
-  - Jan Beta (recommended) - Minimalistic and polished interface
-  - Download link: https://jan.ai/docs/desktop/beta
+  - Jan (recommended)
+  - Download link: https://jan.ai/download
 
 - Getting Jinja errors in LM Studio?
   - Use Qwen3 template from other LM Studio compatible models
```
```diff
@@ -108,7 +108,7 @@ You can help improve Jan by sharing anonymous usage data:
 2. You can change this setting at any time
 
 <Callout type="info">
-Read more about that we collect with opt-in users at [Privacy](/docs/desktop/privacy).
+Read more about what we collect from opt-in users at [Privacy](/privacy).
 </Callout>
 
 <br/>
```
```diff
@@ -141,7 +141,7 @@ This action cannot be undone.
 
 
 ### Jan Data Folder
-Jan stores your data locally in your own filesystem in a universal file format. See detailed [Jan Folder Structure](docs/data-folder#folder-structure).
+Jan stores your data locally in your own filesystem in a universal file format. See detailed [Jan Folder Structure](/docs/desktop/data-folder#directory-structure).
 
 **1. Open Jan Data Folder**
 
```
```diff
@@ -328,14 +328,14 @@ This command ensures that the necessary permissions are granted for Jan's instal
 When you start a chat with a model and encounter a **Failed to Fetch** or **Something's Amiss** error, here are some possible solutions to resolve it:
 
 **1. Check System & Hardware Requirements**
-- Hardware dependencies: Ensure your device meets all [hardware requirements](troubleshooting)
-- OS: Ensure your operating system meets the minimum requirements ([Mac](https://www.jan.ai/docs/desktop/install/mac#minimum-requirements), [Windows](/windows#compatibility), [Linux](docs/desktop/linux#compatibility))
+- Hardware dependencies: Ensure your device meets all [hardware requirements](/docs/desktop/troubleshooting#step-1-verify-hardware-and-system-requirements)
+- OS: Ensure your operating system meets the minimum requirements ([Mac](/docs/desktop/install/mac#minimum-requirements), [Windows](/docs/desktop/install/windows#compatibility), [Linux](/docs/desktop/install/linux#compatibility))
 - RAM: Choose models that use less than 80% of your available RAM
   - For 8GB systems: Use models under 6GB
   - For 16GB systems: Use models under 13GB
 
 **2. Check Model Parameters**
-- In **Engine Settings** in right sidebar, check your `ngl` ([number of GPU layers](/docs/desktop/models/model-parameters#engine-parameters)) setting to see if it's too high
+- In **Engine Settings** in right sidebar, check your `ngl` ([number of GPU layers](/docs/desktop/model-parameters)) setting to see if it's too high
 - Start with a lower NGL value and increase gradually based on your GPU memory
 
 **3. Port Conflicts**
```
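The hunk ends at the **Port Conflicts** step. A quick sketch of the usual check, assuming a default local API server port (1337 is an assumption here; substitute whatever port Jan is configured to use):

```bash
# Find any process already bound to the local server port
lsof -i :1337                    # macOS / Linux
# netstat -ano | findstr :1337   # Windows equivalent
```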
```diff
@@ -17,7 +17,7 @@ Jan now supports [NVIDIA TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) i
 We've been excited for TensorRT-LLM for a while, and [had a lot of fun implementing it](https://github.com/menloresearch/nitro-tensorrt-llm). As part of the process, we've run some benchmarks, to see how TensorRT-LLM fares on consumer hardware (e.g. [4090s](https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/), [3090s](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/)) we commonly see in the [Jan's hardware community](https://discord.com/channels/1107178041848909847/1201834752206974996).
 
 <Callout type="info" >
-**Give it a try!** Jan's [TensorRT-LLM extension](/docs/desktop/built-in/tensorrt-llm) is available in Jan v0.4.9 and up ([see more](/docs/desktop/built-in/tensorrt-llm)). We precompiled some TensorRT-LLM models for you to try: `Mistral 7b`, `TinyLlama-1.1b`, `TinyJensen-1.1b` 😂
+**Give it a try!** Jan's TensorRT-LLM extension is available in Jan v0.4.9 and up ([see more](/docs/built-in/tensorrt-llm)). We precompiled some TensorRT-LLM models for you to try: `Mistral 7b`, `TinyLlama-1.1b`, `TinyJensen-1.1b` 😂
 
 Bugs or feedback? Let us know on [GitHub](https://github.com/menloresearch/jan) or via [Discord](https://discord.com/channels/1107178041848909847/1201832734704795688).
 </Callout>
```
```diff
@@ -125,8 +125,8 @@ any version with Model Context Protocol in it (>`v0.6.3`).
 
 **The Key: Assistants + Tools**
 
-Running deep research in Jan can be accomplished by combining [custom assistants](https://jan.ai/docs/assistants)
-with [MCP search tools](https://jan.ai/docs/desktop/mcp-examples/search/exa). This pairing allows any model—local or
+Running deep research in Jan can be accomplished by combining [custom assistants](https://jan.ai/docs/desktop/assistants)
+with [MCP search tools](https://jan.ai/docs/mcp-examples/search/exa). This pairing allows any model—local or
 cloud—to follow a systematic research workflow, to create a report similar to that of other providers, with some
 visible limitations (for now).
```
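The Exa pairing referenced in this hunk is typically wired up through an MCP server entry. The sketch below is modeled on common MCP client configs, not confirmed Jan syntax; the `exa-mcp-server` package name and `EXA_API_KEY` variable are assumptions for illustration:

```json
{
  "mcpServers": {
    "exa": {
      "command": "npx",
      "args": ["-y", "exa-mcp-server"],
      "env": { "EXA_API_KEY": "<your-key>" }
    }
  }
}
```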