From 7f002409e3b4c45ff7339d694d163681b101097f Mon Sep 17 00:00:00 2001
From: eckartal
Date: Mon, 29 Sep 2025 14:15:50 +0800
Subject: [PATCH] Update content files

- Update tabby server example
- Update troubleshooting documentation
- Update NVIDIA TensorRT-LLM benchmarking post
---
 docs/src/pages/docs/desktop/server-examples/tabby.mdx    | 2 +-
 docs/src/pages/docs/desktop/troubleshooting.mdx          | 2 +-
 docs/src/pages/post/benchmarking-nvidia-tensorrt-llm.mdx | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/src/pages/docs/desktop/server-examples/tabby.mdx b/docs/src/pages/docs/desktop/server-examples/tabby.mdx
index 917f40550..f25c89dab 100644
--- a/docs/src/pages/docs/desktop/server-examples/tabby.mdx
+++ b/docs/src/pages/docs/desktop/server-examples/tabby.mdx
@@ -90,7 +90,7 @@ Refer to the following documentation to install the Tabby extension on your favo
 
 Tabby offers an [Answer Engine](https://tabby.tabbyml.com/docs/administration/answer-engine/) on the homepage, which can leverage the Jan LLM and related contexts like code, documentation, and web pages to answer user questions.
 
-Simply open the Tabby homepage at [localhost:8080](http://localhost:8080) and ask your questions.
+Simply open the Tabby homepage at http://localhost:8080 and ask your questions.
 
 ### IDE Chat Sidebar
 
diff --git a/docs/src/pages/docs/desktop/troubleshooting.mdx b/docs/src/pages/docs/desktop/troubleshooting.mdx
index c2b84c03a..6d6c02703 100644
--- a/docs/src/pages/docs/desktop/troubleshooting.mdx
+++ b/docs/src/pages/docs/desktop/troubleshooting.mdx
@@ -329,7 +329,7 @@ When you start a chat with a model and encounter a **Failed to Fetch** or **Some
 
 **1. Check System & Hardware Requirements**
 - Hardware dependencies: Ensure your device meets all [hardware requirements](troubleshooting)
-- OS: Ensure your operating system meets the minimum requirements ([Mac](https://www.jan.ai/docs/desktop/install/mac#minimum-requirements), [Windows](/windows#compatibility), [Linux](docs/desktop/linux#compatibility))
+- OS: Ensure your operating system meets the minimum requirements ([Mac](https://www.jan.ai/docs/desktop/install/mac#minimum-requirements), [Windows](/windows#compatibility), [Linux](https://www.jan.ai/docs/desktop/install/linux#compatibility))
 - RAM: Choose models that use less than 80% of your available RAM
   - For 8GB systems: Use models under 6GB
   - For 16GB systems: Use models under 13GB
diff --git a/docs/src/pages/post/benchmarking-nvidia-tensorrt-llm.mdx b/docs/src/pages/post/benchmarking-nvidia-tensorrt-llm.mdx
index 3f5d376cb..9fa67ea07 100644
--- a/docs/src/pages/post/benchmarking-nvidia-tensorrt-llm.mdx
+++ b/docs/src/pages/post/benchmarking-nvidia-tensorrt-llm.mdx
@@ -17,7 +17,7 @@ Jan now supports [NVIDIA TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) i
 
 We've been excited for TensorRT-LLM for a while, and [had a lot of fun implementing it](https://github.com/menloresearch/nitro-tensorrt-llm). As part of the process, we've run some benchmarks, to see how TensorRT-LLM fares on consumer hardware (e.g. [4090s](https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/), [3090s](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/)) we commonly see in the [Jan's hardware community](https://discord.com/channels/1107178041848909847/1201834752206974996).
 
- **Give it a try!** Jan's TensorRT-LLM extension is available in Jan v0.4.9 and up ([see more](/docs/built-in/tensorrt-llm)). We precompiled some TensorRT-LLM models for you to try: `Mistral 7b`, `TinyLlama-1.1b`, `TinyJensen-1.1b` 😂
+ **Give it a try!** Jan's TensorRT-LLM extension is available in Jan v0.4.9. We precompiled some TensorRT-LLM models for you to try: `Mistral 7b`, `TinyLlama-1.1b`, `TinyJensen-1.1b` 😂
 
 Bugs or feedback? Let us know on [GitHub](https://github.com/menloresearch/jan) or via [Discord](https://discord.com/channels/1107178041848909847/1201832734704795688).
 