diff --git a/docs/src/pages/docs/_assets/trouble-shooting-03.png b/docs/src/pages/docs/_assets/trouble-shooting-03.png
new file mode 100644
index 000000000..d07ed56d7
Binary files /dev/null and b/docs/src/pages/docs/_assets/trouble-shooting-03.png differ
diff --git a/docs/src/pages/docs/desktop/linux.mdx b/docs/src/pages/docs/desktop/linux.mdx
index 188ab100b..69e9623e2 100644
--- a/docs/src/pages/docs/desktop/linux.mdx
+++ b/docs/src/pages/docs/desktop/linux.mdx
@@ -239,9 +239,8 @@ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
 See [detailed instructions](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions).
 
 ### Step 2: Enable GPU Acceleration
-1. Navigate to **Settings** () > **Advanced Settings**
-2. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
-3. App reload is required after the selection
+1. Navigate to **Settings** () > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a backend. For NVIDIA GPUs, a CUDA backend is recommended. For more info, see [our guide](/docs/local-engines/llama-cpp).
@@ -255,30 +254,16 @@ While **Vulkan** can enable Nvidia GPU acceleration in the Jan app, **CUDA** is
 AMD GPUs require **Vulkan** support.
 
-
- This feature is currently experimental and must be enabled through [Experimental Mode](/docs/settings#experimental-mode) in **Advanced Settings**.
-
-
-1. Navigate to **Settings** () > **Advanced Settings**
-2. Enable **Experimental Mode**
-3. Under **GPU Acceleration**, enable **Vulkan Support**
-4. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
-5. App reload is required after the selection
+1. Navigate to **Settings** () > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a Vulkan backend, for example `linux-amd64-vulkan`. For more info, see [our guide](/docs/local-engines/llama-cpp).
 
 Intel Arc GPUs require **Vulkan** support.
 
-
- This feature is currently experimental and must be enabled through [Experimental Mode](/docs/settings#experimental-mode) in **Advanced Settings**.
-
-
-1. Navigate to **Settings** () > **Advanced Settings**
-2. Enable **Experimental Mode**
-3. Under **GPU Acceleration**, enable **Vulkan Support**
-4. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
-5. App reload is required after the selection
+1. Navigate to **Settings** () > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a Vulkan backend, for example `linux-amd64-vulkan`. For more info, see [our guide](/docs/local-engines/llama-cpp).
diff --git a/docs/src/pages/docs/desktop/windows.mdx b/docs/src/pages/docs/desktop/windows.mdx
index 9fbe5dc82..14edab9fc 100644
--- a/docs/src/pages/docs/desktop/windows.mdx
+++ b/docs/src/pages/docs/desktop/windows.mdx
@@ -160,13 +160,8 @@ Expected output should show your GPU model and driver version.
 nvcc --version
 ```
 ### Step 2: Enable GPU Acceleration
-1. Navigate to **Settings** () > **Advanced Settings**
-2. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
-3. App reload is required after the selection
-
-
-While Jan supports both CUDA and Vulkan for NVIDIA GPUs, we strongly recommend using CUDA for optimal performance.
-
+1. Navigate to **Settings** () > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a backend. For NVIDIA GPUs, a CUDA backend is recommended. For more info, see [our guide](/docs/local-engines/llama-cpp).
@@ -175,32 +170,17 @@ While Jan supports both CUDA and Vulkan for NVIDIA GPUs, we strongly recommend u
 AMD GPUs require **Vulkan** support.
 
-
- This feature is currently experimental and must be enabled through [Experimental Mode](/docs/settings#experimental-mode) in **Advanced Settings**.
-
-
-1. Navigate to **Settings** () > **Advanced Settings**
-2. Enable **Experimental Mode**
-3. Under **GPU Acceleration**, enable **Vulkan Support**
-4. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
-5. App reload is required after the selection
+1. Navigate to **Settings** () > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a Vulkan backend, for example `windows-amd64-vulkan`. For more info, see [our guide](/docs/local-engines/llama-cpp).
 
 Intel Arc GPUs require **Vulkan** support.
 
-
- This feature is currently experimental and must be enabled through [Experimental Mode](/docs/settings#experimental-mode) in **Advanced Settings**.
-
-
-1. Navigate to **Settings** () > **Advanced Settings**
-2. Enable **Experimental Mode**
-3. Under **GPU Acceleration**, enable **Vulkan Support**
-4. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
-5. App reload is required after the selection
+1. Navigate to **Settings** () > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a Vulkan backend, for example `windows-amd64-vulkan`. For more info, see [our guide](/docs/local-engines/llama-cpp).
-
diff --git a/docs/src/pages/docs/quickstart.mdx b/docs/src/pages/docs/quickstart.mdx
index 62a923b51..c66178136 100644
--- a/docs/src/pages/docs/quickstart.mdx
+++ b/docs/src/pages/docs/quickstart.mdx
@@ -60,16 +60,15 @@ For more model installation methods, please visit [Model Management](/docs/model
 
 ### Step 3: Turn on GPU Acceleration (Optional)
 While the model downloads, let's optimize your hardware setup. If you're on **Windows** or **Linux** and have a compatible graphics card, you can significantly boost model performance by enabling GPU acceleration.
-1. Navigate to **Settings** () > **Advanced Settings**
-2. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
-3. App reload is required after the selection
+1. Navigate to **Settings** () > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a backend. For example, `windows-amd64-vulkan` if you have an AMD graphics card. For more info, see [our guide](/docs/local-engines/llama-cpp).
 
 Ensure you have installed all required dependencies and drivers before enabling GPU acceleration. See **GPU Setup Guide** on [Windows](/docs/desktop/windows#gpu-acceleration) & [Linux](/docs/desktop/linux#gpu-acceleration) for detailed instructions.
-![Turn on GPU acceleration](./_assets/trouble-shooting-01.png)
+![Turn on GPU acceleration](./_assets/trouble-shooting-03.png)
 
 ### Step 4: Customize Assistant Instructions
 Once your model has been downloaded and you're ready to start your first conversation, you can customize how it responds by setting specific instructions:
diff --git a/docs/src/pages/docs/settings.mdx b/docs/src/pages/docs/settings.mdx
index 6d3c7a316..3bfbb4a6e 100644
--- a/docs/src/pages/docs/settings.mdx
+++ b/docs/src/pages/docs/settings.mdx
@@ -163,11 +163,11 @@ Ensure you have installed all required dependencies and drivers before enabling
 Turn on GPU acceleration to improve performance:
-1. Select and **enable** your prefered GPU(s)
-2. App reload is required after the selection
+1. Navigate to **Settings** () > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a backend. For example, `windows-amd64-vulkan` if you have an AMD graphics card. For more info, see [our guide](/docs/local-engines/llama-cpp).
-![Hardware](./_assets/trouble-shooting-01.png)
+![Hardware](./_assets/trouble-shooting-03.png)
 **GPU Performance Optimization**
diff --git a/docs/src/pages/docs/troubleshooting.mdx b/docs/src/pages/docs/troubleshooting.mdx
index 6146ef3c2..ec3e31e50 100644
--- a/docs/src/pages/docs/troubleshooting.mdx
+++ b/docs/src/pages/docs/troubleshooting.mdx
@@ -206,6 +206,7 @@ To verify GPU acceleration is turned on:
 ![Hardware](./_assets/trouble-shooting-01.png)
+![Hardware](./_assets/trouble-shooting-03.png)
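
The screenshots referenced in this patch cover verification inside the app. As a complement, GPU visibility can also be sanity-checked from a terminal before picking a llama-cpp backend. The snippet below is a minimal sketch and not part of the patch itself; it assumes `nvidia-smi` (installed with the NVIDIA driver, as used elsewhere in these docs) and `vulkaninfo` (from the vulkan-tools package, not referenced by these docs) are available on the host.

```bash
# Host-side sanity check before selecting a llama-cpp backend in Jan.
# Assumes nvidia-smi and/or vulkaninfo are installed; both checks are optional.

if command -v nvidia-smi >/dev/null 2>&1; then
  # Shows the NVIDIA driver/CUDA version and per-process GPU memory use;
  # a loaded model should appear here once a CUDA backend is active.
  nvidia-smi
fi

if command -v vulkaninfo >/dev/null 2>&1; then
  # Lists Vulkan-capable devices (AMD, Intel Arc, or NVIDIA via Vulkan).
  vulkaninfo 2>/dev/null | grep -i "deviceName"
fi
```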