fix: "GPU Acceleration" section missing after enabling Experimental Mode since version 0.5.15

mimic 2025-04-07 16:09:48 +00:00 committed by Emeric MARTINEAU
parent 21984afa1b
commit 6b89e5cc48
6 changed files with 19 additions and 54 deletions

Binary image file changed (after: 36 KiB).


@ -239,9 +239,8 @@ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
 See [detailed instructions](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions).
 ### Step 2: Enable GPU Acceleration
-1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**
-2. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
-3. App reload is required after the selection
+1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a backend, e.g. `windows-amd64-vulkan` if you have an AMD graphics card. For more info, see [our guide](/docs/local-engines/llama-cpp).
 <Callout type="info">
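The `LD_LIBRARY_PATH` export shown in the hunk header above can be sanity-checked from the same shell. A minimal sketch, assuming the guide's example install location `/usr/local/cuda/lib64` (adjust for your CUDA version and path):

```shell
# Append the CUDA library dir (the guide's example path; adjust to your
# install), then confirm it is actually present on LD_LIBRARY_PATH.
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64"
export LD_LIBRARY_PATH
case ":$LD_LIBRARY_PATH:" in
  *":/usr/local/cuda/lib64:"*) echo "CUDA lib dir is on LD_LIBRARY_PATH" ;;
  *) echo "CUDA lib dir missing from LD_LIBRARY_PATH" ;;
esac
```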
@ -255,30 +254,16 @@ While **Vulkan** can enable Nvidia GPU acceleration in the Jan app, **CUDA** is
 <Tabs.Tab>
 AMD GPUs require **Vulkan** support.
-<Callout type="warning">
-This feature is currently experimental and must be enabled through [Experimental Mode](/docs/settings#experimental-mode) in **Advanced Settings**.
-</Callout>
-1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**
-2. Enable **Experimental Mode**
-3. Under **GPU Acceleration**, enable **Vulkan Support**
-4. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
-5. App reload is required after the selection
+1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a backend, e.g. `windows-amd64-vulkan` if you have an AMD graphics card. For more info, see [our guide](/docs/local-engines/llama-cpp).
 </Tabs.Tab>
 <Tabs.Tab>
 Intel Arc GPUs require **Vulkan** support.
-<Callout type="warning">
-This feature is currently experimental and must be enabled through [Experimental Mode](/docs/settings#experimental-mode) in **Advanced Settings**.
-</Callout>
-1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**
-2. Enable **Experimental Mode**
-3. Under **GPU Acceleration**, enable **Vulkan Support**
-4. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
-5. App reload is required after the selection
+1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a backend, e.g. `windows-amd64-vulkan` for a Vulkan build on Windows. For more info, see [our guide](/docs/local-engines/llama-cpp).
 </Tabs.Tab>
 </Tabs>
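The new steps above identify backends by name. Judging from the one example the docs give (`windows-amd64-vulkan`), the names appear to follow an `os-arch-accelerator` pattern; the helper below is a hypothetical sketch of that mapping, not part of the Jan app, so verify the names against the actual dropdown:

```shell
# Hypothetical mapping from OS/arch/GPU vendor to a llama.cpp backend name.
# The naming scheme is inferred from the single documented example
# (windows-amd64-vulkan); check it against the dropdown in Jan's settings.
pick_backend() {
  os="$1"; arch="$2"; gpu="$3"
  case "$gpu" in
    nvidia)    accel="cuda" ;;    # CUDA is the recommended path for NVIDIA
    amd|intel) accel="vulkan" ;;  # AMD and Intel Arc go through Vulkan
    *)         accel="cpu" ;;     # hypothetical CPU fallback name
  esac
  echo "${os}-${arch}-${accel}"
}

pick_backend windows amd64 amd   # prints: windows-amd64-vulkan
```

As the tabs note, AMD and Intel Arc GPUs require Vulkan, while CUDA remains the recommended accelerator for NVIDIA cards.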


@ -160,13 +160,8 @@ Expected output should show your GPU model and driver version.
 nvcc --version
 ```
 ### Step 2: Enable GPU Acceleration
-1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**
-2. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
-3. App reload is required after the selection
-<Callout type="info">
-While Jan supports both CUDA and Vulkan for NVIDIA GPUs, we strongly recommend using CUDA for optimal performance.
-</Callout>
+1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a backend, e.g. `windows-amd64-vulkan` if you have an AMD graphics card. For more info, see [our guide](/docs/local-engines/llama-cpp).
 </Steps>
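If you script the `nvcc --version` check above, the release number can be extracted with `sed`. The sample line below is illustrative (a typical CUDA 12.4 toolkit prints this form); on a real machine, pipe the actual `nvcc --version` output in instead:

```shell
# On a real machine: version=$(nvcc --version | sed -n '...')
# A sample line is parsed here so the snippet runs without CUDA installed.
sample='Cuda compilation tools, release 12.4, V12.4.131'
version=$(printf '%s\n' "$sample" | sed -n 's/.*release \([0-9.]*\),.*/\1/p')
echo "CUDA release: $version"   # prints: CUDA release: 12.4
```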
@ -175,32 +170,17 @@ While Jan supports both CUDA and Vulkan for NVIDIA GPUs, we strongly recommend u
 <Tabs.Tab>
 AMD GPUs require **Vulkan** support.
-<Callout type="warning">
-This feature is currently experimental and must be enabled through [Experimental Mode](/docs/settings#experimental-mode) in **Advanced Settings**.
-</Callout>
-1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**
-2. Enable **Experimental Mode**
-3. Under **GPU Acceleration**, enable **Vulkan Support**
-4. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
-5. App reload is required after the selection
+1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a backend, e.g. `windows-amd64-vulkan` if you have an AMD graphics card. For more info, see [our guide](/docs/local-engines/llama-cpp).
 </Tabs.Tab>
 <Tabs.Tab>
 Intel Arc GPUs require **Vulkan** support.
-<Callout type="warning">
-This feature is currently experimental and must be enabled through [Experimental Mode](/docs/settings#experimental-mode) in **Advanced Settings**.
-</Callout>
-1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**
-2. Enable **Experimental Mode**
-3. Under **GPU Acceleration**, enable **Vulkan Support**
-4. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
-5. App reload is required after the selection
+1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a backend, e.g. `windows-amd64-vulkan` for a Vulkan build on Windows. For more info, see [our guide](/docs/local-engines/llama-cpp).
 </Tabs.Tab>
 </Tabs>
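Before selecting a `-vulkan` backend on Linux, you can check whether any Vulkan driver is registered at all. This heuristic relies on the Vulkan loader's ICD manifest directories; the two paths below are common defaults and may vary by distribution:

```shell
# Look for Vulkan ICD manifests (driver JSON files) in the usual loader
# search paths. Their presence suggests a Vulkan driver is installed.
found=no
for d in /usr/share/vulkan/icd.d /etc/vulkan/icd.d; do
  if [ -d "$d" ] && [ -n "$(ls "$d"/*.json 2>/dev/null)" ]; then
    found=yes
  fi
done
echo "Vulkan ICD manifests found: $found"
```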


@ -60,16 +60,15 @@ For more model installation methods, please visit [Model Management](/docs/model
 ### Step 3: Turn on GPU Acceleration (Optional)
 While the model downloads, let's optimize your hardware setup. If you're on **Windows** or **Linux** and have a compatible graphics card, you can significantly boost model performance by enabling GPU acceleration.
-1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**
-2. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
-3. App reload is required after the selection
+1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a backend, e.g. `windows-amd64-vulkan` if you have an AMD graphics card. For more info, see [our guide](/docs/local-engines/llama-cpp).
 <Callout type="info">
 Ensure you have installed all required dependencies and drivers before enabling GPU acceleration. See **GPU Setup Guide** on [Windows](/docs/desktop/windows#gpu-acceleration) & [Linux](/docs/desktop/linux#gpu-acceleration) for detailed instructions.
 </Callout>
 <br/>
-![Turn on GPU acceleration](./_assets/trouble-shooting-01.png)
+![Turn on GPU acceleration](./_assets/trouble-shooting-03.png)
 ### Step 4: Customize Assistant Instructions
 Once your model has been downloaded and you're ready to start your first conversation, you can customize how it responds by setting specific instructions:


@ -163,11 +163,11 @@ Ensure you have installed all required dependencies and drivers before enabling
 </Callout>
 Turn on GPU acceleration to improve performance:
-1. Select and **enable** your prefered GPU(s)
-2. App reload is required after the selection
+1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a backend, e.g. `windows-amd64-vulkan` if you have an AMD graphics card. For more info, see [our guide](/docs/local-engines/llama-cpp).
 <br/>
-![Hardware](./_assets/trouble-shooting-01.png)
+![Hardware](./_assets/trouble-shooting-03.png)
 <br/>
 **GPU Performance Optimization**


@ -206,6 +206,7 @@ To verify GPU acceleration is turned on:
 <br/>
 ![Hardware](./_assets/trouble-shooting-01.png)
+![Hardware](./_assets/trouble-shooting-03.png)
 <br/>