fix: "GPU Acceleration" section missing after enabling Experimental Mode since version 0.5.15
This commit is contained in: parent 21984afa1b, commit 6b89e5cc48
BIN  docs/src/pages/docs/_assets/trouble-shooting-03.png  (new file; binary file not shown; size 36 KiB)
@@ -239,9 +239,8 @@ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
 See [detailed instructions](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions).
 
 ### Step 2: Enable GPU Acceleration
-1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**
-2. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
-3. App reload is required after the selection
+1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a backend. For example, choose `windows-amd64-vulkan` if you have an AMD graphics card. For more info, see [our guide](/docs/local-engines/llama-cpp).
 
 
 <Callout type="info">
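The hunk above sits in the CUDA post-install step, which has the reader run `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64`. Appending blindly duplicates the entry every time a profile is re-sourced; a minimal sketch of an idempotent append (the `add_lib_path` helper is hypothetical, not part of the docs):

```shell
# Hypothetical helper: append a directory to LD_LIBRARY_PATH only if it is
# not already listed, so repeated `source`s of a profile stay idempotent.
add_lib_path() {
  dir="$1"
  case ":${LD_LIBRARY_PATH}:" in
    *":${dir}:"*) ;;  # already present: nothing to do
    *) export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$dir" ;;
  esac
}

# /usr/local/cuda/lib64 is the default CUDA library path used in the docs.
add_lib_path /usr/local/cuda/lib64
echo "$LD_LIBRARY_PATH"
```

The `${VAR:+...}` expansion avoids a leading `:` when the variable starts out empty.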
@@ -255,30 +254,16 @@ While **Vulkan** can enable Nvidia GPU acceleration in the Jan app, **CUDA** is
 <Tabs.Tab>
 AMD GPUs require **Vulkan** support.
 
-<Callout type="warning">
-This feature is currently experimental and must be enabled through [Experimental Mode](/docs/settings#experimental-mode) in **Advanced Settings**.
-</Callout>
-
-1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**
-2. Enable **Experimental Mode**
-3. Under **GPU Acceleration**, enable **Vulkan Support**
-4. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
-5. App reload is required after the selection
-
+1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a backend. For example, choose `windows-amd64-vulkan` if you have an AMD graphics card. For more info, see [our guide](/docs/local-engines/llama-cpp).
+
 </Tabs.Tab>
 
 <Tabs.Tab>
 Intel Arc GPUs require **Vulkan** support.
 
-<Callout type="warning">
-This feature is currently experimental and must be enabled through [Experimental Mode](/docs/settings#experimental-mode) in **Advanced Settings**.
-</Callout>
-
-1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**
-2. Enable **Experimental Mode**
-3. Under **GPU Acceleration**, enable **Vulkan Support**
-4. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
-5. App reload is required after the selection
+1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a backend. For example, choose `windows-amd64-vulkan` if you have an AMD graphics card. For more info, see [our guide](/docs/local-engines/llama-cpp).
 </Tabs.Tab>
 
 </Tabs>
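The tabs in this hunk encode a simple selection rule: CUDA for NVIDIA GPUs, Vulkan for AMD and Intel Arc. A sketch of that rule as a lookup (the `pick_api` helper is hypothetical, not part of Jan; real backend names such as `windows-amd64-vulkan` follow an `<os>-<arch>-<api>` pattern):

```shell
# Map a GPU vendor to the recommended llama.cpp backend API, per the docs:
# CUDA for NVIDIA, Vulkan for AMD and Intel Arc, CPU-only otherwise.
pick_api() {
  case "$1" in
    nvidia)    echo cuda ;;
    amd|intel) echo vulkan ;;
    *)         echo cpu ;;   # no supported GPU: fall back to CPU-only
  esac
}

echo "windows-amd64-$(pick_api amd)"
```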
@@ -160,13 +160,8 @@ Expected output should show your GPU model and driver version.
 nvcc --version
 ```
 ### Step 2: Enable GPU Acceleration
-1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**
-2. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
-3. App reload is required after the selection
-
-<Callout type="info">
-While Jan supports both CUDA and Vulkan for NVIDIA GPUs, we strongly recommend using CUDA for optimal performance.
-</Callout>
+1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a backend. For example, choose `windows-amd64-vulkan` if you have an AMD graphics card. For more info, see [our guide](/docs/local-engines/llama-cpp).
 
 </Steps>
 
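This hunk's context is the `nvcc --version` check. A small sketch that pulls the CUDA release number out of that output; the sample line below is hard-coded as an assumption so the snippet runs without CUDA installed — on a real machine, capture `nvcc --version` output instead:

```shell
# Extract the release number from an `nvcc --version` output line.
# `sample` stands in for the real command output (hard-coded assumption).
sample='Cuda compilation tools, release 12.4, V12.4.131'
cuda_release=$(printf '%s\n' "$sample" | sed -n 's/.*release \([0-9.]*\),.*/\1/p')
echo "CUDA release: $cuda_release"
```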
@@ -175,32 +170,17 @@ While Jan supports both CUDA and Vulkan for NVIDIA GPUs, we strongly recommend u
 <Tabs.Tab>
 AMD GPUs require **Vulkan** support.
 
-<Callout type="warning">
-This feature is currently experimental and must be enabled through [Experimental Mode](/docs/settings#experimental-mode) in **Advanced Settings**.
-</Callout>
-
-1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**
-2. Enable **Experimental Mode**
-3. Under **GPU Acceleration**, enable **Vulkan Support**
-4. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
-5. App reload is required after the selection
-
+1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a backend. For example, choose `windows-amd64-vulkan` if you have an AMD graphics card. For more info, see [our guide](/docs/local-engines/llama-cpp).
+
 </Tabs.Tab>
 
 <Tabs.Tab>
 Intel Arc GPUs require **Vulkan** support.
 
-<Callout type="warning">
-This feature is currently experimental and must be enabled through [Experimental Mode](/docs/settings#experimental-mode) in **Advanced Settings**.
-</Callout>
-
-1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**
-2. Enable **Experimental Mode**
-3. Under **GPU Acceleration**, enable **Vulkan Support**
-4. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
-5. App reload is required after the selection
+1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a backend. For example, choose `windows-amd64-vulkan` if you have an AMD graphics card. For more info, see [our guide](/docs/local-engines/llama-cpp).
 </Tabs.Tab>
 
 </Tabs>
 
-
@@ -60,16 +60,15 @@ For more model installation methods, please visit [Model Management](/docs/model
 
 ### Step 3: Turn on GPU Acceleration (Optional)
 While the model downloads, let's optimize your hardware setup. If you're on **Windows** or **Linux** and have a compatible graphics card, you can significantly boost model performance by enabling GPU acceleration.
-1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Advanced Settings**
-2. At **GPU Acceleration**, toggle on and select your preferred GPU(s)
-3. App reload is required after the selection
+1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a backend. For example, choose `windows-amd64-vulkan` if you have an AMD graphics card. For more info, see [our guide](/docs/local-engines/llama-cpp).
 
 <Callout type="info">
 Ensure you have installed all required dependencies and drivers before enabling GPU acceleration. See **GPU Setup Guide** on [Windows](/docs/desktop/windows#gpu-acceleration) & [Linux](/docs/desktop/linux#gpu-acceleration) for detailed instructions.
 </Callout>
 <br/>
 
 ![Turn on GPU acceleration](./_assets/gpu_acce.png)
 
 ### Step 4: Customize Assistant Instructions
 Once your model has been downloaded and you're ready to start your first conversation, you can customize how it responds by setting specific instructions:
@@ -163,11 +163,11 @@ Ensure you have installed all required dependencies and drivers before enabling
 </Callout>
 
 Turn on GPU acceleration to improve performance:
-1. Select and **enable** your prefered GPU(s)
-2. App reload is required after the selection
+1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Local Engine** > **Llama.cpp**
+2. At **llama-cpp Backend**, select a backend. For example, choose `windows-amd64-vulkan` if you have an AMD graphics card. For more info, see [our guide](/docs/local-engines/llama-cpp).
 
 <br/>
 ![Gpu acceleration](./_assets/gpu_accelerate.png)
 <br/>
 
 **GPU Performance Optimization**
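The next hunk's context is "To verify GPU acceleration is turned on". Beyond the in-app screenshots, it can help to confirm the relevant diagnostic tool is even installed: `nvidia-smi` reports NVIDIA driver/GPU state, `vulkaninfo` reports Vulkan support. A sketch (the `have` one-liner is a hypothetical convenience, not a Jan command):

```shell
# Check whether GPU diagnostic tools are on PATH before trying to run them.
have() { command -v "$1" >/dev/null 2>&1; }

for tool in nvidia-smi vulkaninfo; do
  if have "$tool"; then
    echo "$tool: available"
  else
    echo "$tool: missing"
  fi
done
```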
@@ -206,6 +206,7 @@ To verify GPU acceleration is turned on:
 
 <br/>
 ![Hardware](./_assets/trouble-shooting-02.png)
+![Hardware](./_assets/trouble-shooting-03.png)
 <br/>
 
 