fixed path issue

Ramon Perez 2025-08-28 21:51:53 +10:00
parent 798cae28a9
commit 786f5f801f
2 changed files with 3 additions and 5 deletions

View File

@@ -18,8 +18,6 @@ keywords:
---
import { Aside, Steps } from '@astrojs/starlight/components'
# Model Parameters
Model parameters control how your AI thinks and responds. Think of them as the AI's personality settings and performance controls.
## How to Access Settings

View File

@@ -34,7 +34,7 @@ llama.cpp is the core inference engine that powers Jan's ability to run AI model
Navigate to **Settings** > **Model Providers** > **Llama.cpp**:
-![llama.cpp Settings](../../../../assets/llama.cpp-01-updated.png)
+![llama.cpp Settings](../../../assets/llama.cpp-01-updated.png)
<Aside type="note">
Most users don't need to change these settings. Jan automatically detects your hardware and picks optimal defaults.
@@ -277,7 +277,7 @@ Control how models use system and GPU memory:
Each model can override engine defaults. Access via the gear icon next to any model:
-![Model Settings](../../../../assets/trouble-shooting-04.png)
+![Model Settings](../../../assets/trouble-shooting-04.png)
| Setting | What It Controls | Impact |
|---------|-----------------|---------|
@@ -385,4 +385,4 @@ export GGML_CUDA_NO_PINNED=1
- [Model Parameters Guide](/docs/jan/explanation/model-parameters) - Fine-tune model behavior
- [Troubleshooting Guide](/docs/jan/troubleshooting) - Detailed problem-solving
- [Hardware Requirements](/docs/desktop/mac#compatibility) - System specifications
-- [API Server Settings](./api-server) - Configure the local API
+- [API Server Settings](./api-server) - Configure the local API