fixed path issue
This commit is contained in:
parent
798cae28a9
commit
786f5f801f
@ -18,8 +18,6 @@ keywords:
---
import { Aside, Steps } from '@astrojs/starlight/components'
# Model Parameters
Model parameters control how your AI thinks and responds. Think of them as the AI's personality settings and performance controls.
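The generation-time parameters this page covers can also be supplied per request. Below is a minimal sketch of a request body in the OpenAI-compatible chat-completion shape that Jan's local server accepts; the model id and the specific values are illustrative assumptions, not Jan defaults:

```python
# Illustrative sketch only: parameter names follow the OpenAI-compatible
# chat completion schema; the model id and values are assumptions.
payload = {
    "model": "llama3.1-8b",                               # assumed model id
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,   # randomness: lower = more deterministic output
    "top_p": 0.95,        # nucleus sampling: keep tokens in the top 95% mass
    "max_tokens": 512,    # upper bound on generated tokens
}
```

A payload like this can be sent with any HTTP client to the local API server once it is enabled.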
## How to Access Settings
@ -34,7 +34,7 @@ llama.cpp is the core inference engine that powers Jan's ability to run AI model
Navigate to **Settings** > **Model Providers** > **Llama.cpp**:


<Aside type="note">
Most users don't need to change these settings. Jan automatically detects your hardware and picks optimal defaults.
@ -277,7 +277,7 @@ Control how models use system and GPU memory:
Each model can override engine defaults. Access via the gear icon next to any model:


| Setting | What It Controls | Impact |
|---------|-----------------|---------|
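Conceptually, a per-model setting simply shadows the corresponding engine default. A minimal sketch of that merge, where the setting names `ctx_len` and `ngl` are illustrative assumptions rather than confirmed Jan keys:

```python
# Hypothetical per-model override: model settings shadow engine defaults.
# The keys ctx_len (context length) and ngl (GPU layers) are illustrative.
engine_defaults = {"ctx_len": 2048, "ngl": 32}
model_overrides = {"ctx_len": 8192}  # this model gets a longer context window

# Effective settings: start from the defaults, let overrides win.
effective = {**engine_defaults, **model_overrides}
```

Settings left untouched in the per-model view keep the engine default, which is why changing an engine setting later still affects models that never overrode it.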
@ -385,4 +385,4 @@ export GGML_CUDA_NO_PINNED=1
- [Model Parameters Guide](/docs/jan/explanation/model-parameters) - Fine-tune model behavior
- [Troubleshooting Guide](/docs/jan/troubleshooting) - Detailed problem-solving
- [Hardware Requirements](/docs/desktop/mac#compatibility) - System specifications
- [API Server Settings](./api-server) - Configure the local API
- [API Server Settings](./api-server) - Configure the local API