Update 2025-09-18-auto-optimize-vision-imports.mdx
parent 2c251d0cef
commit b6169a48e6
@@ -19,11 +19,9 @@ import { Callout } from 'nextra/components'
 
 ### 🚀 Auto Optimize (Experimental)
 
-**Intelligent performance tuning** — Jan now automatically applies the best llama.cpp settings for your specific hardware:
+**Intelligent performance tuning** — Jan can now apply the best llama.cpp settings for your specific hardware:
 - **Hardware analysis**: Automatically detects your CPU, GPU, and memory configuration
 - **Optimal settings**: Applies recommended parameters for maximum performance
-- **One-click optimization**: Enable with a single toggle in experimental settings
-- **Performance boost**: Get the best possible inference speed without manual tuning
+- **One-click optimization**: Applies optimal parameters with a single click in model settings
 
 <Callout type="info">
 Auto Optimize is currently experimental and will be refined based on user feedback. It analyzes your system specs and applies proven configurations for optimal llama.cpp performance.
@@ -35,7 +33,6 @@ Auto Optimize is currently experimental and will be refined based on user feedback.
 
 **Enhanced multimodal support** — Import and use vision models seamlessly:
 - **Direct vision model import**: Import vision-capable models from any source
 - **Automatic capability detection**: Jan identifies vision support automatically
 - **Improved compatibility**: Better handling of multimodal model formats
 
 ### 🔧 Custom Backend Support