Update 2025-09-18-auto-optimize-vision-imports.mdx

Bui Quang Huy authored on 2025-09-20 09:04:11 +08:00, committed by GitHub
parent 2c251d0cef
commit b6169a48e6

@@ -19,11 +19,9 @@ import { Callout } from 'nextra/components'
 ### 🚀 Auto Optimize (Experimental)
-**Intelligent performance tuning** — Jan now automatically applies the best llama.cpp settings for your specific hardware:
+**Intelligent performance tuning** — Jan can now apply the best llama.cpp settings for your specific hardware:
 - **Hardware analysis**: Automatically detects your CPU, GPU, and memory configuration
-- **Optimal settings**: Applies recommended parameters for maximum performance
-- **One-click optimization**: Enable with a single toggle in experimental settings
-- **Performance boost**: Get the best possible inference speed without manual tuning
+- **One-click optimization**: Applies optimal parameters with a single click in model settings
 <Callout type="info">
 Auto Optimize is currently experimental and will be refined based on user feedback. It analyzes your system specs and applies proven configurations for optimal llama.cpp performance.
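
The Auto Optimize entry above amounts to mapping detected hardware onto llama.cpp launch parameters. The sketch below is only an illustration of that idea: the interfaces, thresholds, and selection rules are invented here and are not Jan's implementation, although the parameters they target (`--threads`, `--n-gpu-layers`, `--ctx-size`, `--flash-attn`) are standard llama.cpp options.

```typescript
// Hypothetical sketch: the kind of hardware-to-settings mapping the
// "Auto Optimize" entry describes. Names and thresholds are invented
// for illustration; this is not Jan's actual implementation.

interface HardwareInfo {
  cpuThreads: number; // logical cores available
  vramMiB: number;    // total GPU memory, 0 if no usable GPU
  ramMiB: number;     // system memory
}

interface LlamaCppSettings {
  threads: number;         // --threads
  gpuLayers: number;       // --n-gpu-layers
  contextSize: number;     // --ctx-size
  flashAttention: boolean; // --flash-attn, where the backend supports it
}

function suggestSettings(
  hw: HardwareInfo,
  modelSizeMiB: number,
  layerCount: number,
): LlamaCppSettings {
  // Leave a couple of cores free for the UI and the OS.
  const threads = Math.max(1, hw.cpuThreads - 2);

  // Offload as many layers as fit in ~90% of VRAM; otherwise stay on CPU.
  const perLayerMiB = modelSizeMiB / layerCount;
  const gpuLayers =
    hw.vramMiB > 0
      ? Math.min(layerCount, Math.floor((hw.vramMiB * 0.9) / perLayerMiB))
      : 0;

  // Use a smaller context window on memory-constrained machines.
  const contextSize = hw.ramMiB >= 16_384 ? 8192 : 4096;

  return { threads, gpuLayers, contextSize, flashAttention: hw.vramMiB > 0 };
}
```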
@@ -35,7 +33,6 @@ Auto Optimize is currently experimental and will be refined based on user feedba
 **Enhanced multimodal support** — Import and use vision models seamlessly:
 - **Direct vision model import**: Import vision-capable models from any source
-- **Automatic capability detection**: Jan identifies vision support automatically
 - **Improved compatibility**: Better handling of multimodal model formats
 ### 🔧 Custom Backend Support
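
For the vision import changes, the practical question is how an importer can tell that a model is vision-capable at all. As an assumption for illustration only: llama.cpp-style multimodal models are typically distributed as a main GGUF plus an `mmproj-*.gguf` projector file, so a minimal heuristic can simply look for that companion file. This is a sketch, not Jan's detection logic.

```typescript
// Hypothetical heuristic: flag an import as vision-capable if a companion
// "mmproj" projector GGUF sits next to the main model file. This mirrors a
// common llama.cpp packaging convention and is not Jan's actual code.

import { readdirSync } from "node:fs";
import { basename, dirname } from "node:path";

function looksLikeVisionImport(modelPath: string): boolean {
  const dir = dirname(modelPath);
  const model = basename(modelPath).toLowerCase();

  return readdirSync(dir).some((name) => {
    const lower = name.toLowerCase();
    // e.g. "mmproj-model-f16.gguf" alongside the main quantized model GGUF
    return lower.endsWith(".gguf") && lower.includes("mmproj") && lower !== model;
  });
}
```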