From b6169a48e6e43d639865760456004de878d68661 Mon Sep 17 00:00:00 2001
From: Bui Quang Huy <34532913+LazyYuuki@users.noreply.github.com>
Date: Sat, 20 Sep 2025 09:04:11 +0800
Subject: [PATCH] Update 2025-09-18-auto-optimize-vision-imports.mdx

---
 .../changelog/2025-09-18-auto-optimize-vision-imports.mdx | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/docs/src/pages/changelog/2025-09-18-auto-optimize-vision-imports.mdx b/docs/src/pages/changelog/2025-09-18-auto-optimize-vision-imports.mdx
index 5e77e39c9..e9d814e1a 100644
--- a/docs/src/pages/changelog/2025-09-18-auto-optimize-vision-imports.mdx
+++ b/docs/src/pages/changelog/2025-09-18-auto-optimize-vision-imports.mdx
@@ -19,11 +19,9 @@ import { Callout } from 'nextra/components'
 
 ### 🚀 Auto Optimize (Experimental)
 
-**Intelligent performance tuning** — Jan now automatically applies the best llama.cpp settings for your specific hardware:
+**Intelligent performance tuning** — Jan can now apply the best llama.cpp settings for your specific hardware:
 
 - **Hardware analysis**: Automatically detects your CPU, GPU, and memory configuration
-- **Optimal settings**: Applies recommended parameters for maximum performance
-- **One-click optimization**: Enable with a single toggle in experimental settings
-- **Performance boost**: Get the best possible inference speed without manual tuning
+- **One-click optimization**: Applies optimal parameters with a single click in model settings
 
 Auto Optimize is currently experimental and will be refined based on user feedback. It analyzes your system specs and applies proven configurations for optimal llama.cpp performance.
@@ -35,7 +33,6 @@ Auto Optimize is currently experimental and will be refined based on user feedba
 **Enhanced multimodal support** — Import and use vision models seamlessly:
 
 - **Direct vision model import**: Import vision-capable models from any source
-- **Automatic capability detection**: Jan identifies vision support automatically
 - **Improved compatibility**: Better handling of multimodal model formats
 
 ### 🔧 Custom Backend Support