Merge pull request #6524 from menloresearch/docs/update-changelog
docs: update changelog for v0.6.10
---
title: "Jan v0.6.10: Auto Optimize, custom backends, and vision model imports"
version: 0.6.10
description: "New experimental Auto Optimize feature, custom llama.cpp backend support, vision model imports, and critical bug fixes"
date: 2025-09-18
ogImage: "/assets/images/changelog/jan-v0.6.10-auto-optimize.gif"
---

import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
import { Callout } from 'nextra/components'

<ChangelogHeader title="Jan v0.6.10: Auto Optimize, custom backends, and vision model imports" date="2025-09-18" ogImage="/assets/images/changelog/jan-v0.6.10-auto-optimize.gif" />

## Highlights 🎉

- **Auto Optimize**: One-click, hardware-aware performance tuning for llama.cpp.
- **Custom Backend Support**: Import and manage your preferred llama.cpp versions.
- **Import Vision Models**: Seamlessly import and use vision-capable models.

### 🚀 Auto Optimize (Experimental)

**Intelligent performance tuning** — Jan can now apply the best llama.cpp settings for your specific hardware:

- **Hardware analysis**: Automatically detects your CPU, GPU, and memory configuration
- **One-click optimization**: Applies optimal parameters with a single click in model settings

<Callout type="info">
Auto Optimize is currently experimental and will be refined based on user feedback. It analyzes your system specs and applies proven configurations for optimal llama.cpp performance.
</Callout>
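The release doesn't document Jan's exact heuristics, but the idea — map detected hardware to llama.cpp parameters such as GPU layer offload, thread count, and context size — can be sketched roughly. Everything below (the function name, the thresholds, the layer-count estimate) is a hypothetical illustration, not Jan's actual implementation:

```python
def auto_optimize(vram_gb: float, ram_gb: float, cpu_threads: int,
                  model_size_gb: float) -> dict:
    """Pick llama.cpp settings from detected hardware (illustrative only)."""
    if vram_gb >= model_size_gb * 1.2:  # model plus KV-cache headroom fits on GPU
        gpu_layers = -1                 # -1: offload every layer
    elif vram_gb > 0:
        # Partial offload: rough estimate of how many layers fit in VRAM,
        # assuming ~32 layers per model (an arbitrary placeholder).
        gpu_layers = int(32 * vram_gb / model_size_gb)
    else:
        gpu_layers = 0                  # no GPU detected: CPU-only

    return {
        "n_gpu_layers": gpu_layers,
        "threads": max(1, cpu_threads - 2),   # leave headroom for OS and UI
        "ctx_size": 8192 if ram_gb >= 16 else 4096,
    }
```

The appeal of one-click tuning is exactly that these trade-offs (offload depth vs. VRAM, threads vs. system responsiveness) are easy to get wrong by hand.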
### 👁️ Vision Model Imports

<img src="/assets/images/changelog/jan-import-vlm-model.gif" alt="Vision Model Import Demo" width="600" />

**Enhanced multimodal support** — Import and use vision models seamlessly:

- **Direct vision model import**: Import vision-capable models from any source
- **Improved compatibility**: Better handling of multimodal model formats

### 🔧 Custom Backend Support

**Import your preferred llama.cpp version** — Full control over your AI backend:

- **Custom llama.cpp versions**: Import and use any llama.cpp build you prefer
- **Version flexibility**: Use bleeding-edge builds or stable releases
- **Backup CDN**: New CDN fallback when GitHub downloads fail
- **User confirmation**: Prompts before auto-updating llama.cpp
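The CDN fallback behavior described above amounts to trying download sources in order and moving on when one fails. A minimal sketch of that pattern, with hypothetical mirror URLs (Jan's real endpoints may differ):

```python
from typing import Callable, Sequence

def fetch_with_fallback(urls: Sequence[str],
                        fetch: Callable[[str], bytes]) -> bytes:
    """Try each mirror in order; return the first successful download."""
    last_error = None
    for url in urls:
        try:
            return fetch(url)
        except OSError as err:  # network failure: remember it, try next mirror
            last_error = err
    raise RuntimeError("all download sources failed") from last_error

# Hypothetical mirror list for illustration only.
MIRRORS = [
    "https://github.com/ggml-org/llama.cpp/releases/download/b4000/llama-bin.zip",
    "https://cdn.example.com/llama.cpp/b4000/llama-bin.zip",  # backup CDN
]
```

Taking the fetch function as a parameter keeps the fallback logic independent of any particular HTTP client, so it can be exercised without touching the network.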
Update Jan in-app or [download the latest version](https://jan.ai/).
For the complete list of changes, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.6.10).