21 Commits

Author SHA1 Message Date
Vanalite
03ee9c14a3 fix: Apply model name change correctly
# Conflicts:
#	web-app/src/lib/utils.ts
2025-09-29 16:39:32 +07:00
Akarshan Biswas
11b3a60675
fix: refactor, fix and move gguf support utilities to backend (#6584)
* feat: move estimateKVCacheSize to BE

* feat: Migrate model planning to backend

This commit migrates the model-load planning logic from the frontend to the Tauri backend, refactoring the `planModelLoad` and `isModelSupported` methods into the `tauri-plugin-llamacpp` plugin so they can be called directly from the Rust core.

The model planning now incorporates a more robust and accurate memory estimation, considering both VRAM and system RAM, and introduces a `batch_size` parameter to the model plan.

**Key changes:**

- **Moved `planModelLoad` to `tauri-plugin-llamacpp`:** The core logic for determining GPU layers, context length, and memory offloading is now in Rust for better performance and accuracy.
- **Moved `isModelSupported` to `tauri-plugin-llamacpp`:** The model support check is also now handled by the backend.
- **Removed `getChatClient` from `AIEngine`:** This optional method was not implemented and has been removed from the abstract class.
- **Improved KV Cache estimation:** The `estimate_kv_cache_internal` function in Rust now accounts for `attention.key_length` and `attention.value_length` if available, and considers sliding window attention for more precise estimates.
- **Introduced `batch_size` in ModelPlan:** The model plan now includes a `batch_size` property, which will be automatically adjusted based on the determined `ModelMode` (e.g., lower for CPU/Hybrid modes).
- **Updated `llamacpp-extension`:** The frontend extension now calls the new Tauri commands for model planning and support checks (see the sketch after this list).
- **Removed `batch_size` from `llamacpp-extension/settings.json`:** The batch size is now dynamically determined by the planning logic and will be set as a model setting directly.
- **Updated `ModelSetting` and `useModelProvider` hooks:** These now handle the new `batch_size` property in model settings.
- **Added new Tauri commands and permissions:** `get_model_size`, `is_model_supported`, and `plan_model_load` are new commands with corresponding permissions.
- **Consolidated `ModelSupportStatus` and `KVCacheEstimate`:** These types are now defined in `src/tauri/plugins/tauri-plugin-llamacpp/src/gguf/types.rs`.

This refactoring centralizes critical model resource management logic, improving consistency and maintainability, and lays the groundwork for more sophisticated model loading strategies.
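
A minimal TypeScript sketch of how the extension might call these commands, assuming the plugin registers under the `llamacpp` route; the argument and `ModelPlan` field names here are illustrative (only the command name and the new `batch_size` property come from this change):

```ts
import { invoke } from '@tauri-apps/api/core'

// Plan returned by the backend. Field names are illustrative except
// batchSize, which mirrors the batch_size property added to ModelPlan.
interface ModelPlan {
  gpuLayers: number
  maxContextLength: number
  offloadKVCache: boolean
  mode: 'GPU' | 'Hybrid' | 'CPU' | 'Unsupported'
  batchSize: number
}

// Hypothetical wrapper around the new Tauri command; the exact route and
// argument names depend on how the plugin registers its commands.
export async function planModelLoad(
  modelPath: string,
  memoryMode: 'high' | 'medium' | 'low'
): Promise<ModelPlan> {
  return invoke<ModelPlan>('plugin:llamacpp|plan_model_load', {
    modelPath,
    memoryMode,
  })
}
```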

* feat: refine model planner to handle more memory scenarios

This commit introduces several improvements to the `plan_model_load` function, enhancing its ability to determine a suitable model loading strategy based on system memory constraints. Specifically, it includes:

-   **VRAM calculation improvements:** Corrects the total-VRAM calculation by iterating over all GPUs and converting the reported megabytes to bytes (multiplying by 1024*1024).
-   **Hybrid plan optimization:**  Implements a more robust hybrid plan strategy, iterating through GPU layer configurations to find the highest possible GPU usage while remaining within VRAM limits.
-   **Minimum context length enforcement:** Enforces a minimum context length for the model, ensuring that the model can be loaded and used effectively.
-   **Fallback to CPU mode:** If a hybrid plan isn't feasible, it now correctly falls back to a CPU-only mode.
-   **Improved logging:** Enhanced logging to provide more detailed information about the memory planning process, including VRAM, RAM, and GPU layers.
-   **Batch size adjustment:** Updated batch size based on the selected mode, ensuring efficient utilization of available resources.
-   **Error handling and edge cases:**  Improved error handling and edge case management to prevent unexpected failures.
-   **Constants:** Added constants for easier maintenance and understanding.
-   **Power-of-2 adjustment:** Added a power-of-2 adjustment for the maximum context length to ensure correct sizing for the LLM (see the sketch after this list).

These changes improve the reliability and robustness of the model planning process, allowing it to handle a wider range of hardware configurations and model sizes.
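
A TypeScript sketch of the power-of-2 adjustment described in the list above (the backend logic is Rust; `MIN_CONTEXT_LENGTH` and its value are illustrative):

```ts
// Round a context length down to the nearest power of two, clamped to an
// enforced minimum, so the final context size is well-formed for the LLM.
const MIN_CONTEXT_LENGTH = 2048 // illustrative; the commit only says a minimum is enforced

function adjustContextLength(maxTokensThatFit: number): number {
  let ctx = MIN_CONTEXT_LENGTH
  while (ctx * 2 <= maxTokensThatFit) {
    ctx *= 2
  }
  return ctx
}

// Example: a memory budget that fits 7000 tokens yields a 4096-token context.
```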

* Add log for raw GPU info from tauri-plugin-hardware

* chore: update linux runner for tauri build

* feat: Improve GPU memory calculation for unified memory

This commit improves the logic for calculating usable VRAM, particularly for systems with **unified memory** like Apple Silicon. Previously, the application would report 0 total VRAM if no dedicated GPUs were found, leading to incorrect calculations and failed model loads.

This change modifies the VRAM calculation to fall back to the total system RAM if no discrete GPUs are detected. This is a common and correct approach for unified memory architectures, where the CPU and GPU share the same memory pool.

Additionally, this commit refactors the logic for calculating usable VRAM and RAM to prevent potential underflow by checking if the total memory is greater than the reserved bytes before subtracting. This ensures the calculation remains safe and correct.
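
A TypeScript sketch of both ideas (the actual logic lives in the Rust plugin; the types and names here are illustrative):

```ts
interface GpuInfo {
  totalMemoryBytes: number
}

// Unified-memory fallback: when no discrete GPU is reported (e.g. Apple
// Silicon), treat total system RAM as the shared VRAM pool.
function totalVram(gpus: GpuInfo[], totalSystemRamBytes: number): number {
  const discrete = gpus.reduce((sum, g) => sum + g.totalMemoryBytes, 0)
  return discrete > 0 ? discrete : totalSystemRamBytes
}

// Underflow-safe usable memory: only subtract the reservation when the
// total actually exceeds it (mirroring a checked subtraction in Rust).
function usableMemory(totalBytes: number, reservedBytes: number): number {
  return totalBytes > reservedBytes ? totalBytes - reservedBytes : 0
}
```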

* chore: fix update migration version

* fix: enable unified memory support on model support indicator

* Use total_system_memory in bytes

---------

Co-authored-by: Minh141120 <minh.itptit@gmail.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-09-25 12:17:57 +05:30
Akarshan
8f67f29317
feat: add support for mmproj offload setting
Expose the new `mmproj_offload` option in the model settings UI and include it in the `ModelPlan` type. The component now collects the offload flag (`result.offloadMmproj`) and queues it with other setting updates to ensure a single atomic change, preventing race conditions when toggling this feature. This enables users to control MMProj offloading directly from the app.
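
A sketch of the queued update in TypeScript; `updateModelSettings` is an assumed helper and the `ngl`/`ctx_len` keys are illustrative, while `mmproj_offload` and `result.offloadMmproj` come from this change:

```ts
interface ModelPlanResult {
  gpuLayers: number
  maxContextLength: number
  offloadMmproj: boolean // the flag collected by the component
}

// Assumed signature for whatever the app uses to persist model settings.
declare function updateModelSettings(
  modelId: string,
  updates: Record<string, unknown>
): Promise<void>

// Queue every derived setting and commit them in one call, so toggling
// mmproj offload can never interleave with the other writes.
async function applyPlan(modelId: string, result: ModelPlanResult) {
  await updateModelSettings(modelId, {
    ngl: result.gpuLayers,
    ctx_len: result.maxContextLength,
    mmproj_offload: result.offloadMmproj,
  })
}
```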
2025-09-11 13:08:01 +05:30
Faisal Amir
14c7fc0450 chore: update argument 2025-09-11 14:23:56 +07:00
Faisal Amir
bc29046c06 enhancement: send params mmproj_path for optimize setting 2025-09-11 13:23:25 +07:00
Faisal Amir
791563e6ba enhancement: add label experimental for optimize setting 2025-09-11 13:11:37 +07:00
Akarshan Biswas
7a174e621a
feat: Smart model management (#6390)
* feat: Smart model management

* **New UI option** – `memory_util` added to `settings.json` with a dropdown (high / medium / low) to let users control how aggressively the engine uses system memory.
* **Configuration updates** – `LlamacppConfig` now includes `memory_util`; the extension class stores it in a new `memoryMode` property and handles updates through `updateConfig`.
* **System memory handling**
  * Introduced `SystemMemory` interface and `getTotalSystemMemory()` to report combined VRAM + RAM.
  * Added helper methods `getKVCachePerToken`, `getLayerSize`, and a new `ModelPlan` type.
* **Smart model‑load planner** – `planModelLoad()` computes:
  * Number of GPU layers that can fit in usable VRAM.
  * Maximum context length based on KV‑cache size and the selected memory utilization mode (high/medium/low).
  * Whether KV‑cache must be off‑loaded to CPU and the overall loading mode (GPU, Hybrid, CPU, Unsupported).
  * Detailed logging of the planning decision.
* **Improved support check** – `isModelSupported()` now:
  * Uses the combined VRAM/RAM totals from `getTotalSystemMemory()`.
  * Applies an 80% usable‑memory heuristic.
  * Returns **GREEN** only when both weights and KV‑cache fit in VRAM, **YELLOW** when they fit only in total memory or require CPU off‑load, and **RED** when the model cannot fit at all (see the sketch after this list).
* **Cleanup** – Removed unused `GgufMetadata` import; updated imports and type definitions accordingly.
* **Documentation/comments** – Added explanatory JSDoc comments for the new methods and clarified the return semantics of `isModelSupported`.
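
A minimal TypeScript sketch of the support heuristic described above (the function shape and parameter names are illustrative):

```ts
type SupportStatus = 'GREEN' | 'YELLOW' | 'RED'

// 80% of each pool is treated as usable. GREEN: weights + KV cache fit in
// VRAM alone. YELLOW: they only fit in combined VRAM + RAM (or need CPU
// off-load). RED: the model cannot fit at all.
function isModelSupported(
  weightsBytes: number,
  kvCacheBytes: number,
  vramBytes: number,
  totalMemoryBytes: number // combined VRAM + RAM
): SupportStatus {
  const required = weightsBytes + kvCacheBytes
  if (required <= vramBytes * 0.8) return 'GREEN'
  if (required <= totalMemoryBytes * 0.8) return 'YELLOW'
  return 'RED'
}
```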

* chore: migrate no_kv_offload from llamacpp setting to model setting

* chore: add UI auto optimize model setting

* feat: improve model loading planner with mmproj support and smarter memory budgeting

* Extend `ModelPlan` with optional `noOffloadMmproj` flag to indicate when a multimodal projector can stay in VRAM.
* Add `mmprojPath` parameter to `planModelLoad` and calculate its size, attempting to keep it on GPU when possible.
* Refactor system memory detection:
  * Use `used_memory` (actual free RAM) instead of total RAM for budgeting.
  * Introduced `usableRAM` placeholder for future use.
* Rewrite KV‑cache size calculation:
  * Properly handle GQA models via `attention.head_count_kv`.
  * Compute bytes per token as `nHeadKV * headDim * 2 * 2 * nLayer` (see the sketch after this list).
* Replace the old 70% VRAM heuristic with a more flexible budget:
  * Reserve a fixed VRAM amount and apply an overhead factor.
  * Derive usable system RAM from total memory minus VRAM.
* Implement a robust allocation algorithm:
  * Prioritize placing the mmproj in VRAM.
  * Search for the best balance of GPU layers and context length.
  * Fallback strategies for hybrid and pure‑CPU modes with detailed safety checks.
* Add extensive validation of model size, KV‑cache size, layer size, and memory mode.
* Improve logging throughout the planning process for easier debugging.
* Adjust final plan return shape to include the new `noOffloadMmproj` field.
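
The per-token formula from the list above, as a TypeScript sketch (the example numbers are illustrative):

```ts
// KV-cache bytes per token for a GQA model:
// nHeadKV * headDim * 2 (K and V) * 2 (bytes per f16 element) * nLayer.
function kvBytesPerToken(
  nHeadKV: number, // attention.head_count_kv from the GGUF metadata
  headDim: number,
  nLayer: number
): number {
  return nHeadKV * headDim * 2 * 2 * nLayer
}

// The largest context that fits a given memory budget then follows directly.
function maxContextForBudget(budgetBytes: number, perTokenBytes: number): number {
  return Math.floor(budgetBytes / perTokenBytes)
}

// Example: 8 KV heads, head dim 128, 32 layers => 131072 bytes (~128 KiB)
// of KV cache per token of context.
```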

* remove unused variable

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-09-11 09:48:03 +05:30
Dinh Long Nguyen
a30eb7f968
feat: Jan Web (reusing Jan Desktop UI) (#6298)
* add platform guards

* add service management

* fix types

* move to zustand for servicehub

* update App Updater

* update tauri missing move

* update app updater

* refactor: move PlatformFeatures to separate const file

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* change tauri fetch name

* update implementation

* update extension fetch

* make web version run properly

* disabled unused web settings

* fix all tests

* fix lint

* fix tests

* add mock for extension

* fix build

* update make and mise

* fix tsconfig for web-extensions

* fix loader type

* cleanup

* fix test

* update error handling + mcp should be working

* Update mcp init

* use separate is_web_app build property

* Remove fixed model catalog url

* fix additional tests

* fix download issue (event emitter not implemented correctly)

* Update Title html

* fix app logs

* update root tsx render timing

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-05 01:47:46 +07:00
Faisal Amir
fdc8e07f86 chore: update model setting include offload-mmproj 2025-08-19 20:00:08 +07:00
Akarshan Biswas
1f1605bdf9
feat: Add support for overriding tensor buffer type (#6062)
* feat: Add support for overriding tensor buffer type

This commit introduces a new configuration option, `override_tensor_buffer_t`, which allows users to specify a regex for matching tensor names to override their buffer type. This is an advanced setting primarily useful for optimizing the performance of large models, particularly Mixture of Experts (MoE) models.

By overriding the tensor buffer type, users can keep critical parts of the model, like the attention layers, on the GPU while offloading other parts, such as the expert feed-forward networks, to the CPU. This can lead to significant speed improvements for massive models.

Additionally, this change refines the error message to be more specific when a model fails to load. The previous message "Failed to load llama-server" has been updated to "Failed to load model" to be more accurate.
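
A TypeScript sketch of how such a setting might be passed through to llama-server; the `--override-tensor` flag is llama.cpp's tensor buffer-type override, while the regex, `-ngl` value, and the rest of the argument list here are illustrative:

```ts
const modelPath = '/path/to/model.gguf' // placeholder

// Example value for override_tensor_buffer_t: match MoE expert FFN tensors.
const overrideTensorBufferT = String.raw`\.ffn_.*_exps\.=CPU`

const args = [
  '--model', modelPath,
  '-ngl', '99', // keep attention and the other layers on the GPU
]
if (overrideTensorBufferT.length > 0) {
  // Tensors whose names match the regex are placed on the CPU buffer type,
  // while everything else stays on the GPU.
  args.push('--override-tensor', overrideTensorBufferT)
}
```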

* chore: update FE to support override-tensor

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-08-07 10:31:34 +05:30
Faisal Amir
5d001dfd5a
feat: jinja template customize per model instead provider level (#6053) 2025-08-05 21:21:41 +07:00
Akarshan Biswas
a1af70f7a9
feat: Enhance Llama.cpp backend management with persistence (#5886)
* feat: Enhance Llama.cpp backend management with persistence

This commit introduces significant improvements to how the Llama.cpp extension manages and updates its backend installations, focusing on user preference persistence and smarter auto-updates.

Key changes include:

* **Persistent Backend Type Preference:** The extension now stores the user's preferred backend type (e.g., `cuda`, `cpu`, `metal`) in `localStorage`. This ensures that even after updates or restarts, the system attempts to use the user's previously selected backend type, if available.
* **Intelligent Auto-Update:** The auto-update mechanism has been refined to prioritize updating to the **latest version of the *currently selected backend type*** rather than always defaulting to the "best available" backend (which might change). This respects user choice while keeping the chosen backend type up-to-date.
* **Improved Initial Installation/Configuration:** For fresh installations or cases where the `version_backend` setting is invalid, the system now intelligently determines and installs the best available backend, then persists its type.
* **Refined Old Backend Cleanup:** The `removeOldBackends` function has been renamed to `removeOldBackend` and modified to specifically clean up *older versions of the currently selected backend type*, preventing the accumulation of unnecessary files while preserving other backend types the user might switch to.
* **Robust Local Storage Handling:** New private methods (`getStoredBackendType`, `setStoredBackendType`, `clearStoredBackendType`) are introduced to safely interact with `localStorage`, including error handling for potential `localStorage` access issues (sketched below).
* **Version Filtering Utility:** A new utility `findLatestVersionForBackend` helps in identifying the latest available version for a specific backend type from a list of supported backends.

These changes provide a more stable, user-friendly, and maintainable backend management experience for the Llama.cpp extension.

Fixes: #5883
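
A TypeScript sketch of the described helpers, folding in the redundant-write optimization from the follow-up commit below; the storage key name is illustrative:

```ts
const BACKEND_TYPE_KEY = 'llamacpp_backend_type' // illustrative key name

// localStorage can throw (e.g. in restrictive privacy modes), so every
// access is wrapped in try/catch, as the commit describes.
function getStoredBackendType(): string | null {
  try {
    return localStorage.getItem(BACKEND_TYPE_KEY)
  } catch {
    return null
  }
}

function setStoredBackendType(backendType: string): void {
  try {
    // Skip redundant writes (the follow-up optimization).
    if (localStorage.getItem(BACKEND_TYPE_KEY) !== backendType) {
      localStorage.setItem(BACKEND_TYPE_KEY, backendType)
    }
  } catch {
    // Ignore storage failures; the best backend can be recomputed later.
  }
}

function clearStoredBackendType(): void {
  try {
    localStorage.removeItem(BACKEND_TYPE_KEY)
  } catch {
    // Ignore.
  }
}
```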

* fix: cortex models migration should be done once

* feat: Optimize Llama.cpp backend preference storage and UI updates

This commit refines the Llama.cpp extension's backend management by:

* **Optimizing `localStorage` Writes:** The system now only writes the backend type preference to `localStorage` if the new value is different from the currently stored one. This reduces unnecessary `localStorage` operations.
* **Ensuring UI Consistency on Initial Setup:** When a fresh installation or an invalid backend configuration is detected, the UI settings are now explicitly updated to reflect the newly determined `effectiveBackendString`, ensuring the displayed setting matches the active configuration.

These changes improve performance by reducing redundant storage operations and enhance user experience by maintaining UI synchronization with the backend state.

* Revert "fix: provider settings should be refreshed on page load (#5887)"

This reverts commit ce6af62c7df4a7e7ea8c0896f307309d6bf38771.

* fix: add loader version backend llamacpp

* fix: wrong key name

* fix: model setting issues

* fix: virtual dom hub

* chore: cleanup

* chore: hide device offload setting

---------

Co-authored-by: Louis <louis@jan.ai>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-07-24 18:33:35 +07:00
Faisal Amir
1d443e1f7d
fix: support load model configurations (#5843)
* fix: support load model configurations

* chore: remove log

* chore: sampling params add from send completion

* chore: remove comment

* chore: remove comment on predefined file

* chore: update test model service
2025-07-22 19:52:12 +07:00
Sam Hoang Van
c32dd092d0
Enhance i18n and add missing i18n for all components (#5314)
* Refactor translation imports and update text for localization across settings and system monitor routes

- Changed translation import from 'react-i18next' to '@/i18n/react-i18next-compat' in multiple files.
- Updated various text strings to use translation keys for better localization support in:
  - Local API Server settings
  - MCP Servers settings
  - Privacy settings
  - Provider settings
  - Shortcuts settings
  - System Monitor
  - Thread details
- Ensured consistent use of translation keys for all user-facing text (see the sketch below).
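
A minimal sketch of the pattern in TypeScript (the translation key is illustrative, not taken from the actual locale files):

```tsx
import { useTranslation } from '@/i18n/react-i18next-compat'

// Before: a hard-coded string; after: a translation key resolved at render.
function PrivacySettingsTitle() {
  const { t } = useTranslation()
  return <h1>{t('settings:privacy.title')}</h1>
}
```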

Update web-app/src/routes/settings/appearance.tsx

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

Update web-app/src/routes/settings/appearance.tsx

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

Update web-app/src/locales/vn/settings.json

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

Update web-app/src/containers/dialogs/DeleteMCPServerConfirm.tsx

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

Update web-app/src/locales/id/common.json

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Add Chinese (Simplified and Traditional) localization files for various components

- Created `tools.json`, `updater.json`, `assistants.json`, `chat.json`, `common.json`, `hub.json`, `logs.json`, `mcp-servers.json`, `provider.json`, `providers.json`, `settings.json`, `setup.json`, `system-monitor.json`, `tool-approval.json` in both `zh-CN` and `zh-TW` locales.
- Added translations for tool approval, updater notifications, assistant management, chat interface, common UI elements, hub interactions, logging messages, MCP server configurations, provider management, settings options, setup instructions, and system monitoring.

* Refactor localization strings for improved clarity and consistency in English, Indonesian, and Vietnamese settings files

* Fix missing key and reword

* fix pr comment

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-06-20 15:33:54 +07:00
Faisal Amir
3accef8c92
fix: minor ui (#5247) 2025-06-12 09:04:47 +07:00
Louis
fb7dc21135
chore: update MCP servers list (#5195)
* chore: update MCP servers list

* chore: rename

* chore: attempt to stop models before updating

* chore: clean up logs

* chore: fix bug refresh
2025-06-04 21:47:27 +07:00
Louis
0fbc4a4664
chore: add function to model settings (#5108) 2025-05-26 18:53:08 +07:00
Faisal Amir
8afb962739
feat: add quick access model setting via dropdown model (#5104)
* feat: add quick access model setting via dropdown model

* Update web-app/src/containers/DropdownModelProvider.tsx

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-05-26 13:29:09 +07:00
Louis
6676e0ced8
chore: add relocate jan data folder function to new FE (#5043)
* chore: typo

* fix: linter issues

* chore: fix linter

* chore: fix linter

* chore: add relocate data folder
2025-05-21 10:48:10 +07:00
vansangpfiev
90da49f873
chore: add some ts-ignore to make tauri build work (#5010)
* chore: add some ts-ignore to make build work

* chore: remove tauri build nightly script

* chore: update core package.json

* chore: fix build

* chore: add devtools for tauri beta, nightly

* chore: change transport-sse to transport-sse-server

* chore: comment out dll files

* chore: add ts-ignore

* chore: update nightly CI
2025-05-19 14:49:03 +07:00
Faisal Amir
852ea84cd8
epic: Jan with new UI/UX (#4964)
* chore: initial new FE setup

* chore: update namespace text-left-panel foreground variable

* chore: enable dynamic mainview color

* chore: remove greetings new chat

* chore: fix chat input style

* chore: simplify hook useAppearance

* chore: enable internationalization

* chore: prepare vn locale

* chore: keyboardshortcut layout

* chore: update keyboard shortcut exclude pathname

* chore: update state active setting route

* chore: fix update theme by system

* chore: handle dynamic primary color

* chore: fix left panel navigation active state and style privacy analytics item

* chore: reorder general settings to be first

* chore: add function reset appearance

* chore: update scrollbar

* chore: update delete thread with dialog confirmation

* chore: update state dialog inside dropdown menu

* chore: wip thread detail or chat page

* chore: wip model dropdown

* chore: prepare model dropdown select

* chore: update model providers setting

* chore: show provider on model dropdown based on isActive toggle

* chore: update layout model provider

* chore: update state active on storage

* chore: update gap of item dropdown model

* chore: update select model base on id

* chore: update edit model capabilities

* chore: add dialog to add model

* chore: update sheet for model setting

* chore: add sheet setting each model

* chore: make dynamic syntax highlight

* chore: fix menu setting appearance theme

* chore: markdown render support emoji

* chore: markdown support latex

* chore: change codeblock default theme

* chore: update ui codeblock

* chore: custom render link target new window

* chore: fix copy button codeblock

* chore: update accent and destructive color

* chore: setup user chat message

* chore: prepare some page settings

* chore: simple list extension and prepare mcp, local api, and hardware

* chore: mcp-server

* chore: MCP server UI

* chore: update local api server config

* chore: adjust chat input

* chore: update local api server log

* chore: prepare hub page

* chore: remove help page

* chore: update mock

* chore: prepare http proxy setting UI

* chore: adjust local api server and title every action

* fix: chore FE package (#4962)

* fix: update command which referred to non-existent web app

* fix: added commented out macos platform for now

* fix: remove the platform name as macos

* fix: remove unnecessary line for platform name in HeaderPage component

* fix: update dev script to specify port 3000 for Vite

* feat: model providers and chat completion

* enhancement: threads performance

* fix: thread content update

* chore: clean up threads

* fix: performance issue with streaming and state loop

* fix: streaming

* fix: react markdown

* feat: extension manager

* chore: add nodePolyfills include path

* chore: improve performance, avoid unhandled rejection

* chore: update pre margin bottom

* chore: switch thread should default to scroll to bottom

* chore: wip scroll to bottom

* chore: add model loader

* chore: add platform utils

* feat: threads functionality

* chore: setup toaster

* chore: persist threads deletion

* fix: create thread with new message

* chore: create new thread should change route path

* chore: navigate after delete thread dialog

* chore: thread favorites and orders

* chore: dismiss deleting modal on delete

* chore: remove undefined properties

* chore: remove deprecated run step

* chore: fix delete thread

* chore: create empty thread content on started streaming

* chore: correct messages store key

* chore: fix stuck at generating state

* chore: prepare chat toolbar

* chore: introduce in-memory app state

* chore: update extensions migration logic

* chore: remove redundant extensions migration gate

* chore: message toolbar user and assistant

* chore: add logo gemini

* feat: remote providers with model capabilities

* chore: maintain provider settings

* chore: move speed token into chat input

* chore: temp hardcoded model loader

* chore: make chat text selectable and truncate model list

* chore: update shortcut UI

* Feat/implement threads (#4977)

* chore: add fuse.js library for enhanced search functionality

* feat: implement thread filtering with Fuse.js for improved search capabilities

* fix: update the fuseOptions

* feat: add search functionality to LeftPanel and refactor thread retrieval logic

* refactor: optimize thread filtering and improve search functionality in LeftPanel

* fix: more edits

* refactor: remove duplicate import of useAppState in StreamingContent component

* chore: update navigate after delete all thread

* chore: pass prop speedToken from new chat input

* chore: persist provider general settings

* chore: styling search left panel

* chore: cleanup margin

* chore: update size icon

* chore: improve chat input

* chore: improve list markdown

* chore: animate border

* feat: local model provider work

* chore: persist manually added model

* chore: prepare download management ui and show version on general setting

* chore: improve pre tag

* chore: remove button install extension and improve light theme download

* chore: add missing hardware information handler

* chore: cleanup small ui

* chore: update default provider settings

* fix: missing fs commands

* chore: correct provider models

* chore: prepare delete model

* chore: handle thinking block

* chore: fix conditional message toolbar

* chore: popover download select none

* enhancement: add prune mode

* chore: model settings

* chore: bump engine version tauri

* chore: update style thinking

* chore: add indicator and toggle mcp server

* chore: wip hub

* chore: update model settings

* chore: mvp hub

* chore: add function rename title

* chore: update function delete message

* chore: update rename title

* chore: update model settings

* chore: persist MCP configs

* refactor: clean up utils

* chore: add tools to completion request

* chore: clean up

* chore: ignore assets

---------

Co-authored-by: Ivan Leo <ivanleomk@gmail.com>
Co-authored-by: Louis <louis@jan.ai>
2025-05-15 19:38:59 +07:00