33 Commits

Author SHA1 Message Date
Vanalite
7fd83e0bf8 feat: add migration for cohere provider 2025-10-03 21:55:28 +07:00
Faisal Amir
d3ea61b2e9 chore: update setting base url also 2025-10-03 21:49:42 +07:00
Faisal Amir
651ba446bf chore: migration base_url anthropic 2025-10-03 21:40:59 +07:00
Nghia Doan
c5a5968bf8
Merge pull request #6643 from menloresearch/fix/model-name-change
fix: Apply model name change correctly
2025-09-30 22:41:05 +07:00
Akarshan Biswas
11b3a60675
fix: refactor, fix and move gguf support utilities to backend (#6584)
* feat: move estimateKVCacheSize to BE

* feat: Migrate model planning to backend

This commit migrates the model-load planning logic from the frontend to the Tauri backend, refactoring the `planModelLoad` and `isModelSupported` methods into the `tauri-plugin-llamacpp` plugin so they can be called directly from the Rust core.

The model planning now incorporates a more robust and accurate memory estimation, considering both VRAM and system RAM, and introduces a `batch_size` parameter to the model plan.

**Key changes:**

- **Moved `planModelLoad` to `tauri-plugin-llamacpp`:** The core logic for determining GPU layers, context length, and memory offloading is now in Rust for better performance and accuracy.
- **Moved `isModelSupported` to `tauri-plugin-llamacpp`:** The model support check is also now handled by the backend.
- **Removed `getChatClient` from `AIEngine`:** This optional method was not implemented and has been removed from the abstract class.
- **Improved KV Cache estimation:** The `estimate_kv_cache_internal` function in Rust now accounts for `attention.key_length` and `attention.value_length` if available, and considers sliding window attention for more precise estimates.
- **Introduced `batch_size` in ModelPlan:** The model plan now includes a `batch_size` property, which will be automatically adjusted based on the determined `ModelMode` (e.g., lower for CPU/Hybrid modes).
- **Updated `llamacpp-extension`:** The frontend extension now calls the new Tauri commands for model planning and support checks.
- **Removed `batch_size` from `llamacpp-extension/settings.json`:** The batch size is now dynamically determined by the planning logic and will be set as a model setting directly.
- **Updated `ModelSetting` and `useModelProvider` hooks:** These now handle the new `batch_size` property in model settings.
- **Added new Tauri commands and permissions:** `get_model_size`, `is_model_supported`, and `plan_model_load` are new commands with corresponding permissions.
- **Consolidated `ModelSupportStatus` and `KVCacheEstimate`:** These types are now defined in `src/tauri/plugins/tauri-plugin-llamacpp/src/gguf/types.rs`.

This refactoring centralizes critical model resource management logic, improving consistency and maintainability, and lays the groundwork for more sophisticated model loading strategies.
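
As a rough illustration of how the frontend extension might call the new commands listed above (the `plugin:llamacpp|` prefix, argument names, and the simplified result types below are assumptions, not taken from the repository):

```ts
// Hypothetical sketch of invoking the new backend commands from the extension.
import { invoke } from '@tauri-apps/api/core'

// Simplified stand-ins for the consolidated types mentioned above.
type ModelPlan = {
  gpuLayers: number
  maxContextLength: number
  batchSize: number
  mode: 'GPU' | 'Hybrid' | 'CPU' | 'Unsupported'
}
type ModelSupportStatus = 'GREEN' | 'YELLOW' | 'RED'

async function planAndCheck(modelPath: string) {
  // Size of the GGUF file in bytes, reported by the Rust side.
  const sizeBytes = await invoke<number>('plugin:llamacpp|get_model_size', { modelPath })

  // Ask the backend for GPU layers, context length, and batch size.
  const plan = await invoke<ModelPlan>('plugin:llamacpp|plan_model_load', { modelPath })

  // Support check, also computed in Rust.
  const support = await invoke<ModelSupportStatus>('plugin:llamacpp|is_model_supported', { modelPath })

  return { sizeBytes, plan, support }
}
```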

* feat: refine model planner to handle more memory scenarios

This commit introduces several improvements to the `plan_model_load` function, enhancing its ability to determine a suitable model loading strategy based on system memory constraints. Specifically, it includes:

-   **VRAM calculation improvements:**  Corrects the calculation of total VRAM by iterating over GPUs and multiplying by 1024*1024, improving accuracy.
-   **Hybrid plan optimization:**  Implements a more robust hybrid plan strategy, iterating through GPU layer configurations to find the highest possible GPU usage while remaining within VRAM limits.
-   **Minimum context length enforcement:** Enforces a minimum context length for the model, ensuring that the model can be loaded and used effectively.
-   **Fallback to CPU mode:** If a hybrid plan isn't feasible, it now correctly falls back to a CPU-only mode.
-   **Improved logging:** Enhanced logging to provide more detailed information about the memory planning process, including VRAM, RAM, and GPU layers.
-   **Batch size adjustment:** Updated batch size based on the selected mode, ensuring efficient utilization of available resources.
-   **Error handling and edge cases:**  Improved error handling and edge case management to prevent unexpected failures.
-   **Constants:** Added constants for easier maintenance and understanding.
-   **Power-of-2 adjustment:** Added power of 2 adjustment for max context length to ensure correct sizing for the LLM.

These changes improve the reliability and robustness of the model planning process, allowing it to handle a wider range of hardware configurations and model sizes.
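
A rough TypeScript illustration (the actual planner is Rust) of two details noted above: summing per-GPU VRAM with the 1024*1024 conversion, and snapping the maximum context length down to a power of two. `MIN_CONTEXT` and the MiB input unit are assumptions for the sketch, not the repository's values.

```ts
const MIN_CONTEXT = 2048 // assumed minimum context length

function totalVramBytes(gpus: { totalMemoryMiB: number }[]): number {
  // Sum over all GPUs, converting MiB to bytes (assumed unit).
  return gpus.reduce((sum, gpu) => sum + gpu.totalMemoryMiB * 1024 * 1024, 0)
}

function adjustContextLength(maxContext: number): number {
  // Enforce the minimum, then round down to the nearest power of two.
  if (maxContext <= MIN_CONTEXT) return MIN_CONTEXT
  return 2 ** Math.floor(Math.log2(maxContext))
}
```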

* Add log for raw GPU info from tauri-plugin-hardware

* chore: update linux runner for tauri build

* feat: Improve GPU memory calculation for unified memory

This commit improves the logic for calculating usable VRAM, particularly for systems with **unified memory** like Apple Silicon. Previously, the application would report 0 total VRAM if no dedicated GPUs were found, leading to incorrect calculations and failed model loads.

This change modifies the VRAM calculation to fall back to the total system RAM if no discrete GPUs are detected. This is a common and correct approach for unified memory architectures, where the CPU and GPU share the same memory pool.

Additionally, this commit refactors the logic for calculating usable VRAM and RAM to prevent potential underflow by checking if the total memory is greater than the reserved bytes before subtracting. This ensures the calculation remains safe and correct.
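
A minimal sketch of the unified-memory fallback and underflow guard described above, written in TypeScript for illustration; the real implementation is in the Rust backend and the parameter names here are assumptions.

```ts
function usableVramBytes(
  totalVramBytes: number, // 0 when no discrete GPU is reported
  totalSystemRamBytes: number,
  reservedBytes: number
): number {
  // Unified-memory systems (e.g. Apple Silicon) report no dedicated GPUs,
  // so fall back to total system RAM as the shared CPU/GPU pool.
  const pool = totalVramBytes > 0 ? totalVramBytes : totalSystemRamBytes

  // Subtract the reservation only when the pool covers it, avoiding underflow.
  return pool > reservedBytes ? pool - reservedBytes : 0
}
```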

* chore: fix update migration version

* fix: enable unified memory support on model support indicator

* Use total_system_memory in bytes

---------

Co-authored-by: Minh141120 <minh.itptit@gmail.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-09-25 12:17:57 +05:30
Akarshan Biswas
7a174e621a
feat: Smart model management (#6390)
* feat: Smart model management

* **New UI option** – `memory_util` added to `settings.json` with a dropdown (high / medium / low) to let users control how aggressively the engine uses system memory.
* **Configuration updates** – `LlamacppConfig` now includes `memory_util`; the extension class stores it in a new `memoryMode` property and handles updates through `updateConfig`.
* **System memory handling**
  * Introduced `SystemMemory` interface and `getTotalSystemMemory()` to report combined VRAM + RAM.
  * Added helper methods `getKVCachePerToken`, `getLayerSize`, and a new `ModelPlan` type.
* **Smart model‑load planner** – `planModelLoad()` computes:
  * Number of GPU layers that can fit in usable VRAM.
  * Maximum context length based on KV‑cache size and the selected memory utilization mode (high/medium/low).
  * Whether KV‑cache must be off‑loaded to CPU and the overall loading mode (GPU, Hybrid, CPU, Unsupported).
  * Detailed logging of the planning decision.
* **Improved support check** (sketched after this list) – `isModelSupported()` now:
  * Uses the combined VRAM/RAM totals from `getTotalSystemMemory()`.
  * Applies an 80% usable‑memory heuristic.
  * Returns **GREEN** only when both weights and KV‑cache fit in VRAM, **YELLOW** when they fit only in total memory or require CPU off‑load, and **RED** when the model cannot fit at all.
* **Cleanup** – Removed unused `GgufMetadata` import; updated imports and type definitions accordingly.
* **Documentation/comments** – Added explanatory JSDoc comments for the new methods and clarified the return semantics of `isModelSupported`.
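
A minimal TypeScript sketch of the GREEN/YELLOW/RED decision described in the support-check item above, assuming plain byte counts; the 0.8 factor mirrors the 80% usable-memory heuristic, and the parameter names are illustrative rather than the extension's actual signature.

```ts
type ModelSupportStatus = 'GREEN' | 'YELLOW' | 'RED'

function isModelSupportedSketch(
  weightsBytes: number,
  kvCacheBytes: number,
  vramBytes: number,
  totalMemoryBytes: number // combined VRAM + RAM, as from getTotalSystemMemory()
): ModelSupportStatus {
  const usableVram = vramBytes * 0.8 // 80% usable-memory heuristic
  const usableTotal = totalMemoryBytes * 0.8
  const required = weightsBytes + kvCacheBytes

  if (required <= usableVram) return 'GREEN' // weights + KV cache fit in VRAM
  if (required <= usableTotal) return 'YELLOW' // fits only in total memory / needs CPU off-load
  return 'RED' // cannot fit at all
}
```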

* chore: migrate no_kv_offload from llamacpp setting to model setting

* chore: add UI auto optimize model setting

* feat: improve model loading planner with mmproj support and smarter memory budgeting

* Extend `ModelPlan` with optional `noOffloadMmproj` flag to indicate when a multimodal projector can stay in VRAM.
* Add `mmprojPath` parameter to `planModelLoad` and calculate its size, attempting to keep it on GPU when possible.
* Refactor system memory detection:
  * Use `used_memory` (actual free RAM) instead of total RAM for budgeting.
  * Introduced `usableRAM` placeholder for future use.
* Rewrite KV‑cache size calculation (sketched below):
  * Properly handle GQA models via `attention.head_count_kv`.
  * Compute bytes per token as `nHeadKV * headDim * 2 * 2 * nLayer`.
* Replace the old 70 % VRAM heuristic with a more flexible budget:
  * Reserve a fixed VRAM amount and apply an overhead factor.
  * Derive usable system RAM from total memory minus VRAM.
* Implement a robust allocation algorithm:
  * Prioritize placing the mmproj in VRAM.
  * Search for the best balance of GPU layers and context length.
  * Fallback strategies for hybrid and pure‑CPU modes with detailed safety checks.
* Add extensive validation of model size, KV‑cache size, layer size, and memory mode.
* Improve logging throughout the planning process for easier debugging.
* Adjust final plan return shape to include the new `noOffloadMmproj` field.
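
A short sketch of the per-token KV-cache size from the formula above; the metadata field names are paraphrased from the GGUF keys mentioned in the description, and 2 bytes per element assumes f16 K/V entries.

```ts
function kvCacheBytesPerToken(meta: {
  nLayer: number // block count
  nHeadKV: number // attention.head_count_kv (handles GQA models)
  headDim: number // dimension per attention head
}): number {
  // K and V tensors (x2), 2 bytes per f16 element, for every layer.
  return meta.nHeadKV * meta.headDim * 2 * 2 * meta.nLayer
}

// The KV-cache budget for a context window is then roughly:
//   kvCacheBytesPerToken(meta) * contextLength
```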

* remove unused variable

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-09-11 09:48:03 +05:30
Dinh Long Nguyen
a30eb7f968
feat: Jan Web (reusing Jan Desktop UI) (#6298)
* add platform guards

* add service management

* fix types

* move to zustand for servicehub

* update App Updater

* update tauri missing move

* update app updater

* refactor: move PlatformFeatures to separate const file

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* change tauri fetch name

* update implementation

* update extension fetch

* make web version run properly

* disabled unused web settings

* fix all tests

* fix lint

* fix tests

* add mock for extension

* fix build

* update make and mise

* fix tsconfig for web-extensions

* fix loader type

* cleanup

* fix test

* update error handling + mcp should be working

* Update mcp init

* use separate is_web_app build property

* Remove fixed model catalog url

* fix additional tests

* fix download issue (event emitter not implemented correctly)

* Update Title html

* fix app logs

* update root tsx render timing

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-05 01:47:46 +07:00
Dinh Long Nguyen
02f7b88dab
Bring QA (0.6.9) changes to dev (#6296)
* fix: check for env value before setting (#6266)

* fix: check for env value before setting

* Use empty instead of none

* fix: update linux build script to be consistent with CI (#6269)

The local build script for Linux was failing due to a bundling error. This commit updates the `build:tauri:linux` script in `package.json` to be consistent with the CI build pipeline, which resolves the issue.

The updated script now includes:
- **`NO_STRIP=1`**: This environment variable prevents the `linuxdeploy` utility from stripping debugging symbols, which was a potential cause of the bundling failure.
- **`--verbose`**: This flag provides more detailed output during the build, which can be useful for debugging similar issues in the future.

* fix: compatibility imported model

* fix: update copy mmproj setting desc

* fix: toggle vision for remote model

* chore: add tooltip visions

* chore: show model setting only for local provider

* fix/update-ui-info

* chore: update filter hub while searching

* fix: system monitor window permission

* chore: update credit description

---------

Co-authored-by: Akarshan Biswas <akarshan.biswas@gmail.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Minh141120 <minh.itptit@gmail.com>
Co-authored-by: Nguyen Ngoc Minh <91668012+Minh141120@users.noreply.github.com>
2025-08-26 15:35:56 +07:00
Faisal Amir
985a8f31ae
fix: migrations model setting (#6165) 2025-08-13 18:21:48 +07:00
Akarshan Biswas
1f1605bdf9
feat: Add support for overriding tensor buffer type (#6062)
* feat: Add support for overriding tensor buffer type

This commit introduces a new configuration option, `override_tensor_buffer_t`, which allows users to specify a regex for matching tensor names to override their buffer type. This is an advanced setting primarily useful for optimizing the performance of large models, particularly Mixture of Experts (MoE) models.

By overriding the tensor buffer type, users can keep critical parts of the model, like the attention layers, on the GPU while offloading other parts, such as the expert feed-forward networks, to the CPU. This can lead to significant speed improvements for massive models.

Additionally, this change refines the error message to be more specific when a model fails to load. The previous message "Failed to load llama-server" has been updated to "Failed to load model" to be more accurate.
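
As a rough illustration of the kind of value the `override_tensor_buffer_t` option described above might take for a MoE model: a pattern matching the expert feed-forward tensors so they stay in CPU buffers while attention remains on the GPU. The tensor-name pattern and the `=CPU` buffer-type suffix follow common llama.cpp `--override-tensor` usage and are assumptions, not confirmed by this commit.

```ts
// Hypothetical example value for the new setting.
const llamacppSettings = {
  // Keep MoE expert FFN weights in CPU memory; attention layers stay on the GPU.
  override_tensor_buffer_t: '\\.ffn_.*_exps\\.=CPU',
}
```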

* chore: update FE to support override-tensor

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-08-07 10:31:34 +05:30
Faisal Amir
5d001dfd5a
feat: jinja template customize per model instead of provider level (#6053) 2025-08-05 21:21:41 +07:00
Faisal Amir
787c4ee073
fix: wrong desc setting cont_batching (#6034) 2025-08-02 21:48:43 +07:00
Louis
1fc37a9349
fix: migrate app settings to the new version (#5936)
* fix: migrate app settings to the new version

* fix: edge cases

* fix: migrate HF import model on Windows

* fix hardware page broken after downgraded

* test: correct test

* fix: backward compatible hardware info
2025-07-27 21:13:05 +07:00
Faisal Amir
7dec980630
fix: persist model capabilities refresh app (#5918) 2025-07-25 20:27:51 +07:00
Akarshan Biswas
a1af70f7a9
feat: Enhance Llama.cpp backend management with persistence (#5886)
* feat: Enhance Llama.cpp backend management with persistence

This commit introduces significant improvements to how the Llama.cpp extension manages and updates its backend installations, focusing on user preference persistence and smarter auto-updates.

Key changes include:

* **Persistent Backend Type Preference:** The extension now stores the user's preferred backend type (e.g., `cuda`, `cpu`, `metal`) in `localStorage`. This ensures that even after updates or restarts, the system attempts to use the user's previously selected backend type, if available.
* **Intelligent Auto-Update:** The auto-update mechanism has been refined to prioritize updating to the **latest version of the *currently selected backend type*** rather than always defaulting to the "best available" backend (which might change). This respects user choice while keeping the chosen backend type up-to-date.
* **Improved Initial Installation/Configuration:** For fresh installations or cases where the `version_backend` setting is invalid, the system now intelligently determines and installs the best available backend, then persists its type.
* **Refined Old Backend Cleanup:** The `removeOldBackends` function has been renamed to `removeOldBackend` and modified to specifically clean up *older versions of the currently selected backend type*, preventing the accumulation of unnecessary files while preserving other backend types the user might switch to.
* **Robust Local Storage Handling:** New private methods (`getStoredBackendType`, `setStoredBackendType`, `clearStoredBackendType`) are introduced to safely interact with `localStorage`, including error handling for potential `localStorage` access issues.
* **Version Filtering Utility:** A new utility `findLatestVersionForBackend` helps in identifying the latest available version for a specific backend type from a list of supported backends.

These changes provide a more stable, user-friendly, and maintainable backend management experience for the Llama.cpp extension.
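
A minimal sketch of the `localStorage` helpers named above; the storage key is a hypothetical placeholder, and the error handling simply reflects the described intent of tolerating environments where `localStorage` is unavailable.

```ts
const BACKEND_TYPE_KEY = 'llamacpp-backend-type' // assumed key name

function getStoredBackendType(): string | null {
  try {
    return localStorage.getItem(BACKEND_TYPE_KEY)
  } catch {
    return null // storage may be blocked or unavailable
  }
}

function setStoredBackendType(backendType: string): void {
  try {
    localStorage.setItem(BACKEND_TYPE_KEY, backendType)
  } catch {
    // Best-effort persistence; ignore storage failures.
  }
}

function clearStoredBackendType(): void {
  try {
    localStorage.removeItem(BACKEND_TYPE_KEY)
  } catch {
    // Ignore.
  }
}
```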

Fixes: #5883

* fix: cortex models migration should be done once

* feat: Optimize Llama.cpp backend preference storage and UI updates

This commit refines the Llama.cpp extension's backend management by:

* **Optimizing `localStorage` Writes:** The system now only writes the backend type preference to `localStorage` if the new value is different from the currently stored one. This reduces unnecessary `localStorage` operations.
* **Ensuring UI Consistency on Initial Setup:** When a fresh installation or an invalid backend configuration is detected, the UI settings are now explicitly updated to reflect the newly determined `effectiveBackendString`, ensuring the displayed setting matches the active configuration.

These changes improve performance by reducing redundant storage operations and enhance user experience by maintaining UI synchronization with the backend state.

* Revert "fix: provider settings should be refreshed on page load (#5887)"

This reverts commit ce6af62c7df4a7e7ea8c0896f307309d6bf38771.

* fix: add loader version backend llamacpp

* fix: wrong key name

* fix: model setting issues

* fix: virtual dom hub

* chore: cleanup

* chore: hide device offload setting

---------

Co-authored-by: Louis <louis@jan.ai>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-07-24 18:33:35 +07:00
Louis
d6ad797769
fix: llama.cpp backend shows blank list sometime (#5876) 2025-07-23 20:04:38 +07:00
Louis
3e30c61fb0
fix: app should refresh local provider models list on launch (#5868) 2025-07-23 08:36:09 +07:00
Louis
05b9d4e9fd
feat: add claude-4 (#5829)
* feat: add claude-4

* fix: sorting order
2025-07-21 12:30:56 +07:00
Akarshan
59ad2eb784
Merge branch 'dev' into release/v0.6.6 2025-07-18 18:29:20 +05:30
Louis
3eaa3424e1
fix: fetch models from custom provider causes app to crash 2025-07-16 15:36:45 +07:00
Louis
8bd4a3389f
refactor: frontend uses new engine extension
# Conflicts:
#	extensions/model-extension/resources/default.json
#	web-app/src/containers/dialogs/DeleteProvider.tsx
#	web-app/src/routes/hub.tsx
2025-07-02 12:28:24 +07:00
Faisal Amir
c463090edb
🐛fix: delete pre-populated remote models (#5516) 2025-06-25 13:49:55 +07:00
Louis
6faca3e732
refactor: remove JS server package (#5192)
* refactor: remove js server package

* chore: migrate HF token data
2025-06-04 15:33:35 +07:00
Louis
0fbc4a4664
chore: add function to model settings (#5108) 2025-05-26 18:53:08 +07:00
Louis
b8de48c9e9
fix: enhance tool use and model provider not persisted issues (#5094)
* chore: enhance tool use loop

* fix: create new custom provider is not saved

* chore: bump llama.cpp b5488

* chore: normalize reasoning assistant response

* chore: fix tool call parse in stream mode

* fix: give tool call default generated id

* fix: system instruction should be on top of the history

* chore: allow users to add parameters
2025-05-26 15:12:55 +07:00
Louis
cdd13594a3
fix: model import name issues (#5093) 2025-05-24 14:48:38 +07:00
Louis
125104320e
chore: handle many issues with app settings and message actions (#5086)
* chore: handle many issues with app settings and message actions

* Update web-app/src/services/mcp.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-05-23 21:23:52 +07:00
Louis
634efb9d9d
chore: providers should default on (#5083)
* chore: providers should default on

* chore: add emoji image source

* chore: correct connect-src
2025-05-23 16:26:17 +07:00
Louis
994de67f9e
fix: provider activation status (#5081)
* fix: provider activation status

* fix: thread assistant model
2025-05-23 12:04:49 +07:00
Louis
d5393e4563
feat: add custom OpenAI provider (#5033)
* feat: add custom OpenAI provider

* chore: add HF token setting

* chore: move HF token setting to llama.cpp provider - later deprecate model extension
2025-05-20 14:30:51 +07:00
Faisal Amir
ba4d1d3c12 chore: fix typo 2025-05-20 10:49:57 +07:00
Louis
2345ff172d
chore: update model handlers on the new frontend (#5011)
* chore: provide model handlers to new frontend

* chore: add API server function to the new front end
2025-05-19 10:39:43 +07:00
Faisal Amir
852ea84cd8
epic: Jan with new UI/UX (#4964)
* chore: initial new FE setup

* chore: update namespace text-left-panel foreground variable

* chore: enable dynamic mainview color

* chore: remove greetings new chat

* chore: fix chat input style

* chore: simplify hook useAppearance

* chore: enable internationalization

* chore: prepare vn locale

* chore: keyboardshortcut layout

* chore: update keyboard shortcut exclude pathname

* chore: update state active setting route

* chore: fix update theme by system

* chore: handle dynamic primary color

* chore: fix left panel navigation active state and styled item privacy analytic

* chore: reorder general setting being a first

* chore: add function reset appearance

* chore: update scrollbar

* chore: update delete thread with dialog confirmation

* chore: update state dialog inside dropdown menu

* chore: wip thread detail or chat page

* chore: wip model dropdown

* chore: prepare model dropdown select

* chore: update model providers setting

* chore: show provider on model dropdown based on isActive toggle

* chore: update layout model provider

* chore: update state active on storage

* chore: update gap of item dropdown model

* chore: update select model base on id

* chore: update edit model capabilities

* chore: add dialog to add model

* chore: update sheet for model setting

* chore: add sheet setting each model

* chore: make dynamic syntax highlight

* chore: fix menu setting appearance theme

* chore: markdown render support emoji

* chore: markdown support latex

* chore: change codeblock default theme

* chore: update ui codeblock

* chore: custom render link target new window

* chore: fix copy button codeblock

* chore: update accent and destructive color

* chore: setup user chat message

* chore: prepare some page settings

* chore: simple list extension and prepare mcp, local api, and hardware

* chore: mcp-server

* chore: MCP server UI

* chore: update local api server config

* chore: adjust chat input

* chore: update local api server log

* chore: prepare hub page

* chore: remove help page

* chore: update mock

* chore: prepare http proxy setting UI

* chore: adjust local api server and title every action

* fix: chore FE package (#4962)

* fix: update command which referred to non-existent web app

* fix: added commented out macos platform for now

* fix: remove the platform name as macos

* fix: remove unnecessary line for platform name in HeaderPage component

* fix: update dev script to specify port 3000 for Vite

* feat: model providers and chat completion

* enhancement: threads performance

* fix: thread content update

* chore: clean up threads

* fix: performance issue with streaming and state loop

* fix: streaming

* fix: react markdown

* feat: extension manager

* chore: add nodePolyfills include path

* chore: improve performance avoid unhandle rejection

* chore: update pre margin bottom

* chore: switch thread should default to scroll to bottom

* chore: wip scroll to bottom

* chore: add model loader

* chore: add platform utils

* feat: threads functionality

* chore: setup toaster

* chore: persist threads deletion

* fix: create thread with new message

* chore: create new thread should change route path

* chore: navigate after delete thread dialog

* chore: thread favorites and orders

* chore: dismiss deleting modal on delete

* chore: remove undefined properties

* chore: remove deprecated run step

* chore: fix delete thread

* chore: create empty thread content on started streaming

* chore: correct messages store key

* chore: stuck at generating state

* chore: prepare chat toolbar

* chore: introduce in-memory app state

* chore: update extensions migration logic

* chore: remove redundant extensions migration gate

* chore: message toolbar user and assistant

* chore: add logo gemini

* feat: remote providers with model capabilities

* chore: maintain provider settings

* chore: move speed token into chat input

* chore: temp hardcoded model loader

* chore: make chat text selectable and truncate model list

* chore: update shortcut UI

* Feat/implement threads (#4977)

* chore: add fuse.js library for enhanced search functionality

* feat: implement thread filtering with Fuse.js for improved search capabilities

* fix: update the fuseOptions

* feat: add search functionality to LeftPanel and refactor thread retrieval logic

* refactor: optimize thread filtering and improve search functionality in LeftPanel

* fix: more edits

* refactor: remove duplicate import of useAppState in StreamingContent component

* chore: update navigate after delete all thread

* chore: pass prop speedToken from new chat input

* chore: persist provider general settings

* chore: styling search left panel

* chore: cleanup margin

* chore: update size icon

* chore: improve chat input

* chore: improve list markdown

* chore: animate border

* feat: local model provider work

* chore: persist manually added model

* chore: prepare download management ui and show version on general setting

* chore: improve pre tag

* chore: remove install extension button and improve light theme download

* chore: add missing hardware information handler

* chore: cleanup small ui

* chore: update default provider settings

* fix: missing fs commands

* chore: correct provider models

* chore: prepare delete model

* chore: handle thinking block

* chore: fix conditional message toolbar

* chore: popover download select none

* enhancement: add prune mode

* chore: model settings

* chore: bump engine version tauri

* chore: update style thinking

* chore: add indicator and toogle mcp server

* chore: wip hub

* chore: update model settings

* chore: mvp hub

* chore: add function rename title

* chore: update function delete message

* chore: update rename title

* chore: update model settings

* chore: persist MCP configs

* refactor: clean up utils

* chore: add tools to completion request

* chore: clean up

* chore: ignore assets

---------

Co-authored-by: Ivan Leo <ivanleomk@gmail.com>
Co-authored-by: Louis <louis@jan.ai>
2025-05-15 19:38:59 +07:00