308 Commits

Author SHA1 Message Date
Faisal Amir
b7dae19756
feat: custom downloaded model name (#6588)
* feat: add field edit model name

* fix: update model

* chore: update UI form with save button, and handle edit capabilities; renaming the folder will need the save button

* fix: relocate model

* chore: update and refresh the model provider list; also update the test case

* chore: state loader

* fix: model path

* fix: model config update

* chore: fix removal of provider dependencies in the edit model dialog

* chore: avoid shifted model name or id

---------

Co-authored-by: Louis <louis@jan.ai>
2025-09-26 15:25:44 +07:00
Akarshan Biswas
11b3a60675
fix: refactor, fix and move gguf support utilities to backend (#6584)
* feat: move estimateKVCacheSize to BE

* feat: Migrate model planning to backend

This commit migrates the model load planning logic from the frontend to the Tauri backend. This refactors the `planModelLoad` and `isModelSupported` methods into the `tauri-plugin-llamacpp` plugin, making them directly callable from the Rust core.

The model planning now incorporates a more robust and accurate memory estimation, considering both VRAM and system RAM, and introduces a `batch_size` parameter to the model plan.

**Key changes:**

- **Moved `planModelLoad` to `tauri-plugin-llamacpp`:** The core logic for determining GPU layers, context length, and memory offloading is now in Rust for better performance and accuracy.
- **Moved `isModelSupported` to `tauri-plugin-llamacpp`:** The model support check is also now handled by the backend.
- **Removed `getChatClient` from `AIEngine`:** This optional method was not implemented and has been removed from the abstract class.
- **Improved KV Cache estimation:** The `estimate_kv_cache_internal` function in Rust now accounts for `attention.key_length` and `attention.value_length` if available, and considers sliding window attention for more precise estimates.
- **Introduced `batch_size` in ModelPlan:** The model plan now includes a `batch_size` property, which will be automatically adjusted based on the determined `ModelMode` (e.g., lower for CPU/Hybrid modes).
- **Updated `llamacpp-extension`:** The frontend extension now calls the new Tauri commands for model planning and support checks.
- **Removed `batch_size` from `llamacpp-extension/settings.json`:** The batch size is now dynamically determined by the planning logic and will be set as a model setting directly.
- **Updated `ModelSetting` and `useModelProvider` hooks:** These now handle the new `batch_size` property in model settings.
- **Added new Tauri commands and permissions:** `get_model_size`, `is_model_supported`, and `plan_model_load` are new commands with corresponding permissions.
- **Consolidated `ModelSupportStatus` and `KVCacheEstimate`:** These types are now defined in `src/tauri/plugins/tauri-plugin-llamacpp/src/gguf/types.rs`.

This refactoring centralizes critical model resource management logic, improving consistency and maintainability, and lays the groundwork for more sophisticated model loading strategies.
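
A rough TypeScript sketch of the plan shape and the mode-driven batch size described above; the field names and concrete sizes are illustrative, not the plugin's exact Rust API:

```typescript
// Illustrative shape of the plan returned by the new plan_model_load command;
// names and batch-size values here are assumptions for the sketch.
type ModelMode = 'GPU' | 'Hybrid' | 'CPU'

interface ModelPlan {
  gpuLayers: number
  maxContextLength: number
  mode: ModelMode
  batchSize: number
}

// Batch size now follows the chosen mode instead of a static setting.
function batchSizeFor(mode: ModelMode): number {
  return mode === 'GPU' ? 2048 : 256 // lower for CPU/Hybrid; values illustrative
}
```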

* feat: refine model planner to handle more memory scenarios

This commit introduces several improvements to the `plan_model_load` function, enhancing its ability to determine a suitable model loading strategy based on system memory constraints. Specifically, it includes:

-   **VRAM calculation improvements:**  Corrects the calculation of total VRAM by iterating over GPUs and multiplying by 1024*1024 (MiB to bytes), improving accuracy.
-   **Hybrid plan optimization:**  Implements a more robust hybrid plan strategy, iterating through GPU layer configurations to find the highest possible GPU usage while remaining within VRAM limits.
-   **Minimum context length enforcement:** Enforces a minimum context length for the model, ensuring that the model can be loaded and used effectively.
-   **Fallback to CPU mode:** If a hybrid plan isn't feasible, it now correctly falls back to a CPU-only mode.
-   **Improved logging:** Enhanced logging to provide more detailed information about the memory planning process, including VRAM, RAM, and GPU layers.
-   **Batch size adjustment:** Updated batch size based on the selected mode, ensuring efficient utilization of available resources.
-   **Error handling and edge cases:**  Improved error handling and edge case management to prevent unexpected failures.
-   **Constants:** Added constants for easier maintenance and understanding.
-   **Power-of-2 adjustment:** Added power of 2 adjustment for max context length to ensure correct sizing for the LLM.

These changes improve the reliability and robustness of the model planning process, allowing it to handle a wider range of hardware configurations and model sizes.
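
A hedged sketch of the hybrid search and power-of-2 context adjustment the bullets describe; the constant and the per-layer cost model are assumptions:

```typescript
const MIN_CONTEXT = 2048 // assumed minimum context length

// Walk GPU layer counts downward until projected VRAM use fits the budget;
// zero layers means falling back to CPU-only mode.
function findMaxGpuLayers(
  totalLayers: number,
  vramPerLayerBytes: number,
  vramBudgetBytes: number
): number {
  for (let layers = totalLayers; layers > 0; layers--) {
    if (layers * vramPerLayerBytes <= vramBudgetBytes) return layers
  }
  return 0
}

// Round the affordable context length down to a power of two, never below the minimum.
function adjustContext(maxAffordable: number): number {
  return 2 ** Math.floor(Math.log2(Math.max(maxAffordable, MIN_CONTEXT)))
}
```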

* Add log for raw GPU info from tauri-plugin-hardware

* chore: update linux runner for tauri build

* feat: Improve GPU memory calculation for unified memory

This commit improves the logic for calculating usable VRAM, particularly for systems with **unified memory** like Apple Silicon. Previously, the application would report 0 total VRAM if no dedicated GPUs were found, leading to incorrect calculations and failed model loads.

This change modifies the VRAM calculation to fall back to the total system RAM if no discrete GPUs are detected. This is a common and correct approach for unified memory architectures, where the CPU and GPU share the same memory pool.

Additionally, this commit refactors the logic for calculating usable VRAM and RAM to prevent potential underflow by checking if the total memory is greater than the reserved bytes before subtracting. This ensures the calculation remains safe and correct.
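
Both guards are simple; a TypeScript rendering of the logic described above (the actual code is Rust, and the names are illustrative):

```typescript
// Subtract the reservation only when the total exceeds it, avoiding underflow.
function usableBytes(totalBytes: number, reservedBytes: number): number {
  return totalBytes > reservedBytes ? totalBytes - reservedBytes : 0
}

// Unified-memory systems (e.g. Apple Silicon) report no discrete GPUs, so
// fall back to total system RAM rather than reporting 0 VRAM.
function totalVramBytes(gpuMemBytes: number[], systemRamBytes: number): number {
  const dedicated = gpuMemBytes.reduce((sum, bytes) => sum + bytes, 0)
  return dedicated > 0 ? dedicated : systemRamBytes
}
```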

* chore: fix update migration version

* fix: enable unified memory support on model support indicator

* Use total_system_memory in bytes

---------

Co-authored-by: Minh141120 <minh.itptit@gmail.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-09-25 12:17:57 +05:30
Minh141120
9568ff12e8 feat: add cleanup logic for windows installer 2025-09-24 15:47:46 +07:00
Nguyen Ngoc Minh
c46e13b8b1
Merge pull request #6545 from menloresearch/chore/standardize-build-windows
chore: use default nsis template
2025-09-23 22:23:17 +07:00
Minh141120
f1d97ac834 chore: install vc_redist.x64 from script 2025-09-23 20:42:34 +07:00
Roushan Kumar Singh
3f51c35229
feat: support .zip archives for manual backend install (#6534)
* feat(llamacpp): support .zip archives for manual backend install

* Update Lock Files
2025-09-23 18:02:06 +05:30
Minh141120
6dc38f18cf chore: standardize build process on windows 2025-09-23 19:00:39 +07:00
Minh141120
50b66eff74 chore: update hooks to install vcredist.exe and update path for dll and license file 2025-09-23 19:00:39 +07:00
Minh141120
7257eb4ae6 chore: add LICENSE and vulkan-1.dll at instdir 2025-09-23 19:00:39 +07:00
Minh141120
8951123557 chore: add libvulkan for windows 2025-09-23 19:00:39 +07:00
Minh141120
0151274659 chore: update installerIcon on nsis config 2025-09-23 19:00:39 +07:00
Minh141120
53707a5083 chore: use default nsis template 2025-09-23 19:00:39 +07:00
Louis
7f09c36a92
fix: LocalAPI server trusted host should accept asterisk (#6551) 2025-09-23 17:45:37 +07:00
Nghia Doan
6f827872fb
fix: Catch local API server various errors (#6548)
* fix: Catch local API server various errors

* chore: Add tests to cover error catches
2025-09-23 17:40:16 +07:00
Louis
568ee857d5
fix: custom fetch for all providers (#6538)
* fix: custom fetch for all providers

* fix: run in development should use built-in fetch
2025-09-23 09:55:36 +07:00
Akarshan Biswas
885da29f28
feat: add getTokensCount method to compute token usage (#6467)
* feat: add getTokensCount method to compute token usage

Implemented a new async `getTokensCount` function in the LLaMA.cpp extension.
The method validates the model session, checks process health, applies the request template, and tokenizes the resulting prompt to return the token count. Includes detailed error handling for crashed models and API failures, enabling callers to assess token usage before sending completions.
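
The flow reads roughly like the following TypeScript sketch; everything except the `getTokensCount` name is an assumed helper, not the extension's exact code:

```typescript
type Message = { role: string; content: string }
interface Session { pid: number; template: string }

// Assumed helpers standing in for the extension's internals.
declare function findSession(model: string): Session | undefined
declare function isProcessAlive(pid: number): Promise<boolean>
declare function applyChatTemplate(template: string, messages: Message[]): string
declare function tokenize(session: Session, prompt: string): Promise<number[]>

async function getTokensCount(model: string, messages: Message[]): Promise<number> {
  const session = findSession(model)
  if (!session) throw new Error(`No session for model ${model}`)
  if (!(await isProcessAlive(session.pid))) {
    throw new Error('Model process crashed') // surface crashes instead of hanging
  }
  const prompt = applyChatTemplate(session.template, messages)
  return (await tokenize(session, prompt)).length
}
```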

* Fix: typos

* chore: update ui token usage

* chore: remove unused code

* feat: add image token handling for multimodal LlamaCPP models

Implemented support for counting image tokens when using vision-enabled models:
- Extended `SessionInfo` with optional `mmprojPath` to store the multimodal project file.
- Propagated `mmproj_path` from the Tauri plugin into the session info.
- Added import of `chatCompletionRequestMessage` and enhanced token calculation logic in the LlamaCPP extension:
  - Detects image content in messages.
  - Reads GGUF metadata from `mmprojPath` to compute accurate image token counts.
  - Provides a fallback estimation if metadata reading fails.
  - Returns the sum of text and image tokens.
- Introduced helper methods `calculateImageTokens` and `estimateImageTokensFallback`.
- Minor clean‑ups such as comment capitalization and debug logging.

* chore: update FE send params message include content type image_url

* fix mmproj path from session info and num tokens calculation

* fix: Correct image token estimation calculation in llamacpp extension

This commit addresses an inaccurate token count for images in the llama.cpp extension.

The previous logic incorrectly calculated the token count based on image patch size and dimensions. This has been replaced with a more precise method that uses the clip.vision.projection_dim value from the model metadata.

Additionally, unnecessary debug logging was removed, and a new log was added to show the mmproj metadata for improved visibility.
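
In sketch form, with assumed helper signatures (only `calculateImageTokens` and `estimateImageTokensFallback` are named in the commit):

```typescript
declare function readGgufMetadata(path: string): Promise<Record<string, number>>

// Per-image count comes from the mmproj metadata's clip.vision.projection_dim.
async function calculateImageTokens(mmprojPath: string, imageCount: number): Promise<number> {
  try {
    const perImage = (await readGgufMetadata(mmprojPath))['clip.vision.projection_dim']
    if (!perImage) throw new Error('clip.vision.projection_dim missing')
    return perImage * imageCount
  } catch {
    return estimateImageTokensFallback(imageCount)
  }
}

function estimateImageTokensFallback(imageCount: number): number {
  return 576 * imageCount // assumed CLIP-style default when metadata is unreadable
}
```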

* fix per image calc

* fix: crash due to force unwrap

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Louis <louis@jan.ai>
2025-09-23 07:52:19 +05:30
Faisal Amir
29862435ab
Merge pull request #6526 from github-roushan/fix-number-input
fix(number-input): preserve '0.0x' format when typing (#6520)
2025-09-19 23:25:22 +07:00
Akarshan Biswas
991bbec53a
fix: Typo in openapi JSON (#6528) 2025-09-19 12:53:39 +05:30
Roushan Singh
ae2532d40d fix(number-input): preserve '0.0x' format when typing (#6520) 2025-09-19 11:36:06 +05:30
Akarshan Biswas
d1a8bdc4e3
feat: Add Jan API server Swagger UI (#6502)
* feat: Add Jan API server Swagger UI

- Serve OpenAPI spec (`static/openapi.json`) directly from the proxy server.
- Implement Swagger UI assets (`swagger-ui.css`, `swagger-ui-bundle.js`, `favicon.ico`) and a simple HTML wrapper under `/docs`.
- Extend the proxy whitelist to include Swagger UI routes.
- Add routing logic for `/openapi.json`, `/docs`, and Swagger UI static files.
- Update whitelisted paths and integrate CORS handling for the new endpoints.

* feat: serve Swagger UI at root path

The Swagger UI endpoint previously lived under `/docs`. The route handling and
exclusion list have been updated so the UI is now served directly at `/`.
This simplifies access, aligns with the expected root URL in the Tauri
frontend, and removes the now‑unused `/docs` path handling.
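
The resulting dispatch is small; a TypeScript sketch assuming a fetch-style handler (the real proxy is implemented in the backend):

```typescript
declare function staticAsset(name: string): Promise<Response>

async function route(path: string): Promise<Response> {
  switch (path) {
    case '/':
      return staticAsset('index.html') // HTML wrapper that boots Swagger UI
    case '/openapi.json':
      return staticAsset('openapi.json')
    case '/swagger-ui.css':
    case '/swagger-ui-bundle.js':
    case '/favicon.ico':
      return staticAsset(path.slice(1))
    default:
      return new Response('Not Found', { status: 404 })
  }
}
```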

* feat: add model loading state and translations for local API server

Implemented a loading indicator for model startup, updated the start/stop button to reflect model loading and server starting states, and disabled interactions while pending. Added new translation keys (`loadingModel`, `startingServer`) across all supported locales (en, de, id, pl, vn, zh-CN, zh-TW) and integrated them into the UI. Included a small delay after model start to ensure backend state consistency. This improves user feedback and prevents race conditions during server initialization.
2025-09-19 09:11:55 +05:30
Dinh Long Nguyen
645548e931
Merge pull request #6516 from menloresearch/release/v0.6.10 2025-09-18 19:15:54 +07:00
Louis
86a92ead85
Merge pull request #6469 from menloresearch/fix/deeplink-not-work-on-windows
fix: deeplink issue on Windows
2025-09-18 17:47:00 +07:00
Louis
0ab6417caa
fix: left click should not show menu 2025-09-17 19:58:59 +07:00
Louis
4a4423ed6b
fix: failed tests 2025-09-17 19:55:57 +07:00
Louis
a85ffa270e
fix: missing tray feature for testing 2025-09-17 19:48:30 +07:00
Louis
15164fc0be
feat: system tray icon build flag 2025-09-17 15:54:20 +07:00
Minh141120
022048f577 chore: embed webview2 bootstrapper in tauri windows 2025-09-16 18:43:32 +07:00
theishangoswami
c02d8200ac added exa mcp 2025-09-16 04:38:03 +05:30
Maksym Krasovakyi
71e2e24112 Add model response timeout for local api server as configurable value via UI 2025-09-15 14:25:09 +03:00
Akarshan Biswas
e80a865def
fix: detect allocation failures as out-of-memory errors (#6459)
The Llama.cpp backend can emit the phrase “failed to allocate” when it runs out of memory.
Adding this check ensures such messages are correctly classified as out‑of‑memory errors,
providing more accurate error handling on CPU backends.
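
The check itself is a substring match; a minimal sketch, assuming the existing matcher also looks for other known OOM phrases:

```typescript
function isOutOfMemoryError(backendLog: string): boolean {
  const lower = backendLog.toLowerCase()
  // 'failed to allocate' is the newly added phrase; 'out of memory' is an
  // assumed pre-existing pattern.
  return ['out of memory', 'failed to allocate'].some((p) => lower.includes(p))
}
```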
2025-09-15 12:35:24 +05:30
Dinh Long Nguyen
b5b6e1dc19
add mcp for web (#6411)
* add mcp for web

* update /jan/v1 endpoint to /v1

* update mise and makefile

* update yarn lock

* use mcp oauth properly
2025-09-12 12:14:10 +07:00
dinhlongviolin1
e2e572ccab
refactor: move get_short_path to utils and use it in decompress 2025-09-11 09:52:10 +05:30
Akarshan
7ac927ff02
feat: enhance llamacpp backend management and installation
- Add `src-tauri/resources/` to `.gitignore`.
- Introduced utilities to read locally installed backends (`getLocalInstalledBackends`) and fetch remote supported backends (`fetchRemoteSupportedBackends`).
- Refactored `listSupportedBackends` to merge remote and local entries with deduplication and proper sorting.
- Exported `getBackendDir` and integrated it into the extension.
- Added helper `parseBackendVersion` and new method `checkBackendForUpdates` to detect newer backend versions.
- Implemented `installBackend` for manual backend archive installation, including platform‑specific binary path handling.
- Updated command‑line argument logic for `--flash-attn` to respect version‑specific defaults.
- Modified Tauri filesystem `decompress` command to remove overly strict path validation.
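
A sketch of the update check built from the helpers named above; the build-tag format (llama.cpp-style `b<number>`) is an assumption:

```typescript
function parseBackendVersion(tag: string): number | null {
  const match = tag.match(/b(\d+)/) // assumed llama.cpp-style build tag
  return match ? Number(match[1]) : null
}

// A newer remote build than anything installed locally means an update exists.
function checkBackendForUpdates(localTags: string[], remoteTags: string[]): boolean {
  const newest = (tags: string[]) =>
    Math.max(-1, ...tags.map((t) => parseBackendVersion(t) ?? -1))
  return newest(remoteTags) > newest(localTags)
}
```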
2025-09-11 09:52:09 +05:30
Dinh Long Nguyen
5cd81bc6e8
feat: improve testing (#6395)
* add more test rust test

* fix servicehub test

* fix tauri failing on windows
2025-09-09 12:16:25 +07:00
Dinh Long Nguyen
a30eb7f968
feat: Jan Web (reusing Jan Desktop UI) (#6298)
* add platform guards

* add service management

* fix types

* move to zustand for servicehub

* update App Updater

* update tauri missing move

* update app updater

* refactor: move PlatformFeatures to separate const file

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* change tauri fetch name

* update implementation

* update extension fetch

* make web version run properly

* disabled unused web settings

* fix all tests

* fix lint

* fix tests

* add mock for extension

* fix build

* update make and mise

* fix tsconfig for web-extensions

* fix loader type

* cleanup

* fix test

* update error handling + mcp should be working

* Update mcp init

* use separate is_web_app build property

* Remove fixed model catalog url

* fix additional tests

* fix download issue (event emitter not implemented correctly)

* Update Title html

* fix app logs

* update root tsx render timing

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-05 01:47:46 +07:00
Minh141120
0efdf54819 chore: relocate LICENSE to resources folder 2025-08-26 14:18:35 +07:00
Minh141120
8cdb9e943d chore: bundle license to linux dist 2025-08-26 13:57:45 +07:00
Minh141120
e2adb3037a chore: bundle license to resource mac 2025-08-26 12:30:23 +07:00
Minh141120
87f3b168c9 chore: bundle license to app 2025-08-26 11:17:59 +07:00
Faisal Amir
4137821e53 fix: system monitor window permission 2025-08-25 17:38:30 +07:00
Akarshan Biswas
510c70bdf7
feat: Add model compatibility check and memory estimation (#6243)
* feat: Add model compatibility check and memory estimation

This commit introduces a new feature to check if a given model is supported based on available device memory.

The change includes:
- A new `estimateKVCache` method that calculates the required memory for the model's KV cache. It uses GGUF metadata such as `block_count`, `head_count`, `key_length`, and `value_length` to perform the calculation.
- An `isModelSupported` method that combines the model file size and the estimated KV cache size to determine the total memory required. It then checks if any available device has sufficient free memory to load the model.
- An updated error message for the `version_backend` check to be more user-friendly, suggesting a stable internet connection as a potential solution for backend setup failures.

This functionality helps prevent the application from attempting to load models that would exceed the device's memory capacity, leading to more stable and predictable behavior.

fixes: #5505
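
The estimate boils down to one product; a back-of-the-envelope TypeScript version assuming an fp16 cache and a GQA-aware KV head count (the real method reads these fields from GGUF metadata):

```typescript
// One K and one V entry per layer, per KV head, per context position.
function estimateKVCacheBytes(
  blockCount: number,   // transformer layers (block_count)
  headCountKV: number,  // KV heads (assumed; falls back to head_count)
  keyLength: number,    // per-head key dim (attention.key_length)
  valueLength: number,  // per-head value dim (attention.value_length)
  ctxLen: number,
  bytesPerElement = 2   // fp16 cache assumed
): number {
  return blockCount * headCountKV * (keyLength + valueLength) * ctxLen * bytesPerElement
}
```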

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Extend this to available system RAM if GGML device is not available

* fix: Improve model metadata and memory checks

This commit refactors the logic for checking if a model is supported by a system's available memory.

**Key changes:**
- **Remote model support**: The `read_gguf_metadata` function can now fetch metadata from a remote URL by reading the file in chunks.
- **Improved KV cache size calculation**: The KV cache size is now estimated more accurately by using `attention.key_length` and `attention.value_length` from the GGUF metadata, with a fallback to `embedding_length`.
- **Granular memory check statuses**: The `isModelSupported` function now returns a more specific status (`'RED'`, `'YELLOW'`, `'GREEN'`) to indicate whether the model weights or the KV cache are too large for the available memory.
- **Consolidated logic**: The logic for checking local and remote models has been consolidated into a single `isModelSupported` function, improving code clarity and maintainability.

These changes provide more robust and informative model compatibility checks, especially for models hosted on remote servers.
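
The three statuses plausibly map onto two thresholds; an illustrative reading, not copied from the source:

```typescript
type SupportStatus = 'RED' | 'YELLOW' | 'GREEN'

function modelSupportStatus(
  weightsBytes: number,
  kvCacheBytes: number,
  freeBytes: number
): SupportStatus {
  if (weightsBytes > freeBytes) return 'RED' // weights alone do not fit
  if (weightsBytes + kvCacheBytes > freeBytes) return 'YELLOW' // weights fit, cache does not
  return 'GREEN'
}
```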

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Make ctx_size optional and use sum free memory across ggml devices

* feat: hub and dropdown model selection handle model compatibility

* feat: update badge model info color

* chore: enable detail page to get compatibility model

* chore: update copy

* chore: update shrink indicator UI

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-08-21 16:13:50 +05:30
Akarshan Biswas
5c3a6fec32
feat: Add support for custom environment variables to llama.cpp (#6256)
This commit adds a new setting `llamacpp_env` to the llama.cpp extension, allowing users to specify custom environment variables. These variables are passed to the backend process when it starts.

A new function `parseEnvFromString` is introduced to handle the parsing of the semicolon-separated key-value pairs from the user input. The environment variables are then used in the `load` function and when listing available devices. This enables more flexible configuration of the llama.cpp backend, such as specifying visible GPUs for Vulkan.

This change also updates the Tauri command `get_devices` to accept environment variables, ensuring that device discovery respects the user's settings.
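
A minimal sketch of the semicolon-separated parsing, e.g. `GGML_VK_VISIBLE_DEVICES=0;FOO=bar`; whitespace and malformed-entry handling are assumptions:

```typescript
function parseEnvFromString(input: string): Record<string, string> {
  const env: Record<string, string> = {}
  for (const pair of input.split(';')) {
    const idx = pair.indexOf('=')
    if (idx <= 0) continue // skip entries without a key
    const key = pair.slice(0, idx).trim()
    if (key) env[key] = pair.slice(idx + 1).trim()
  }
  return env
}
```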
2025-08-21 15:50:37 +05:30
Dinh Long Nguyen
32a2ca95b6
feat: gguf file size + hash validation (#5266) (#6259)
* feat: gguf file size + hash validation

* fix tests fe

* update cargo tests

* handle async download for both models and mmproj

* move progress tracker to models

* handle cancelled file downloads

* add cancellation mid hash run
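
A sketch of a streaming hash check that can be cancelled mid-run, as the last two bullets describe; it uses Node's crypto/fs for illustration, while the shipped code lives on the Tauri (Rust) side:

```typescript
import { createHash } from 'node:crypto'
import { createReadStream } from 'node:fs'

async function sha256WithCancel(path: string, signal: AbortSignal): Promise<string> {
  const hash = createHash('sha256')
  const stream = createReadStream(path)
  for await (const chunk of stream) {
    if (signal.aborted) {
      stream.destroy() // stop reading as soon as the download is cancelled
      throw new Error('hash verification cancelled')
    }
    hash.update(chunk as Buffer)
  }
  return hash.digest('hex')
}
```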
2025-08-21 16:17:58 +07:00
Louis
6b55812739
Merge pull request #6249 from menloresearch/feat/detect-cpu-arch-run-time
feat: detect cpu arch in runtime
2025-08-21 11:51:13 +07:00
Louis
51a9021994
fix: test 2025-08-21 11:30:48 +07:00
Louis
973a8dd8cc
fix: simplify cpu arch detection 2025-08-21 10:47:39 +07:00
Louis
6850dda108
feat: MCP server error handling 2025-08-20 23:42:12 +07:00
Louis
2398c0ab33
Update src-tauri/plugins/tauri-plugin-hardware/src/tests.rs
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-08-20 22:22:25 +07:00
Louis
ebae86f3e6
feat: detect cpu arch in runtime 2025-08-20 21:37:34 +07:00
Dinh Long Nguyen
b0eec07a01
Add contributing section for jan (#6231) (#6232)
* Add contributing section for jan

* Update CONTRIBUTING.md

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-08-20 10:18:35 +07:00