350 Commits

Author SHA1 Message Date
Roushan Kumar Singh
247db95bad
resolve TypeScript and Rust warnings (#6612)
* chore: fix warnings

* fix: add missing scrollContainerRef dependencies to React hooks

* fix: typo

* fix: remove unsupported fetch option and enable AsyncIterable types

- Removed `connectTimeout` from fetch init (not supported in RequestInit)
- Updated tsconfig to target ES2018

* chore: refactor rename

* fix(hooks): update dependency arrays for useThreadScrolling effects

* Add type.d.ts to extend RequestInit with connectionTimeout

* remove commented-out unused import
2025-10-01 16:06:41 +07:00
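For illustration, the type.d.ts approach this commit describes could look like the sketch below (assumed code, not the repository's actual file). TypeScript declaration merging adds an optional `connectionTimeout` field to the global `RequestInit`, so the Tauri-specific option type-checks even though the standard `RequestInit` does not define it:

```typescript
// types.d.ts (hedged sketch; the real file and option semantics may differ)
// Declaration merging: augment the built-in RequestInit with the optional
// field named in the commit, so callers can pass it without casts.
declare global {
  interface RequestInit {
    /** Connection timeout, assumed to be in milliseconds. */
    connectionTimeout?: number
  }
}

export {} // make this file a module so `declare global` takes effect
```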
Vanalite
262a1a9544 Merge remote-tracking branch 'origin/dev' into mobile/dev
# Conflicts:
#	src-tauri/src/core/setup.rs
#	src-tauri/src/lib.rs
#	web-app/src/hooks/useChat.ts
2025-10-01 09:52:01 +07:00
Dinh Long Nguyen
9a72a2d5d5 fix tauri test 2025-09-30 22:43:14 +07:00
Dinh Long Nguyen
d50226b4dd add missing closing test 2025-09-30 22:36:52 +07:00
Dinh Long Nguyen
817680565e remove test conflict 2025-09-30 22:33:51 +07:00
Dinh Long Nguyen
84f46dc997
Merge branch 'dev' into feat/sync-release=to-dev 2025-09-30 22:31:20 +07:00
Louis
3c7eb64353
fix: mcp bin path (#6667)
* fix: mcp bin path

* chore: clean up unused structs

* fix: bin name

* fix: tests
2025-09-30 22:29:15 +07:00
Dinh Long Nguyen
e6bc1182a6
Merge branch 'dev' into feat/sync-release=to-dev 2025-09-30 22:04:27 +07:00
Vanalite
6bd623c020 fix: Fix cargo test 2025-09-30 17:19:58 +07:00
Vanalite
a62852f384 fix: Restore default permission on desktop build
Restore desktop capabilities
Restore linter correctness
Restore different capabilities on each platform
2025-09-30 17:01:09 +07:00
Minh141120
508cbe16f8 refactor: remove redundant resource 2025-09-30 15:49:14 +07:00
Minh141120
0b8f3e01fb feat: add msi installer for windows 2025-09-30 15:32:29 +07:00
Vanalite
43d20e2a32 fix: revert the modification of vulkan 2025-09-30 14:50:54 +07:00
Akarshan
34b254e2d8
fix: Improve KV cache estimation robustness
The KV cache size calculation in estimate_kv_cache_internal now includes a fallback mechanism for models that do not explicitly define key_length and value_length in the GGUF metadata.

If these attention keys are missing, the head dimension (and thus key/value length) is calculated using the formula embedding_length / total_heads. This improves robustness and compatibility with GGUF models that don't have the proper keys in metadata.

Also adds logging of the full model metadata for easier debugging of the estimation process.
2025-09-30 11:14:18 +05:30
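The fallback described above can be sketched as follows, assuming illustrative field names rather than the plugin's actual GGUF metadata types:

```typescript
// Hedged sketch of the head-dimension fallback; the shape is assumed.
interface AttnMeta {
  keyLength?: number      // attention.key_length (may be absent)
  valueLength?: number    // attention.value_length (may be absent)
  embeddingLength: number // embedding_length
  headCount: number       // total attention heads
}

// When the explicit attention keys are missing, derive the head dimension
// as embedding_length / total_heads and use it for key and value length.
function kvHeadDims(meta: AttnMeta): { keyLen: number; valueLen: number } {
  const fallback = Math.floor(meta.embeddingLength / meta.headCount)
  return {
    keyLen: meta.keyLength ?? fallback,
    valueLen: meta.valueLength ?? fallback,
  }
}
```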
Nguyen Ngoc Minh
d315522c5a
Merge pull request #6618 from github-roushan/show-supported-files
Show supported files
2025-09-30 12:19:22 +07:00
Vanalite
549c962248 fix: Fix nvidia and vulkan after upgrade to stay compatible with mobile compilation as well 2025-09-30 09:44:21 +07:00
Louis
54d17c9c72
fix: migrate new mcp server config (#6651) 2025-09-30 00:07:57 +07:00
Vanalite
5e57caee43 Merge remote-tracking branch 'origin/dev' into mobile/dev
# Conflicts:
#	extensions/yarn.lock
#	package.json
#	src-tauri/plugins/tauri-plugin-hardware/src/vendor/vulkan.rs
#	src-tauri/src/lib.rs
#	yarn.lock
2025-09-29 22:22:00 +07:00
Nghia Doan
70ac13e536
Merge pull request #6643 from menloresearch/fix/model-name-change
fix: Apply model name change correctly
2025-09-29 22:15:13 +07:00
Louis
5fd249c72d
refactor: deprecate Vulkan external binaries (#6638)
* refactor: deprecate vulkan binary

refactor: clean up vulkan lib

chore: cleanup

chore: clean up

chore: clean up

fix: build

* fix: skip binaries download env

* Update src-tauri/utils/src/system.rs

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src-tauri/utils/src/system.rs

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-29 17:47:59 +07:00
Vanalite
03ee9c14a3 fix: Apply model name change correctly
# Conflicts:
#	web-app/src/lib/utils.ts
2025-09-29 16:39:32 +07:00
Roushan Singh
7d6e0c22ac chore: fix Encoded logging 2025-09-26 15:02:23 +05:30
Faisal Amir
b7dae19756
feat: custom downloaded model name (#6588)
* feat: add field to edit model name

* fix: update model

* chore: update UI form with save button; editing capabilities and renaming the folder will require the save button

* fix: relocate model

* chore: update and refresh the model provider list, and update the test case

* chore: state loader

* fix: model path

* fix: model config update

* chore: fix removal of provider dependencies on the edit model dialog

* chore: avoid shifted model name or id

---------

Co-authored-by: Louis <louis@jan.ai>
2025-09-26 15:25:44 +07:00
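The relocate and rename bullets above imply a two-step flow: move the model folder on disk, then update the stored id and path so the provider list stays consistent. A minimal sketch, with `updateModelConfig` as a hypothetical helper:

```typescript
// Hedged sketch of the rename-and-relocate flow; updateModelConfig is a
// hypothetical helper, not the app's actual API.
import { rename } from '@tauri-apps/plugin-fs'

async function renameDownloadedModel(
  modelsDir: string,
  oldId: string,
  newId: string,
  updateModelConfig: (oldId: string, newId: string) => Promise<void>
): Promise<void> {
  // Move the folder first, so a failed rename leaves the config untouched.
  await rename(`${modelsDir}/${oldId}`, `${modelsDir}/${newId}`)
  // Then point the provider's model entry at the new id/path.
  await updateModelConfig(oldId, newId)
}
```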
Vanalite
e8cac2823f feat: Add dev-ios to makefile for ios development 2025-09-26 11:59:50 +07:00
Vanalite
b0ad2a6b7a feat: Add dev-android to makefile 2025-09-26 11:53:14 +07:00
Vanalite
a0aa0074f4 Merge remote-tracking branch 'origin/dev' into mobile/dev
# Conflicts:
#	web-app/src/routeTree.gen.ts
#	web-app/src/routes/index.tsx
2025-09-26 11:09:50 +07:00
Vanalite
fdf9f40aef fix: Android releasable build 2025-09-26 09:42:00 +07:00
Akarshan Biswas
11b3a60675
fix: refactor, fix and move gguf support utilities to backend (#6584)
* feat: move estimateKVCacheSize to BE

* feat: Migrate model planning to backend

This commit migrates the model load planning logic from the frontend to the Tauri backend. This refactors the `planModelLoad` and `isModelSupported` methods into the `tauri-plugin-llamacpp` plugin, making them directly callable from the Rust core.

The model planning now incorporates a more robust and accurate memory estimation, considering both VRAM and system RAM, and introduces a `batch_size` parameter to the model plan.

**Key changes:**

- **Moved `planModelLoad` to `tauri-plugin-llamacpp`:** The core logic for determining GPU layers, context length, and memory offloading is now in Rust for better performance and accuracy.
- **Moved `isModelSupported` to `tauri-plugin-llamacpp`:** The model support check is also now handled by the backend.
- **Removed `getChatClient` from `AIEngine`:** This optional method was not implemented and has been removed from the abstract class.
- **Improved KV Cache estimation:** The `estimate_kv_cache_internal` function in Rust now accounts for `attention.key_length` and `attention.value_length` if available, and considers sliding window attention for more precise estimates.
- **Introduced `batch_size` in ModelPlan:** The model plan now includes a `batch_size` property, which will be automatically adjusted based on the determined `ModelMode` (e.g., lower for CPU/Hybrid modes).
- **Updated `llamacpp-extension`:** The frontend extension now calls the new Tauri commands for model planning and support checks.
- **Removed `batch_size` from `llamacpp-extension/settings.json`:** The batch size is now dynamically determined by the planning logic and will be set as a model setting directly.
- **Updated `ModelSetting` and `useModelProvider` hooks:** These now handle the new `batch_size` property in model settings.
- **Added new Tauri commands and permissions:** `get_model_size`, `is_model_supported`, and `plan_model_load` are new commands with corresponding permissions.
- **Consolidated `ModelSupportStatus` and `KVCacheEstimate`:** These types are now defined in `src-tauri/plugins/tauri-plugin-llamacpp/src/gguf/types.rs`.

This refactoring centralizes critical model resource management logic, improving consistency and maintainability, and lays the groundwork for more sophisticated model loading strategies.

* feat: refine model planner to handle more memory scenarios

This commit introduces several improvements to the `plan_model_load` function, enhancing its ability to determine a suitable model loading strategy based on system memory constraints. Specifically, it includes:

-   **VRAM calculation improvements:**  Corrects the calculation of total VRAM by iterating over GPUs and multiplying by 1024*1024, improving accuracy.
-   **Hybrid plan optimization:**  Implements a more robust hybrid plan strategy, iterating through GPU layer configurations to find the highest possible GPU usage while remaining within VRAM limits.
-   **Minimum context length enforcement:** Enforces a minimum context length for the model, ensuring that the model can be loaded and used effectively.
-   **Fallback to CPU mode:** If a hybrid plan isn't feasible, it now correctly falls back to a CPU-only mode.
-   **Improved logging:** Enhanced logging to provide more detailed information about the memory planning process, including VRAM, RAM, and GPU layers.
-   **Batch size adjustment:** Updated batch size based on the selected mode, ensuring efficient utilization of available resources.
-   **Error handling and edge cases:**  Improved error handling and edge case management to prevent unexpected failures.
-   **Constants:** Added constants for easier maintenance and understanding.
-   **Power-of-2 adjustment:** Added power of 2 adjustment for max context length to ensure correct sizing for the LLM.

These changes improve the reliability and robustness of the model planning process, allowing it to handle a wider range of hardware configurations and model sizes.

* Add log for raw GPU info from tauri-plugin-hardware

* chore: update linux runner for tauri build

* feat: Improve GPU memory calculation for unified memory

This commit improves the logic for calculating usable VRAM, particularly for systems with **unified memory** like Apple Silicon. Previously, the application would report 0 total VRAM if no dedicated GPUs were found, leading to incorrect calculations and failed model loads.

This change modifies the VRAM calculation to fall back to the total system RAM if no discrete GPUs are detected. This is a common and correct approach for unified memory architectures, where the CPU and GPU share the same memory pool.

Additionally, this commit refactors the logic for calculating usable VRAM and RAM to prevent potential underflow by checking if the total memory is greater than the reserved bytes before subtracting. This ensures the calculation remains safe and correct.

* chore: fix update migration version

* fix: enable unified memory support on model support indicator

* Use total_system_memory in bytes

---------

Co-authored-by: Minh141120 <minh.itptit@gmail.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-09-25 12:17:57 +05:30
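The memory rules this commit describes condense into a short sketch. Everything below is assumed for illustration (field names, the reserve constant); per the commit, the actual implementation is Rust code in `tauri-plugin-llamacpp`:

```typescript
// Hedged sketch of the usable-memory rules from the commit message.
interface GpuInfo { totalMemoryMiB: number } // assumed shape

const RESERVED_BYTES = 512 * 1024 * 1024 // assumed safety reserve

function usableVramBytes(gpus: GpuInfo[], totalSystemMemoryBytes: number): number {
  // Total VRAM: sum over GPUs, converting MiB to bytes (multiply by 1024*1024).
  let totalVram = gpus.reduce((sum, g) => sum + g.totalMemoryMiB * 1024 * 1024, 0)
  // Unified-memory fallback (e.g. Apple Silicon): with no discrete GPU,
  // treat total system RAM as the shared pool instead of reporting 0.
  if (totalVram === 0) totalVram = totalSystemMemoryBytes
  // Underflow-safe subtraction: only subtract the reserve when it fits.
  return totalVram > RESERVED_BYTES ? totalVram - RESERVED_BYTES : 0
}

// Power-of-2 adjustment for the max context length, as mentioned above.
function floorToPowerOfTwo(n: number): number {
  return n < 1 ? 0 : 2 ** Math.floor(Math.log2(n))
}
```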
Vanalite
814024982e feat: Experiment removing hardware permission 2025-09-25 00:49:14 +07:00
Vanalite
b2c5063e0b Merge remote-tracking branch 'origin/dev' into mobile/dev
# Conflicts:
#	src-tauri/src/core/server/proxy.rs
#	src-tauri/tauri.conf.json
#	web-app/src/containers/LeftPanel.tsx
#	web-app/src/containers/__tests__/ChatInput.test.tsx
#	web-app/src/lib/platform/const.ts
#	yarn.lock
2025-09-24 16:01:33 +07:00
Minh141120
9568ff12e8 feat: add cleanup logic for windows installer 2025-09-24 15:47:46 +07:00
Vanalite
6747a0e9e2 Merge branch 'mobile/dev' into enhancement/layout-mobile
# Conflicts:
#	package.json
2025-09-24 15:07:49 +07:00
Vanalite
b8da32c14e fix: Add frontendDist to ios configuration 2025-09-24 14:31:13 +07:00
Nguyen Ngoc Minh
c46e13b8b1
Merge pull request #6545 from menloresearch/chore/standardize-build-windows
chore: use default nsis template
2025-09-23 22:23:17 +07:00
Minh141120
f1d97ac834 chore: install vc_redist.x64 from script 2025-09-23 20:42:34 +07:00
Roushan Kumar Singh
3f51c35229
feat: support .zip archives for manual backend install (#6534)
* feat(llamacpp): support .zip archives for manual backend install

* Update Lock Files
2025-09-23 18:02:06 +05:30
Minh141120
6dc38f18cf chore: standardize build process on windows 2025-09-23 19:00:39 +07:00
Minh141120
50b66eff74 chore: update hooks to install vcredist.exe and update path for dll and license file 2025-09-23 19:00:39 +07:00
Minh141120
7257eb4ae6 chore: add LICENSE and vulkan-1.dll at instdir 2025-09-23 19:00:39 +07:00
Minh141120
8951123557 chore: add libvulkan for windows 2025-09-23 19:00:39 +07:00
Minh141120
0151274659 chore: update installerIcon on nsis config 2025-09-23 19:00:39 +07:00
Minh141120
53707a5083 chore: use default nsis template 2025-09-23 19:00:39 +07:00
Louis
7f09c36a92
fix: LocalAPI server trusted host should accept asterisk (#6551) 2025-09-23 17:45:37 +07:00
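One plausible reading of this fix, sketched with assumed names: a bare `*` entry in the trusted-host list matches any host, and wildcard segments match host patterns:

```typescript
// Hedged sketch of asterisk-aware trusted-host matching; not the server's
// actual implementation.
function isTrustedHost(host: string, patterns: string[]): boolean {
  return patterns.some((pattern) => {
    if (pattern === '*') return true // a bare asterisk trusts any host
    // Turn a pattern like "*.example.com" into an anchored regex.
    const escaped = pattern
      .split('*')
      .map((part) => part.replace(/[.+?^${}()|[\]\\]/g, '\\$&'))
      .join('.*')
    return new RegExp(`^${escaped}$`).test(host)
  })
}
```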
Nghia Doan
6f827872fb
fix: Catch local API server various errors (#6548)
* fix: Catch local API server various errors

* chore: Add tests to cover error catches
2025-09-23 17:40:16 +07:00
Louis
568ee857d5
fix: custom fetch for all providers (#6538)
* fix: custom fetch for all providers

* fix: running in development should use the built-in fetch
2025-09-23 09:55:36 +07:00
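The second bullet suggests the fetch choice is environment-dependent. A minimal sketch, assuming a Vite-style `DEV` flag and the Tauri HTTP plugin (the repository's actual wiring may differ):

```typescript
// Hedged sketch: use the built-in fetch in development and the Tauri HTTP
// plugin's fetch otherwise.
import { fetch as tauriFetch } from '@tauri-apps/plugin-http'

const isDev = import.meta.env.DEV // assumed Vite environment flag

export const providerFetch: typeof fetch = (input, init) =>
  isDev ? fetch(input, init) : tauriFetch(input, init)
```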
Akarshan Biswas
885da29f28
feat: add getTokensCount method to compute token usage (#6467)
* feat: add getTokensCount method to compute token usage

Implemented a new async `getTokensCount` function in the LLaMA.cpp extension.
The method validates the model session, checks process health, applies the request template, and tokenizes the resulting prompt to return the token count. Includes detailed error handling for crashed models and API failures, enabling callers to assess token usage before sending completions.

* Fix: typos

* chore: update ui token usage

* chore: remove unused code

* feat: add image token handling for multimodal LlamaCPP models

Implemented support for counting image tokens when using vision-enabled models:
- Extended `SessionInfo` with optional `mmprojPath` to store the multimodal projector file.
- Propagated `mmproj_path` from the Tauri plugin into the session info.
- Added import of `chatCompletionRequestMessage` and enhanced token calculation logic in the LlamaCPP extension:
- Detects image content in messages.
- Reads GGUF metadata from `mmprojPath` to compute accurate image token counts.
- Provides a fallback estimation if metadata reading fails.
- Returns the sum of text and image tokens.
- Introduced helper methods `calculateImageTokens` and `estimateImageTokensFallback`.
- Minor clean-ups such as comment capitalization and debug logging.

* chore: update FE to send message params including content type image_url

* fix mmproj path from session info and num tokens calculation

* fix: Correct image token estimation calculation in llamacpp extension

This commit addresses an inaccurate token count for images in the llama.cpp extension.

The previous logic incorrectly calculated the token count based on image patch size and dimensions. This has been replaced with a more precise method that uses the clip.vision.projection_dim value from the model metadata.

Additionally, unnecessary debug logging was removed, and a new log was added to show the mmproj metadata for improved visibility.

* fix per image calc

* fix: crash due to force unwrap

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Louis <louis@jan.ai>
2025-09-23 07:52:19 +05:30
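The text-token path from the first bullet (validate the session, apply the request template, tokenize the prompt) can be sketched against llama.cpp-style server endpoints. Endpoint names and response shapes are assumptions here, not the extension's confirmed API, and session validation is omitted:

```typescript
// Hedged sketch of the getTokensCount flow described above.
async function getTokensCount(
  port: number,
  messages: Array<{ role: string; content: string }>
): Promise<number> {
  const base = `http://127.0.0.1:${port}`

  // 1. Apply the model's chat template to render the final prompt.
  const tpl = await fetch(`${base}/apply-template`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages }),
  })
  if (!tpl.ok) throw new Error(`apply-template failed: ${tpl.status}`)
  const { prompt } = (await tpl.json()) as { prompt: string }

  // 2. Tokenize the rendered prompt and return the token count.
  const tok = await fetch(`${base}/tokenize`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ content: prompt }),
  })
  if (!tok.ok) throw new Error(`tokenize failed: ${tok.status}`)
  const { tokens } = (await tok.json()) as { tokens: number[] }
  return tokens.length
}
```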
Faisal Amir
47b7509d2e remove gen android 2025-09-22 20:59:35 +07:00
Vanalite
8e1654961a chore: Configure iOS to use the same build mechanism, removing an unnecessary plugin 2025-09-22 11:28:01 +07:00
Vanalite
003598204e Merge remote-tracking branch 'origin/dev' into mobile/dev
# Conflicts:
#	src-tauri/.cargo/config.toml
#	src-tauri/Cargo.toml
#	src-tauri/src/lib.rs
#	web-app/src/containers/__tests__/ChatInput.test.tsx
#	web-app/src/routeTree.gen.ts
#	web-app/src/routes/index.tsx
#	web-app/src/routes/threads/$threadId.tsx
#	yarn.lock
2025-09-22 11:24:20 +07:00
Faisal Amir
29862435ab
Merge pull request #6526 from github-roushan/fix-number-input
fix(number-input): preserve '0.0x' format when typing (#6520)
2025-09-19 23:25:22 +07:00