18 Commits

Author SHA1 Message Date
Roushan Kumar Singh
247db95bad
resolve TypeScript and Rust warnings (#6612)
* chore: fix warnings

* fix: add missing scrollContainerRef dependencies to React hooks

* fix: typo

* fix: remove unsupported fetch option and enable AsyncIterable types

- Removed `connectTimeout` from fetch init (not supported in RequestInit)
- Updated tsconfig to target ES2018

* chore: refactor rename

* fix(hooks): update dependency arrays for useThreadScrolling effects

* Add type.d.ts to extend RequestInit with connectionTimeout (see the sketch below)

* remove commented unused import
2025-10-01 16:06:41 +07:00
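As an aside on the last two bullets: in TypeScript, a global declaration file can merge a custom property into `RequestInit`. Below is a minimal sketch of that approach, not the repository's actual type.d.ts; the `connectionTimeout` name comes from the commit message, while its type and the millisecond unit are assumptions.

```typescript
// type.d.ts (sketch): a declaration file with no top-level import/export is a
// global script, so this interface merges with the built-in RequestInit.
interface RequestInit {
  /** Connection timeout, assumed to be in milliseconds (hypothetical unit). */
  connectionTimeout?: number
}
```

With this in place, `fetch(url, { connectionTimeout: 5_000 })` type-checks; note that the standard `fetch` ignores unknown init fields at runtime, so a custom wrapper must actually honor the value.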
Vanalite
43d20e2a32 fix: revert the Vulkan modification 2025-09-30 14:50:54 +07:00
Vanalite
549c962248 fix: Fix NVIDIA and Vulkan after the upgrade to remain compatible with mobile compilation 2025-09-30 09:44:21 +07:00
Vanalite
5e57caee43 Merge remote-tracking branch 'origin/dev' into mobile/dev
# Conflicts:
#	extensions/yarn.lock
#	package.json
#	src-tauri/plugins/tauri-plugin-hardware/src/vendor/vulkan.rs
#	src-tauri/src/lib.rs
#	yarn.lock
2025-09-29 22:22:00 +07:00
Louis
5fd249c72d
refactor: deprecate Vulkan external binaries (#6638)
* refactor: deprecate vulkan binary

refactor: clean up vulkan lib

chore: cleanup

chore: clean up

chore: clean up

fix: build

* fix: skip binaries download env

* Update src-tauri/utils/src/system.rs

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src-tauri/utils/src/system.rs

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-29 17:47:59 +07:00
Vanalite
a0aa0074f4 Merge remote-tracking branch 'origin/dev' into mobile/dev
# Conflicts:
#	web-app/src/routeTree.gen.ts
#	web-app/src/routes/index.tsx
2025-09-26 11:09:50 +07:00
Akarshan Biswas
11b3a60675
fix: refactor, fix and move gguf support utilities to backend (#6584)
* feat: move estimateKVCacheSize to BE

* feat: Migrate model planning to backend

This commit migrates the model load planning logic from the frontend to the Tauri backend, refactoring the `planModelLoad` and `isModelSupported` methods into the `tauri-plugin-llamacpp` plugin and making them directly callable from the Rust core.

The model planning now incorporates a more robust and accurate memory estimation, considering both VRAM and system RAM, and introduces a `batch_size` parameter to the model plan.

**Key changes:**

- **Moved `planModelLoad` to `tauri-plugin-llamacpp`:** The core logic for determining GPU layers, context length, and memory offloading is now in Rust for better performance and accuracy.
- **Moved `isModelSupported` to `tauri-plugin-llamacpp`:** The model support check is also now handled by the backend.
- **Removed `getChatClient` from `AIEngine`:** This optional method was not implemented and has been removed from the abstract class.
- **Improved KV Cache estimation:** The `estimate_kv_cache_internal` function in Rust now accounts for `attention.key_length` and `attention.value_length` if available, and considers sliding window attention for more precise estimates.
- **Introduced `batch_size` in ModelPlan:** The model plan now includes a `batch_size` property, which will be automatically adjusted based on the determined `ModelMode` (e.g., lower for CPU/Hybrid modes).
- **Updated `llamacpp-extension`:** The frontend extension now calls the new Tauri commands for model planning and support checks.
- **Removed `batch_size` from `llamacpp-extension/settings.json`:** The batch size is now dynamically determined by the planning logic and will be set as a model setting directly.
- **Updated `ModelSetting` and `useModelProvider` hooks:** These now handle the new `batch_size` property in model settings.
- **Added new Tauri commands and permissions:** `get_model_size`, `is_model_supported`, and `plan_model_load` are new commands with corresponding permissions.
- **Consolidated `ModelSupportStatus` and `KVCacheEstimate`:** These types are now defined in `src-tauri/plugins/tauri-plugin-llamacpp/src/gguf/types.rs`.

This refactoring centralizes critical model resource management logic, improving consistency and maintainability, and lays the groundwork for more sophisticated model loading strategies.
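To make the new surface concrete, a frontend call into these commands might look like the sketch below. The command names (`plan_model_load`, `is_model_supported`, `get_model_size`) are the ones listed above; the `plugin:llamacpp|` routing prefix, the argument names, and the result shapes are assumptions.

```typescript
import { invoke } from '@tauri-apps/api/core'

// Assumed result shape, based on the plan description above.
interface ModelPlan {
  gpuLayers: number
  maxContextLength: number
  batchSize: number
  mode: 'GPU' | 'Hybrid' | 'CPU' | 'Unsupported'
}

// Hypothetical wrappers around the new Tauri commands; the routing string
// and the `modelPath` argument name are assumptions, the command names are not.
export function planModelLoad(modelPath: string): Promise<ModelPlan> {
  return invoke<ModelPlan>('plugin:llamacpp|plan_model_load', { modelPath })
}

export function isModelSupported(modelPath: string): Promise<string> {
  return invoke<string>('plugin:llamacpp|is_model_supported', { modelPath })
}

export function getModelSize(modelPath: string): Promise<number> {
  return invoke<number>('plugin:llamacpp|get_model_size', { modelPath })
}
```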

* feat: refine model planner to handle more memory scenarios

This commit introduces several improvements to the `plan_model_load` function, enhancing its ability to determine a suitable model loading strategy based on system memory constraints. Specifically, it includes:

- **VRAM calculation improvements:** Corrects the calculation of total VRAM by summing across all GPUs and converting the reported sizes from MiB to bytes (multiplying by 1024*1024), improving accuracy.
- **Hybrid plan optimization:** Implements a more robust hybrid plan strategy, iterating through GPU layer configurations to find the highest possible GPU usage that stays within VRAM limits (see the sketch below).
- **Minimum context length enforcement:** Enforces a minimum context length so that the model can be loaded and used effectively.
- **Fallback to CPU mode:** If no hybrid plan is feasible, the planner now correctly falls back to a CPU-only mode.
- **Improved logging:** Logs more detailed information about the memory planning process, including VRAM, RAM, and GPU layers.
- **Batch size adjustment:** Updates the batch size based on the selected mode, ensuring efficient utilization of available resources.
- **Error handling and edge cases:** Improves error handling and edge-case management to prevent unexpected failures.
- **Constants:** Adds named constants for easier maintenance and understanding.
- **Power-of-2 adjustment:** Rounds the maximum context length to a power of two to ensure a correctly sized context for the LLM.

These changes improve the reliability and robustness of the model planning process, allowing it to handle a wider range of hardware configurations and model sizes.
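As an illustration only (the real implementation lives in the Rust plugin), the hybrid search and power-of-2 adjustment described above can be sketched roughly as follows; every name, constant, and the per-layer VRAM estimate are assumptions.

```typescript
const MIN_CONTEXT_LENGTH = 2048 // hypothetical minimum

// Round a context length down to the nearest power of two.
function floorToPowerOfTwo(n: number): number {
  return 2 ** Math.floor(Math.log2(n))
}

// Walk down from full offload to find the highest GPU layer count that
// still fits in usable VRAM; otherwise fall back to CPU-only mode.
function planHybrid(
  totalLayers: number,
  usableVramBytes: number,
  vramPerLayerBytes: number, // hypothetical per-layer estimate
  requestedContextLength: number
) {
  const contextLength = Math.max(
    MIN_CONTEXT_LENGTH,
    floorToPowerOfTwo(requestedContextLength)
  )
  for (let gpuLayers = totalLayers; gpuLayers > 0; gpuLayers--) {
    if (gpuLayers * vramPerLayerBytes <= usableVramBytes) {
      return { mode: 'Hybrid', gpuLayers, contextLength, batchSize: 64 }
    }
  }
  return { mode: 'CPU', gpuLayers: 0, contextLength, batchSize: 32 }
}
```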

* Add log for raw GPU info from tauri-plugin-hardware

* chore: update linux runner for tauri build

* feat: Improve GPU memory calculation for unified memory

This commit improves the logic for calculating usable VRAM, particularly for systems with **unified memory** like Apple Silicon. Previously, the application would report 0 total VRAM if no dedicated GPUs were found, leading to incorrect calculations and failed model loads.

This change modifies the VRAM calculation to fall back to the total system RAM if no discrete GPUs are detected. This is a common and correct approach for unified memory architectures, where the CPU and GPU share the same memory pool.

Additionally, this commit refactors the logic for calculating usable VRAM and RAM to prevent potential underflow by checking if the total memory is greater than the reserved bytes before subtracting. This ensures the calculation remains safe and correct.
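The two guards described here are small; a sketch in TypeScript (the actual code is Rust, and the names and reserve size are assumptions) could read:

```typescript
const RESERVED_BYTES = 2 * 1024 ** 3 // hypothetical 2 GiB reserve

function usableMemory(totalVramBytes: number, totalSystemRamBytes: number): number {
  // Unified memory fallback: if no discrete GPU reports VRAM, treat the
  // shared system RAM pool as the GPU's memory (e.g. Apple Silicon).
  const total = totalVramBytes > 0 ? totalVramBytes : totalSystemRamBytes
  // Underflow guard: only subtract the reserve when it actually fits.
  return total > RESERVED_BYTES ? total - RESERVED_BYTES : 0
}
```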

* chore: fix update migration version

* fix: enable unified memory support on model support indicator

* Use total_system_memory in bytes

---------

Co-authored-by: Minh141120 <minh.itptit@gmail.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-09-25 12:17:57 +05:30
Vanalite
814024982e feat: Experiment with removing the hardware permission 2025-09-25 00:49:14 +07:00
Vanalite
15d56e8e7e chore: Shrink the Android app to a minimal size with the release build type 2025-09-18 13:35:50 +07:00
Vanalite
224bee5c66 feat: Adjust UI for mobile resolutions
Feature:
- Adjust the home screen and chat screen for mobile devices
- Fix tests for both FE and BE
Self-test:
- Confirmed runnable on both Android and iOS
- Confirmed runnable on the desktop app
- All test suites passed
- Works with the ChatGPT API
2025-09-16 20:38:56 +07:00
Vanalite
fd046a2d08 fix: Fix inconsistent datatype parsing across platforms 2025-09-16 20:38:56 +07:00
Vanalite
633a6ac032 fix: Reconfigure and add the toolchain to get the Android app running again 2025-09-16 20:38:56 +07:00
Louis
51a9021994
fix: test 2025-08-21 11:30:48 +07:00
Louis
973a8dd8cc
fix: simplify cpu arch detection 2025-08-21 10:47:39 +07:00
Louis
2398c0ab33
Update src-tauri/plugins/tauri-plugin-hardware/src/tests.rs
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-08-20 22:22:25 +07:00
Louis
ebae86f3e6
feat: detect cpu arch in runtime 2025-08-20 21:37:34 +07:00
Louis
13a1969150
feat: MCP - State update 2025-08-15 10:02:06 +07:00
Dinh Long Nguyen
e1c8d98bf2
Backend Architecture Refactoring (#6094) (#6162)
* add llamacpp plugin

* Refactor llamacpp plugin

* add utils plugin

* remove utils folder

* add hardware implementation

* add utils folder + move utils function

* organize cargo files

* refactor utils src

* refactor util

* apply fmt

* fmt

* Update gguf + reformat

* add permission for gguf commands

* fix cargo test on Windows

* revert yarn lock

* remove cargo.lock for hardware plugin

* ignore cargo.lock file

* Fix hardware invoke + refactor hardware + refactor tests, constants

* use api wrapper in extension to invoke hardware call + api wrapper build integration

* add newline at EOF (per Akarshan)

* add vi mock for getSystemInfo
2025-08-15 08:59:01 +07:00