* feat: Adjust RAM/VRAM calculation for unified memory systems
This commit refactors the logic for calculating **total RAM** and **total VRAM** in `is_model_supported` and `plan_model_load` commands, specifically targeting systems with **unified memory** (like modern macOS devices where the GPU list may be empty).
The changes are as follows:
* **Total RAM Calculation:** If no GPUs are detected (`sys_info.gpus.is_empty()` is true), **total RAM** is now set to 0. This avoids confusing total system memory with dedicated GPU memory when planning model placement (see the sketch below).
* **Total VRAM Calculation:** If no GPUs are detected, **total VRAM** is still calculated as the system's **total memory (RAM)**, as this shared memory acts as VRAM on unified memory architectures.
This adjustment improves the accuracy of memory availability checks and model planning on unified memory systems.
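For illustration, a minimal Rust sketch of the adjustment under a simplified, hypothetical `SystemInfo` shape; the struct and field names below are assumptions, not the plugin's actual types:

```rust
/// Hypothetical system snapshot; field names are illustrative only.
struct GpuInfo {
    total_memory_mb: u64,
}

struct SystemInfo {
    total_memory_bytes: u64, // total system RAM
    gpus: Vec<GpuInfo>,
}

/// Returns (total_ram_bytes, total_vram_bytes) for planning purposes.
fn planning_memory_totals(sys_info: &SystemInfo) -> (u64, u64) {
    if sys_info.gpus.is_empty() {
        // Unified memory (e.g. Apple Silicon): the shared pool acts as VRAM,
        // so report RAM as 0 to avoid counting the same memory twice.
        (0, sys_info.total_memory_bytes)
    } else {
        // Discrete GPUs: RAM and VRAM are separate pools.
        let vram: u64 = sys_info
            .gpus
            .iter()
            .map(|g| g.total_memory_mb * 1024 * 1024)
            .sum();
        (sys_info.total_memory_bytes, vram)
    }
}
```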
* fix: total usable memory in case there is no system vram reported
* chore: temporarily change to self-hosted runner mac
* ci: revert back to github hosted runner macos
---------
Co-authored-by: Louis <louis@jan.ai>
Co-authored-by: Minh141120 <minh.itptit@gmail.com>
The KV cache size calculation in `estimate_kv_cache_internal` now includes a fallback mechanism for models that do not explicitly define `key_length` and `value_length` in the GGUF metadata.
If these attention keys are missing, the head dimension (and thus the key/value length) is derived as `embedding_length / total_heads`. This improves robustness and compatibility with GGUF models whose metadata lacks these keys.
Also adds logging of the full model metadata for easier debugging of the estimation process.
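A minimal sketch of that fallback, assuming a flattened metadata map; the helper name and key-lookup style here are illustrative, not the plugin's actual GGUF accessors:

```rust
use std::collections::HashMap;

/// Simplified stand-in for parsed GGUF metadata (illustrative only).
type Metadata = HashMap<String, u64>;

/// Resolve per-head key/value lengths, falling back to
/// embedding_length / head_count when the explicit
/// attention.key_length / attention.value_length keys are absent.
fn kv_dims(meta: &Metadata, arch: &str) -> (u64, u64) {
    let embedding_length = meta[&format!("{arch}.embedding_length")];
    let head_count = meta[&format!("{arch}.attention.head_count")];
    let fallback_head_dim = embedding_length / head_count;

    let key_length = meta
        .get(&format!("{arch}.attention.key_length"))
        .copied()
        .unwrap_or(fallback_head_dim);
    let value_length = meta
        .get(&format!("{arch}.attention.value_length"))
        .copied()
        .unwrap_or(fallback_head_dim);

    (key_length, value_length)
}
```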
* feat: add field to edit model name
* fix: update model
* chore: update UI form with save button; editing capabilities and renaming the model folder now go through the save button
* fix: relocate model
* chore: update and refresh the model provider list; also update test cases
* chore: state loader
* fix: model path
* fix: model config update
* chore: fix removing provider dependencies in the edit model dialog
* chore: avoid shifting the model name or id
---------
Co-authored-by: Louis <louis@jan.ai>
* feat: move estimateKVCacheSize to BE
* feat: Migrate model planning to backend
This commit migrates the model load planning logic from the frontend to the Tauri backend. This refactors the `planModelLoad` and `isModelSupported` methods into the `tauri-plugin-llamacpp` plugin, making them directly callable from the Rust core.
The model planning now incorporates a more robust and accurate memory estimation, considering both VRAM and system RAM, and introduces a `batch_size` parameter to the model plan.
**Key changes:**
- **Moved `planModelLoad` to `tauri-plugin-llamacpp`:** The core logic for determining GPU layers, context length, and memory offloading is now in Rust for better performance and accuracy.
- **Moved `isModelSupported` to `tauri-plugin-llamacpp`:** The model support check is also now handled by the backend.
- **Removed `getChatClient` from `AIEngine`:** This optional method was not implemented and has been removed from the abstract class.
- **Improved KV Cache estimation:** The `estimate_kv_cache_internal` function in Rust now accounts for `attention.key_length` and `attention.value_length` if available, and considers sliding window attention for more precise estimates.
- **Introduced `batch_size` in `ModelPlan`:** The model plan now includes a `batch_size` property, which is automatically adjusted based on the determined `ModelMode` (e.g., lower for CPU/Hybrid modes); see the sketch below.
- **Updated `llamacpp-extension`:** The frontend extension now calls the new Tauri commands for model planning and support checks.
- **Removed `batch_size` from `llamacpp-extension/settings.json`:** The batch size is now dynamically determined by the planning logic and will be set as a model setting directly.
- **Updated `ModelSetting` and `useModelProvider` hooks:** These now handle the new `batch_size` property in model settings.
- **Added new Tauri commands and permissions:** `get_model_size`, `is_model_supported`, and `plan_model_load` are new commands with corresponding permissions.
- **Consolidated `ModelSupportStatus` and `KVCacheEstimate`:** These types are now defined in `src/tauri/plugins/tauri-plugin-llamacpp/src/gguf/types.rs`.
This refactoring centralizes critical model resource management logic, improving consistency and maintainability, and lays the groundwork for more sophisticated model loading strategies.
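As a rough sketch of the `batch_size` behavior referenced above, under assumed `ModelPlan` fields, `ModelMode` variants, and batch values (none of which are taken from the actual plugin):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum ModelMode {
    Gpu,    // all layers offloaded to VRAM
    Hybrid, // some layers on GPU, the rest on CPU
    Cpu,    // no GPU offload
}

/// Illustrative shape of the plan returned by `plan_model_load`.
#[derive(Debug)]
struct ModelPlan {
    gpu_layers: u32,
    max_context_length: u64,
    mode: ModelMode,
    batch_size: u32,
}

/// Pick a batch size from the chosen mode; the exact values are assumed.
fn batch_size_for(mode: ModelMode) -> u32 {
    match mode {
        // Plenty of VRAM headroom: keep a large default batch.
        ModelMode::Gpu => 2048,
        // Hybrid/CPU paths use smaller batches to limit compute-buffer memory.
        ModelMode::Hybrid => 256,
        ModelMode::Cpu => 64,
    }
}

/// Apply the mode-derived batch size before returning the plan.
fn finalize_plan(mut plan: ModelPlan) -> ModelPlan {
    plan.batch_size = batch_size_for(plan.mode);
    plan
}
```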
* feat: refine model planner to handle more memory scenarios
This commit introduces several improvements to the `plan_model_load` function, enhancing its ability to determine a suitable model loading strategy based on system memory constraints. Specifically, it includes:
- **VRAM calculation improvements:** Corrects the total VRAM calculation by summing across all detected GPUs and multiplying the reported per-GPU memory by 1024*1024 to convert it to bytes, improving accuracy.
- **Hybrid plan optimization:** Implements a more robust hybrid plan strategy, iterating through GPU layer configurations to find the highest possible GPU usage while remaining within VRAM limits.
- **Minimum context length enforcement:** Enforces a minimum context length for the model, ensuring that the model can be loaded and used effectively.
- **Fallback to CPU mode:** If a hybrid plan isn't feasible, it now correctly falls back to a CPU-only mode.
- **Improved logging:** Enhanced logging to provide more detailed information about the memory planning process, including VRAM, RAM, and GPU layers.
- **Batch size adjustment:** Updated batch size based on the selected mode, ensuring efficient utilization of available resources.
- **Error handling and edge cases:** Improved error handling and edge case management to prevent unexpected failures.
- **Constants:** Added constants for easier maintenance and understanding.
- **Power-of-2 adjustment:** Added a power-of-2 adjustment for the max context length to ensure correct sizing for the LLM (see the sketch below).
These changes improve the reliability and robustness of the model planning process, allowing it to handle a wider range of hardware configurations and model sizes.
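A hedged sketch of the hybrid layer search and the power-of-2 context rounding described above, assuming a uniform per-layer VRAM cost and an illustrative minimum context length; neither matches the planner's real cost model or constants:

```rust
const MIN_CONTEXT_LENGTH: u64 = 2048; // assumed minimum, for illustration

/// Round a context length down to the nearest power of two,
/// but never below the enforced minimum.
fn round_context_pow2(ctx: u64) -> u64 {
    let rounded = if ctx.is_power_of_two() {
        ctx
    } else {
        // next_power_of_two() / 2 gives the previous power of two.
        ctx.next_power_of_two() / 2
    };
    rounded.max(MIN_CONTEXT_LENGTH)
}

/// Walk GPU layer counts from all layers down to zero and keep the highest
/// count whose estimated VRAM use still fits; 0 means a CPU-only plan.
fn best_gpu_layers(
    total_layers: u32,
    per_layer_vram_bytes: u64, // assumed uniform per-layer cost
    overhead_bytes: u64,       // KV cache + compute buffers, assumed precomputed
    usable_vram_bytes: u64,
) -> u32 {
    for layers in (0..=total_layers).rev() {
        let needed = per_layer_vram_bytes * layers as u64 + overhead_bytes;
        if needed <= usable_vram_bytes {
            return layers; // highest layer count that fits
        }
    }
    0 // nothing fits: fall back to CPU-only mode
}
```

Searching from the full layer count downward returns the first (largest) configuration that fits, which mirrors the "highest possible GPU usage" goal of the hybrid plan.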
* Add log for raw GPU info from tauri-plugin-hardware
* chore: update linux runner for tauri build
* feat: Improve GPU memory calculation for unified memory
This commit improves the logic for calculating usable VRAM, particularly for systems with **unified memory** like Apple Silicon. Previously, the application would report 0 total VRAM if no dedicated GPUs were found, leading to incorrect calculations and failed model loads.
This change modifies the VRAM calculation to fall back to the total system RAM if no discrete GPUs are detected. This is a common and correct approach for unified memory architectures, where the CPU and GPU share the same memory pool.
Additionally, this commit refactors the logic for calculating usable VRAM and RAM to prevent potential underflow by checking if the total memory is greater than the reserved bytes before subtracting. This ensures the calculation remains safe and correct.
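A minimal sketch of that underflow guard, with an assumed reserve value rather than the plugin's actual one; `saturating_sub` is noted as an equivalent formulation:

```rust
/// Bytes kept back for the OS and other processes
/// (illustrative value, not the plugin's actual reserve).
const RESERVED_BYTES: u64 = 2 * 1024 * 1024 * 1024;

/// Usable memory for model loading: checks before subtracting so the
/// unsigned calculation can never underflow, returning 0 instead.
fn usable_bytes(total_bytes: u64) -> u64 {
    if total_bytes > RESERVED_BYTES {
        total_bytes - RESERVED_BYTES
    } else {
        0
    }
    // Equivalently: total_bytes.saturating_sub(RESERVED_BYTES)
}
```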
* chore: fix update migration version
* fix: enable unified memory support on model support indicator
* Use total_system_memory in bytes
---------
Co-authored-by: Minh141120 <minh.itptit@gmail.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>