Implemented an `isAbsolutePath` helper to correctly identify POSIX, Windows drive‑letter, and UNC absolute paths. Updated `planModelLoad` to automatically resolve relative model and mmproj paths against the Jan data folder, so users can supply non‑absolute paths directly. Also refined minor formatting for readability.
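A minimal sketch of such a helper, covering the three path families named above (the shipped implementation may handle more edge cases):

```ts
// Sketch only: POSIX, Windows drive-letter, and UNC detection.
function isAbsolutePath(p: string): boolean {
  if (p.startsWith('/')) return true // POSIX, e.g. /home/user/model.gguf
  if (/^[A-Za-z]:[\\/]/.test(p)) return true // drive letter, e.g. C:\models\a.gguf
  if (p.startsWith('\\\\')) return true // UNC, e.g. \\server\share\a.gguf
  return false
}
```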
The flag `noOffloadMmproj` was misleading – it actually indicates when the mmproj file **is** offloaded to VRAM. Renaming it to `offloadMmproj` clarifies its purpose and aligns the naming with the surrounding code.
Additionally, the `planModelLoad` signature has been reordered to place `mmprojPath` before `requestedCtx`, improving readability and making the optional parameters more intuitive. All related logic, calculations, and log messages have been updated to use the new flag name.
The original calculation used only the `block_count` from the model metadata, which excludes the final LM head and the embedding layer. This caused an underestimation of the total number of layers and consequently an incorrect `layerSize` value. Adding `+2` accounts for these two missing layers, ensuring accurate model size metrics.
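The corrected arithmetic, as a small illustration (names are illustrative):

```ts
// block_count counts only transformer blocks; +2 adds the embedding
// layer and the LM head so layerSize reflects the whole model.
function getLayerSize(modelSizeBytes: number, blockCount: number): number {
  return modelSizeBytes / (blockCount + 2)
}
```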
Added a GPU memory check using `getSystemInfo` to ensure Vulkan is selected only on systems with at least 6 GB of VRAM.
* Made `determineBestBackend` asynchronous and updated all callers to `await` it.
* Adjusted backend priority list to include or demote Vulkan based on the memory check.
* Updated Vulkan support detection in `backend.ts` to rely solely on API version (memory check moved to selection logic).
* Imported `getSystemInfo` and refined file‑existence validation.
These changes prevent suboptimal Vulkan usage on low‑memory GPUs and improve backend‑selection reliability; a sketch of the memory gate follows.
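A rough sketch of that gate, assuming a plausible shape for `getSystemInfo`'s result (field names and units are assumptions):

```ts
declare function getSystemInfo(): Promise<{ gpus?: { total_memory: number }[] }>

const MIN_VULKAN_VRAM_MB = 6 * 1024 // the 6 GB threshold described above

async function vulkanHasEnoughVram(): Promise<boolean> {
  const info = await getSystemInfo()
  // Sum VRAM across GPUs; total_memory is assumed to be reported in MB.
  const totalVramMb = (info.gpus ?? []).reduce((sum, g) => sum + g.total_memory, 0)
  return totalVramMb >= MIN_VULKAN_VRAM_MB
}
```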
- Added `src-tauri/resources/` to `.gitignore`.
- Introduced utilities to read locally installed backends (`getLocalInstalledBackends`) and fetch remote supported backends (`fetchRemoteSupportedBackends`).
- Refactored `listSupportedBackends` to merge remote and local entries with deduplication and proper sorting (see the sketch after this list).
- Exported `getBackendDir` and integrated it into the extension.
- Added helper `parseBackendVersion` and new method `checkBackendForUpdates` to detect newer backend versions.
- Implemented `installBackend` for manual backend archive installation, including platform‑specific binary path handling.
- Updated command‑line argument logic for `--flash-attn` to respect version‑specific defaults.
- Modified Tauri filesystem `decompress` command to remove overly strict path validation.
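The merge step, as a hypothetical sketch (the dedup key and sort order are assumptions):

```ts
interface BackendEntry {
  version: string
  backend: string
}

// Later entries win on key collisions, so local entries override remote ones.
function mergeBackends(remote: BackendEntry[], local: BackendEntry[]): BackendEntry[] {
  const byKey = new Map<string, BackendEntry>()
  for (const entry of [...remote, ...local]) {
    byKey.set(`${entry.version}/${entry.backend}`, entry)
  }
  return [...byKey.values()].sort((a, b) => b.version.localeCompare(a.version))
}
```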
* feat: Smart model management
* **New UI option** – `memory_util` added to `settings.json` with a dropdown (high / medium / low) to let users control how aggressively the engine uses system memory.
* **Configuration updates** – `LlamacppConfig` now includes `memory_util`; the extension class stores it in a new `memoryMode` property and handles updates through `updateConfig`.
* **System memory handling**
* Introduced `SystemMemory` interface and `getTotalSystemMemory()` to report combined VRAM + RAM.
* Added helper methods `getKVCachePerToken`, `getLayerSize`, and a new `ModelPlan` type.
* **Smart model‑load planner** – `planModelLoad()` (sketched after this list) computes:
* Number of GPU layers that can fit in usable VRAM.
* Maximum context length based on KV‑cache size and the selected memory utilization mode (high/medium/low).
* Whether KV‑cache must be off‑loaded to CPU and the overall loading mode (GPU, Hybrid, CPU, Unsupported).
* Detailed logging of the planning decision.
* **Improved support check** – `isModelSupported()` now:
* Uses the combined VRAM/RAM totals from `getTotalSystemMemory()`.
* Applies an 80% usable‑memory heuristic.
* Returns **GREEN** only when both weights and KV‑cache fit in VRAM, **YELLOW** when they fit only in total memory or require CPU off‑load, and **RED** when the model cannot fit at all.
* **Cleanup** – Removed unused `GgufMetadata` import; updated imports and type definitions accordingly.
* **Documentation/comments** – Added explanatory JSDoc comments for the new methods and clarified the return semantics of `isModelSupported`.
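Condensed into a sketch (names and thresholds follow the bullets above; everything else is an assumption, not the shipped code):

```ts
type LoadMode = 'GPU' | 'Hybrid' | 'CPU' | 'Unsupported'

interface ModelPlan {
  gpuLayers: number // GPU layers that fit in usable VRAM
  maxContextLength: number // bounded by KV-cache size and memory_util mode
  offloadKVCacheToCPU: boolean // true when the KV-cache cannot stay in VRAM
  mode: LoadMode
}

const USABLE_MEMORY_PERCENTAGE = 0.8 // the 80% usable-memory heuristic

function supportStatus(
  weightsBytes: number,
  kvCacheBytes: number,
  vramBytes: number,
  totalBytes: number
): 'GREEN' | 'YELLOW' | 'RED' {
  const need = weightsBytes + kvCacheBytes
  if (need <= vramBytes * USABLE_MEMORY_PERCENTAGE) return 'GREEN' // all in VRAM
  if (need <= totalBytes * USABLE_MEMORY_PERCENTAGE) return 'YELLOW' // needs RAM / CPU off-load
  return 'RED' // cannot fit at all
}
```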
* chore: migrate no_kv_offload from llamacpp setting to model setting
* chore: add UI auto optimize model setting
* feat: improve model loading planner with mmproj support and smarter memory budgeting
* Extend `ModelPlan` with optional `noOffloadMmproj` flag to indicate when a multimodal projector can stay in VRAM.
* Add `mmprojPath` parameter to `planModelLoad` and calculate its size, attempting to keep it on GPU when possible.
* Refactor system memory detection:
* Use `used_memory` (actual free RAM) instead of total RAM for budgeting.
* Introduce a `usableRAM` placeholder for future use.
* Rewrite KV‑cache size calculation:
* Properly handle GQA models via `attention.head_count_kv`.
* Compute bytes per token as `nHeadKV * headDim * 2 * 2 * nLayer` – one factor of 2 for the K and V tensors, the other for 2‑byte f16 elements (see the sketch after this list).
* Replace the old 70% VRAM heuristic with a more flexible budget:
* Reserve a fixed VRAM amount and apply an overhead factor.
* Derive usable system RAM from total memory minus VRAM.
* Implement a robust allocation algorithm:
* Prioritize placing the mmproj in VRAM.
* Search for the best balance of GPU layers and context length.
* Fallback strategies for hybrid and pure‑CPU modes with detailed safety checks.
* Add extensive validation of model size, KV‑cache size, layer size, and memory mode.
* Improve logging throughout the planning process for easier debugging.
* Adjust final plan return shape to include the new `noOffloadMmproj` field.
* remove unused variable
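The per-token KV-cache formula from the planner list above, as a sketch (GGUF key names are assumptions based on common conventions):

```ts
function kvCacheBytesPerToken(meta: Record<string, number>): number {
  const nLayer = meta['block_count']
  const nHead = meta['attention.head_count']
  const nHeadKV = meta['attention.head_count_kv'] ?? nHead // GQA-aware fallback
  const headDim = meta['embedding_length'] / nHead
  // 2 tensors (K and V) x 2 bytes per f16 element, per layer:
  return nHeadKV * headDim * 2 * 2 * nLayer
}
```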
---------
Co-authored-by: Faisal Amir <urmauur@gmail.com>
* fix: Use 80% total memory for compatibility check
* refactor: extract usable memory percentage to named constant
Extract the hardcoded 0.8 multiplier into a named constant
USABLE_MEMORY_PERCENTAGE for better readability and maintainability.
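The extracted constant and a representative use, per the message above:

```ts
const USABLE_MEMORY_PERCENTAGE = 0.8

function usableMemory(totalBytes: number): number {
  return totalBytes * USABLE_MEMORY_PERCENTAGE
}
```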
* feat: Add model compatibility check and memory estimation
This commit introduces a new feature to check if a given model is supported based on available device memory.
The change includes:
- A new `estimateKVCache` method that calculates the required memory for the model's KV cache. It uses GGUF metadata such as `block_count`, `head_count`, `key_length`, and `value_length` to perform the calculation.
- An `isModelSupported` method that combines the model file size and the estimated KV cache size to determine the total memory required. It then checks if any available device has sufficient free memory to load the model (sketched below).
- An updated error message for the `version_backend` check to be more user-friendly, suggesting a stable internet connection as a potential solution for backend setup failures.
This functionality helps prevent the application from attempting to load models that would exceed the device's memory capacity, leading to more stable and predictable behavior.
fixes: #5505
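A sketch of that check; the helper signatures are assumptions, not the real extension API:

```ts
declare function getModelFileSize(modelPath: string): Promise<number>
declare function estimateKVCache(modelPath: string, ctxSize: number): Promise<number>
declare function listDevices(): Promise<{ freeMemoryBytes: number }[]>

async function isModelSupported(modelPath: string, ctxSize: number): Promise<boolean> {
  const required =
    (await getModelFileSize(modelPath)) + (await estimateKVCache(modelPath, ctxSize))
  // Supported if any single device can hold weights plus KV cache.
  return (await listDevices()).some((d) => d.freeMemoryBytes >= required)
}
```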
* Update extensions/llamacpp-extension/src/index.ts
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* Update extensions/llamacpp-extension/src/index.ts
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* Extend this to available system RAM if GGML device is not available
* fix: Improve model metadata and memory checks
This commit refactors the logic for checking if a model is supported by a system's available memory.
**Key changes:**
- **Remote model support**: The `read_gguf_metadata` function can now fetch metadata from a remote URL by reading the file in chunks.
- **Improved KV cache size calculation**: The KV cache size is now estimated more accurately by using `attention.key_length` and `attention.value_length` from the GGUF metadata, with a fallback to `embedding_length`.
- **Granular memory check statuses**: The `isModelSupported` function now returns a more specific status (`'RED'`, `'YELLOW'`, `'GREEN'`) to indicate whether the model weights or the KV cache are too large for the available memory.
- **Consolidated logic**: The logic for checking local and remote models has been consolidated into a single `isModelSupported` function, improving code clarity and maintainability.
These changes provide more robust and informative model compatibility checks, especially for models hosted on remote servers.
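For the remote-metadata change, chunked reading maps naturally onto HTTP Range requests; a TypeScript illustration only (the shipped reader lives in the Rust backend):

```ts
async function fetchChunk(url: string, offset: number, length: number): Promise<Uint8Array> {
  const res = await fetch(url, {
    headers: { Range: `bytes=${offset}-${offset + length - 1}` },
  })
  if (!res.ok) throw new Error(`Range request failed: ${res.status}`)
  return new Uint8Array(await res.arrayBuffer())
}
```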
* Update extensions/llamacpp-extension/src/index.ts
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* Make ctx_size optional and use the summed free memory across GGML devices
* feat: hub and dropdown model selection handle model compatibility
* feat: update badge model info color
* chore: enable detail page to get model compatibility
* chore: update copy
* chore: update shrink indicator UI
---------
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
This commit improves the clarity of the llama.cpp extension's examples and error messages.
- Corrected a placeholder example from `GGML_VK_VISIBLE_DEVICES='0,1'` to `GGML_VK_VISIBLE_DEVICES=0,1` for better accuracy.
- Changed an ambiguous error message from `"Failed to load llama-server: ${error}"` to the more specific `"Failed to load llamacpp backend"`.
This commit adds a new setting `llamacpp_env` to the llama.cpp extension, allowing users to specify custom environment variables. These variables are passed to the backend process when it starts.
A new function `parseEnvFromString` is introduced to handle the parsing of the semicolon-separated key-value pairs from the user input. The environment variables are then used in the `load` function and when listing available devices. This enables more flexible configuration of the llama.cpp backend, such as specifying visible GPUs for Vulkan.
This change also updates the Tauri command `get_devices` to accept environment variables, ensuring that device discovery respects the user's settings.
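A sketch of the semicolon-separated parsing described above (the shipped `parseEnvFromString` may handle more edge cases):

```ts
function parseEnvFromString(input: string): Record<string, string> {
  const envs: Record<string, string> = {}
  for (const pair of input.split(';')) {
    const idx = pair.indexOf('=')
    if (idx <= 0) continue // skip entries without a key
    const key = pair.slice(0, idx).trim()
    const value = pair.slice(idx + 1).trim()
    if (key) envs[key] = value
  }
  return envs
}

// e.g. parseEnvFromString('GGML_VK_VISIBLE_DEVICES=0,1')
```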
This commit adds a GGUF validation check for both the main model file and the `mmproj` file (if present) before they are loaded. This prevents the extension from crashing if an invalid GGUF file is provided.
The `GgufMetadata` interface and `loadMetadata` function were removed, as `readGgufMetadata` is now invoked directly. The code has also been refactored for readability, with clearer variable names and more descriptive comments.
This change modifies how the API key is passed to the llama-server process. Previously, it was sent as a command line argument (--api-key). This approach has been updated to pass the key via an environment variable (LLAMA_API_KEY).
This improves security by preventing the API key from being visible in the process list (ps aux on Linux, Task Manager on Windows, etc.), where it could potentially be exposed to other users or processes on the same system.
The commit also updates the Rust backend to read the API key from the environment variable instead of parsing it from the command line arguments.
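In TypeScript terms the change looks roughly like this; the command name and payload shape are hypothetical, and the import path assumes Tauri v2:

```ts
import { invoke } from '@tauri-apps/api/core'

declare const serverArgs: string[] // built without --api-key
declare const apiKey: string

// The key travels in the child environment, not argv, so it never
// appears in the process list.
await invoke('load_llama_server', {
  args: serverArgs,
  envs: { LLAMA_API_KEY: apiKey },
})
```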
This commit introduces a new configuration option `offload_mmproj` to the llamacpp extension.
The `offload_mmproj` setting allows users to control whether the multimodal projector model is offloaded to the GPU. By default, it's offloaded for better performance. If set to `false`, the projector model will remain on the CPU, which can be useful in low GPU memory scenarios, though image processing might take longer.
Additionally, this commit adds `validate_mmproj_path` to ensure the provided `--mmproj` path is valid and accessible, preventing issues during model loading.
This change also refactors some invoke calls for improved readability.
* feat: Add GGUF metadata reading functionality
This commit introduces a new Tauri command and a corresponding function to read metadata from GGUF model files.
The new `read_gguf_metadata` command in the Rust backend uses the `byteorder` crate to parse the GGUF file format and extract key metadata. This information, including the file's version, tensor count, and a key-value map of other metadata, is then made available to the TypeScript frontend.
This functionality is a foundational step toward providing users with more detailed information about their loaded models directly within the application.
This will be refactored later.
fixes: #6001
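A hypothetical frontend call to the new command; the returned shape mirrors the fields named above, while the argument key and import path (Tauri v2) are assumptions:

```ts
import { invoke } from '@tauri-apps/api/core'

interface GgufMetadata {
  version: number
  tensor_count: number
  metadata: Record<string, unknown>
}

const meta = await invoke<GgufMetadata>('read_gguf_metadata', {
  path: '/path/to/model.gguf',
})
console.log(meta.version, meta.tensor_count)
```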
* loadMetadata() should return
* Properly throw error to FE
* Use BufReader to improve performance
* fix: Improve error message for invalid version/backend format
This commit changes the error message displayed when the `version_backend` configuration is invalid. The new message is more user-friendly and suggests a simple solution, such as restarting the application, which is more helpful to the user than the previous technical error message.
* fix typo