* feat: Introduce structured error handling for llamacpp extension
This commit introduces a structured error handling system for the `llamacpp` extension. Instead of returning simple string errors, we now use a custom `LlamacppError` struct with a specific `ErrorCode` enum. This allows the frontend to display more user-friendly and actionable error messages based on the code, rather than raw debug logs.
The changes include:
- A new `ErrorCode` enum to categorize errors (e.g., `OutOfMemory`, `ModelArchNotSupported`, `BinaryNotFound`).
- A `LlamacppError` struct to encapsulate the code, a user-facing message, and optional detailed logs.
- A static method `from_stderr` that parses llama.cpp's standard error output and maps common failures, such as out-of-memory errors, to a specific error code.
- Refactored `ServerError` enum to wrap the new `LlamacppError` and provide a consistent serialization format for the Tauri frontend.
- Updated all relevant functions (`load_llama_model`, `get_devices`) to return the new structured error type, ensuring a more robust and predictable error flow.
- A reduced model-loading timeout, from 300 to 180 seconds.
This work lays the groundwork for a more intuitive and helpful user experience, as the application can now provide clear guidance to users when a model fails to load; a sketch of how the frontend might consume the new error object follows.
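The field names, wire format, and guidance strings in this sketch are assumptions; only `LlamacppError`, `ErrorCode`, and the variant names come from the notes above.

```typescript
// Hypothetical TypeScript view of the serialized LlamacppError.
// Field names and wire format are assumptions, not confirmed here.
type ErrorCode = 'OutOfMemory' | 'ModelArchNotSupported' | 'BinaryNotFound'

interface LlamacppError {
  code: ErrorCode
  message: string // user-facing message
  details?: string // optional raw logs for debugging
}

// Example of turning an error code into actionable UI guidance.
function describeLoadFailure(err: LlamacppError): string {
  switch (err.code) {
    case 'OutOfMemory':
      return 'Not enough memory to load the model. Try a smaller quantization or a shorter context.'
    case 'ModelArchNotSupported':
      return 'This model architecture is not supported by the current llama.cpp backend.'
    case 'BinaryNotFound':
      return 'The llama.cpp server binary was not found. Try reinstalling the backend.'
    default:
      return err.message
  }
}
```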
* Update src-tauri/src/core/utils/extensions/inference_llamacpp_extension/server.rs
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* Update src-tauri/src/core/utils/extensions/inference_llamacpp_extension/server.rs
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* chore: update FE to handle error object from extension
* chore: fix property type
---------
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
* feat: Add support for overriding tensor buffer type
This commit introduces a new configuration option, `override_tensor_buffer_t`, which allows users to specify a regex for matching tensor names to override their buffer type. This is an advanced setting primarily useful for optimizing the performance of large models, particularly Mixture of Experts (MoE) models.
By overriding the tensor buffer type, users can keep critical parts of the model, like the attention layers, on the GPU while offloading other parts, such as the expert feed-forward networks, to the CPU. This can lead to significant speed improvements for massive models.
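For illustration, llama.cpp exposes this capability through its `--override-tensor` flag, which accepts `pattern=buffer-type` pairs. A minimal sketch of how the new setting might be forwarded to the server process follows; the helper name and wiring are hypothetical, not the extension's actual code.

```typescript
// Illustrative sketch: forwarding override_tensor_buffer_t to llama.cpp's
// --override-tensor flag. The helper and its wiring are hypothetical.
function buildServerArgs(
  modelPath: string,
  overrideTensorBufferT?: string
): string[] {
  const args = ['-m', modelPath]
  if (overrideTensorBufferT?.trim()) {
    // e.g. 'ffn_.*_exps.*=CPU' keeps an MoE model's expert feed-forward
    // tensors on the CPU while attention layers stay on the GPU.
    args.push('--override-tensor', overrideTensorBufferT)
  }
  return args
}
```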
Additionally, this change makes the load-failure error message more accurate: the previous "Failed to load llama-server" has been updated to "Failed to load model".
* chore: update FE to support override-tensor
---------
Co-authored-by: Faisal Amir <urmauur@gmail.com>
* fix: generate a response button should appear when an incomplete tool call message is present
* fix: wording
* fix: do not send duplicate messages on regenerating
* fix: tests
* fix: assistant with last used and fix metadata
* chore: revert instruction and desc
* chore: fix current assistant state
* chore: update assistant message metadata
* chore: update test case
* fix: migrate app settings to the new version
* fix: edge cases
* fix: migrate HF import model on Windows
* fix: hardware page broken after downgrade
* test: correct test
* fix: backward compatible hardware info
* feat: Enhance Llama.cpp backend management with persistence
This commit introduces significant improvements to how the Llama.cpp extension manages and updates its backend installations, focusing on user preference persistence and smarter auto-updates.
Key changes include:
* **Persistent Backend Type Preference:** The extension now stores the user's preferred backend type (e.g., `cuda`, `cpu`, `metal`) in `localStorage`. This ensures that even after updates or restarts, the system attempts to use the user's previously selected backend type, if available.
* **Intelligent Auto-Update:** The auto-update mechanism has been refined to prioritize updating to the **latest version of the *currently selected backend type*** rather than always defaulting to the "best available" backend (which might change). This respects user choice while keeping the chosen backend type up-to-date.
* **Improved Initial Installation/Configuration:** For fresh installations or cases where the `version_backend` setting is invalid, the system now intelligently determines and installs the best available backend, then persists its type.
* **Refined Old Backend Cleanup:** The `removeOldBackends` function has been renamed to `removeOldBackend` and modified to specifically clean up *older versions of the currently selected backend type*, preventing the accumulation of unnecessary files while preserving other backend types the user might switch to.
* **Robust Local Storage Handling:** New private methods (`getStoredBackendType`, `setStoredBackendType`, `clearStoredBackendType`) are introduced to safely interact with `localStorage`, including error handling for potential `localStorage` access issues.
* **Version Filtering Utility:** A new utility `findLatestVersionForBackend` helps in identifying the latest available version for a specific backend type from a list of supported backends.
These changes provide a more stable, user-friendly, and maintainable backend management experience for the Llama.cpp extension; a sketch of the storage helpers and version filter follows.
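The storage key and the `version/backend` entry shape below are assumptions for illustration.

```typescript
// Minimal sketch of the safe localStorage helpers described above.
// The key name and entry format are assumptions for illustration.
const BACKEND_TYPE_KEY = 'llamacpp_backend_type' // hypothetical key

function getStoredBackendType(): string | null {
  try {
    return localStorage.getItem(BACKEND_TYPE_KEY)
  } catch {
    return null // storage may be unavailable; treat the preference as unset
  }
}

function setStoredBackendType(backendType: string): void {
  try {
    localStorage.setItem(BACKEND_TYPE_KEY, backendType)
  } catch {
    // write failures are non-fatal; the preference simply won't persist
  }
}

function clearStoredBackendType(): void {
  try {
    localStorage.removeItem(BACKEND_TYPE_KEY)
  } catch {
    // ignore; nothing to clear if storage is unavailable
  }
}

// Pick the latest version for a backend type from entries assumed to look
// like 'b4567/cuda' (version/backend), relying on lexicographic order.
function findLatestVersionForBackend(
  supportedBackends: string[],
  backendType: string
): string | undefined {
  return supportedBackends
    .map((entry) => entry.split('/'))
    .filter(([, type]) => type === backendType)
    .map(([version]) => version)
    .sort()
    .pop()
}
```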
Fixes: #5883
* fix: cortex models migration should be done once
* feat: Optimize Llama.cpp backend preference storage and UI updates
This commit refines the Llama.cpp extension's backend management by:
* **Optimizing `localStorage` Writes:** The system now only writes the backend type preference to `localStorage` if the new value is different from the currently stored one. This reduces unnecessary `localStorage` operations.
* **Ensuring UI Consistency on Initial Setup:** When a fresh installation or an invalid backend configuration is detected, the UI settings are now explicitly updated to reflect the newly determined `effectiveBackendString`, ensuring the displayed setting matches the active configuration.
These changes improve performance by reducing redundant storage operations and keep the UI in sync with the backend state; a sketch of the deduplicated write follows.
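A hedged sketch of the write-deduplication, reusing the hypothetical key from the earlier sketch:

```typescript
const BACKEND_TYPE_KEY = 'llamacpp_backend_type' // hypothetical key

// Only touch localStorage when the preference actually changed.
function setStoredBackendType(backendType: string): void {
  try {
    if (localStorage.getItem(BACKEND_TYPE_KEY) !== backendType) {
      localStorage.setItem(BACKEND_TYPE_KEY, backendType)
    }
  } catch {
    // non-fatal; the preference just won't persist this session
  }
}
```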
* Revert "fix: provider settings should be refreshed on page load (#5887)"
This reverts commit ce6af62c7df4a7e7ea8c0896f307309d6bf38771.
* fix: add loader for llamacpp version backend
* fix: wrong key name
* fix: model setting issues
* fix: virtual dom hub
* chore: cleanup
* chore: hide device offload setting
---------
Co-authored-by: Louis <louis@jan.ai>
Co-authored-by: Faisal Amir <urmauur@gmail.com>