* refactor: Improve Llama.cpp backend management and auto-update
This commit refactors the Llama.cpp extension to enhance backend management and streamline the auto-update process.
Key changes include:
Refactored configureBackends: The logic for determining the best available backend and populating settings is now more modular, preventing duplicate executions.
Dedicated Auto-update Handling: Introduced a handleAutoUpdate method to encapsulate the auto-update logic, including downloading the latest available backend and updating the internal configuration and settings.
Robust Old Backend Cleanup: The removeOldBackends method is improved to ensure only the currently used backend version and type are kept, effectively managing disk space. A delay is added for Windows to prevent file conflicts during cleanup.
Final Installation Check: An ensureFinalBackendInstallation method is added to guarantee the selected backend is installed, acting as a final safeguard after auto-update or if auto-update is disabled. The sketch below illustrates how these methods fit together.
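A minimal sketch of how these methods can fit together, not the extension's actual code: the helpers pickBestBackend, fetchLatestBackend, downloadBackend, listInstalledBackends, deleteBackend, and updateSettings are hypothetical stand-ins, while isBackendInstalled is the real method referenced in a later commit.

```typescript
// Hypothetical helper signatures; the real extension's internals differ.
type Backend = { type: string; version: string }

declare function pickBestBackend(): Promise<Backend> // probe CPU/GPU features
declare function fetchLatestBackend(type: string): Promise<Backend>
declare function isBackendInstalled(b: Backend): Promise<boolean>
declare function downloadBackend(b: Backend): Promise<void>
declare function listInstalledBackends(): Promise<Backend[]>
declare function deleteBackend(b: Backend): Promise<void>
declare function updateSettings(b: Backend): Promise<void>

class BackendManager {
  private configured = false
  private active?: Backend

  // Entry point: runs once even if called again (prevents duplicate executions).
  async configureBackends(autoUpdate: boolean): Promise<void> {
    if (this.configured) return
    this.configured = true

    this.active = await pickBestBackend()
    if (autoUpdate) await this.handleAutoUpdate()
    await this.ensureFinalBackendInstallation()
    await this.removeOldBackends()
  }

  // Download the latest available backend and sync config and settings.
  private async handleAutoUpdate(): Promise<void> {
    const latest = await fetchLatestBackend(this.active!.type)
    if (latest.version !== this.active!.version) {
      await downloadBackend(latest)
      this.active = latest
      await updateSettings(latest)
    }
  }

  // Final safeguard: the selected backend must be on disk, whether
  // auto-update ran, failed, or was disabled.
  private async ensureFinalBackendInstallation(): Promise<void> {
    if (!(await isBackendInstalled(this.active!))) {
      await downloadBackend(this.active!)
    }
  }

  // Keep only the backend type/version in use; wait briefly on Windows so
  // lingering file handles are released before deletion.
  private async removeOldBackends(): Promise<void> {
    if (process.platform === 'win32') {
      await new Promise((r) => setTimeout(r, 2000))
    }
    for (const b of await listInstalledBackends()) {
      if (b.type !== this.active!.type || b.version !== this.active!.version) {
        await deleteBackend(b)
      }
    }
  }
}
```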
Minor Fixes:
Added console.log for save_path during decompression for better debugging.
Ensured the output directory exists before decompression in the Rust backend.
Removed extraneous console log for session info.
Updated Cargo.toml and tauri.conf.json versions.
These changes lead to a more reliable and efficient Llama.cpp backend experience within the application, particularly for users with auto-update enabled.
* fix isBackendInstalled parameters
* Address bot's comments
* Address bot comments about using a try/finally block
* fix: Prevent spamming the /health endpoint, improve startup, and resolve compiler warnings
This commit introduces a delay and improved logic to the /health endpoint checks in the llamacpp extension, preventing excessive requests during model loading.
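A sketch of the throttling idea under one simple assumption, a fixed minimum gap between polls; the interval and timeout constants are illustrative, not the extension's actual values.

```typescript
const HEALTH_POLL_INTERVAL_MS = 500 // minimum gap between /health requests
const HEALTH_TIMEOUT_MS = 120_000   // give up if the model never finishes loading

async function waitForHealthy(baseUrl: string): Promise<void> {
  const deadline = Date.now() + HEALTH_TIMEOUT_MS
  while (Date.now() < deadline) {
    const started = Date.now()
    try {
      const res = await fetch(`${baseUrl}/health`)
      if (res.ok) return // server is up and the model is loaded
    } catch {
      // server not accepting connections yet; keep waiting
    }
    // Sleep out the rest of the interval so a fast failure does not
    // become a tight request loop against /health.
    const elapsed = Date.now() - started
    await new Promise((r) => setTimeout(r, Math.max(0, HEALTH_POLL_INTERVAL_MS - elapsed)))
  }
  throw new Error('llama.cpp server did not become healthy in time')
}
```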
Additionally, it addresses several Rust compiler warnings by:
- Commenting out an unused `handle_app_quit` function in `src/core/mcp.rs`.
- Explicitly declaring `target_port`, `session_api_key`, and `buffered_body` as mutable in `src/core/server.rs`.
- Commenting out unused `tokio` imports in `src/core/setup.rs`.
- Enhancing the `load_llama_model` function in `src/core/utils/extensions/inference_llamacpp_extension/server.rs` to better monitor stdout/stderr for readiness and errors, and to handle timeouts (the pattern is sketched after this list).
- Commenting out an unused `std::path::Prefix` import and adjusting `normalize_path` in `src/core/utils/mod.rs`.
- Updating the application version to 0.6.904 in `tauri.conf.json`.
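The `load_llama_model` change itself is in Rust; the readiness-watch pattern it follows is sketched below in TypeScript (Node's child_process) to stay consistent with the other sketches. The readiness marker string and the error heuristic are assumptions, not the actual log lines.

```typescript
import { spawn } from 'node:child_process'

// Resolve when a ready marker appears on stdout/stderr, reject on early
// process exit or timeout. Marker and error patterns are assumptions.
function waitForServerReady(cmd: string, args: string[], timeoutMs = 60_000): Promise<void> {
  return new Promise((resolve, reject) => {
    const child = spawn(cmd, args)
    const timer = setTimeout(() => {
      child.kill()
      reject(new Error('timed out waiting for llama.cpp server readiness'))
    }, timeoutMs)

    const scan = (chunk: Buffer) => {
      const text = chunk.toString()
      if (text.includes('server listening')) { // assumed readiness marker
        clearTimeout(timer)
        resolve()
      } else if (/error/i.test(text)) { // surface startup errors early
        clearTimeout(timer)
        child.kill()
        reject(new Error(text.trim()))
      }
    }
    child.stdout.on('data', scan)
    child.stderr.on('data', scan)
    child.on('exit', (code) => {
      clearTimeout(timer)
      reject(new Error(`server exited early with code ${code}`))
    })
  })
}
```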
* fix grammar!
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* fix grammar 2
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* re-import std::path::Prefix, but only on Windows
* remove instead of commenting
* remove redundant check
* sync app version in Cargo.toml with tauri.conf.json
---------
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* wip
* update
* add download logic
* add decompress. support delete file
* download backend upon selecting setting
* add some logging and notes
* add note on race condition
* remove then/catch
* default to none backend. only download if it's not installed
* merge version and backend. fetch version from GH
* restrict scope of output_dir
* add note on unpack
* add pull and abortPull
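A sketch of what pull and abortPull can look like with one AbortController per in-flight download; the modelId keying and the persistence step are assumptions about the surrounding code.

```typescript
const pulls = new Map<string, AbortController>()

async function pull(modelId: string, url: string): Promise<void> {
  const controller = new AbortController()
  pulls.set(modelId, controller)
  try {
    const res = await fetch(url, { signal: controller.signal })
    if (!res.ok) throw new Error(`download failed: ${res.status}`)
    const data = new Uint8Array(await res.arrayBuffer())
    // ...write `data` under the models directory...
  } finally {
    pulls.delete(modelId) // always clean up, even on abort (the try/finally noted above)
  }
}

function abortPull(modelId: string): void {
  pulls.get(modelId)?.abort() // rejects the in-flight fetch with an AbortError
}
```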
* add model import (download only)
* write model.yaml. support local model import
* remove cortex-related command
* add TODO
* remove cortex-related command