* refactor: move session management & port allocation to backend
- Remove the in‑process `activeSessions` map and its cleanup logic from the TypeScript side.
- Introduce new Tauri commands in Rust:
- `get_random_port` – picks an unused port using a seeded RNG and checks availability (sketched below).
- `find_session_by_model` – returns the `SessionInfo` for a given model ID.
- `get_loaded_models` – returns a list of currently loaded model IDs.
- Update the extension’s TypeScript code to use these commands via `invoke`:
- `findSessionByModel`, `load`, `unload`, `chat`, `getLoadedModels`, and `embed` now operate asynchronously and query the backend.
- Remove the old `is_port_available` command and the custom port‑checking loop.
- Simplify `onUnload` – session termination is now handled by the backend.
- Drop unused helpers (`sleep`, `waitForModelLoad`) and related port‑availability code.
- Add missing Rust imports (`rand::{StdRng,Rng,SeedableRng}`, `HashSet`) and improve error handling.
- Register the new commands in `src-tauri/src/lib.rs` (replace `is_port_available` with the three new commands).
This refactor centralises session state and port allocation in the Rust backend, eliminates duplicated logic, and resolves race conditions around model loading and session cleanup.
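A minimal sketch of how the port-picking command could look, assuming rand 0.8-style APIs and a plain bind probe; the real command also excludes ports held by active sessions (the `HashSet` mentioned above), which is omitted here:

```rust
// Hypothetical sketch of get_random_port; range, retry cap, and error
// text are illustrative, not the actual implementation.
use rand::{rngs::StdRng, Rng, SeedableRng};
use std::net::TcpListener;

#[tauri::command]
fn get_random_port() -> Result<u16, String> {
    let mut rng = StdRng::from_entropy();
    for _ in 0..100 {
        // Pick from a high range and verify the OS will let us bind it.
        let port: u16 = rng.gen_range(10000..60000);
        if TcpListener::bind(("127.0.0.1", port)).is_ok() {
            return Ok(port);
        }
    }
    Err("no available port found".to_string())
}
```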
* Use String(e) for error
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
---------
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* Improve Llama.cpp model path handling and validation
This commit refactors the `load_llama_model` function to improve how it handles and validates the model path.
Previously, the function extracted the model path but did not perform any validation. This change adds the following improvements (sketched below):
- It now checks for the presence of the `-m` flag.
- It verifies that a path is provided after the `-m` flag.
- It validates that the specified model path actually exists on the filesystem.
- It ensures that the `SessionInfo` struct stores the canonicalized path of the model, which is more robust.
These changes make the model loading process more reliable and provide better error handling for invalid or missing model paths.
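For illustration, a hedged sketch of the validation steps listed above; the helper name and error messages are assumptions, not the actual implementation:

```rust
// Hypothetical helper mirroring the checks described in this commit.
use std::path::Path;

fn extract_model_path(args: &[String]) -> Result<String, String> {
    // 1. The -m flag must be present.
    let idx = args
        .iter()
        .position(|a| a == "-m")
        .ok_or("missing -m flag")?;
    // 2. A path must follow the flag.
    let path_str = args.get(idx + 1).ok_or("no path provided after -m")?;
    // 3. The path must exist on the filesystem.
    let path = Path::new(path_str);
    if !path.exists() {
        return Err(format!("model path does not exist: {path_str}"));
    }
    // 4. Store the canonical form so later comparisons are robust.
    let canonical = path.canonicalize().map_err(|e| e.to_string())?;
    Ok(canonical.display().to_string())
}
```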
* Exp: Use short path on Windows
* Fix: Remove error channel and handling in llama.cpp server loading
The previous implementation used a channel to receive error messages from the llama.cpp server's stdout. However, this proved unreliable because path names can contain the 'error' strings we check for even during normal operation. This commit removes the error channel and the associated error-handling logic.
The server readiness is still determined by checking for the "server is listening" message in stdout. Errors are now handled by relying on the process exit code and capturing the full stderr output if the process fails to start or exits unexpectedly. This approach provides a more robust and accurate error detection mechanism.
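A rough sketch of this flow using tokio; only the "server is listening" marker comes from the commit, the rest is illustrative:

```rust
// Wait for the readiness message on stdout; if the stream closes first,
// fall back to the exit status (stderr is captured separately).
use tokio::io::{AsyncBufReadExt, BufReader};
use tokio::process::Child;

async fn wait_until_ready(child: &mut Child) -> Result<(), String> {
    let stdout = child.stdout.take().ok_or("stdout not piped")?;
    let mut lines = BufReader::new(stdout).lines();
    while let Ok(Some(line)) = lines.next_line().await {
        if line.contains("server is listening") {
            return Ok(()); // server is ready to accept requests
        }
    }
    // stdout closed without the marker: the process died, so surface
    // the exit code instead of guessing from log strings.
    let status = child.wait().await.map_err(|e| e.to_string())?;
    Err(format!("llama-server exited early with status {status}"))
}
```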
* Add else block in Windows path handling
* Add some path related tests
* Fix windows tests
* feat: Improve llama.cpp argument handling and add device parsing tests
This commit refactors how arguments are passed to llama.cpp,
specifically by only adding arguments when their values differ from
their defaults. This reduces the verbosity of the command and prevents
potential conflicts or errors when llama.cpp's default behavior aligns
with the desired setting.
Additionally, new tests have been added for parsing device output from
llama.cpp, ensuring the accurate extraction of GPU information (ID,
name, total memory, and free memory). This improves the robustness of
device detection.
The following changes were made (the conditional pattern is sketched after this list):
* **Remove redundant `--ctx-size` argument:** The `--ctx-size`
argument is now only explicitly added if `cfg.ctx_size` is greater
than 0.
* **Conditional argument adding for default values:**
* `--split-mode` is only added if `cfg.split_mode` is not empty
and not 'layer'.
* `--main-gpu` is only added if `cfg.main_gpu` is not undefined
and not 0.
* `--cache-type-k` is only added if `cfg.cache_type_k` is not 'f16'.
* `--cache-type-v` is only added if `cfg.cache_type_v` is not 'f16'
(when `flash_attn` is enabled) or not 'f32' (otherwise). This
also corrects the `flash_attn` condition.
* `--defrag-thold` is only added if `cfg.defrag_thold` is not 0.1.
* `--rope-scaling` is only added if `cfg.rope_scaling` is not
'none'.
* `--rope-scale` is only added if `cfg.rope_scale` is not 1.
* `--rope-freq-base` is only added if `cfg.rope_freq_base` is not 0.
* `--rope-freq-scale` is only added if `cfg.rope_freq_scale` is
not 1.
* **Add `parse_device_output` tests:** Comprehensive unit tests were
added to `src-tauri/src/core/utils/extensions/inference_llamacpp_extension/server.rs`
to validate the parsing of llama.cpp device output under various
scenarios, including multiple devices, single devices, different
backends (CUDA, Vulkan, SYCL), complex GPU names, and error
conditions.
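For illustration, a hedged Rust sketch of the non-default-only pattern using three of the flags above; the actual argument assembly and config field types may differ:

```rust
// Only emit a flag when the configured value differs from llama.cpp's
// default; field names follow the cfg.* names used in this commit.
struct Cfg {
    ctx_size: u32,
    split_mode: String,
    defrag_thold: f32,
}

fn build_args(cfg: &Cfg) -> Vec<String> {
    let mut args = Vec::new();
    if cfg.ctx_size > 0 {
        args.extend(["--ctx-size".into(), cfg.ctx_size.to_string()]);
    }
    // 'layer' is the default split mode, so it is omitted.
    if !cfg.split_mode.is_empty() && cfg.split_mode != "layer" {
        args.extend(["--split-mode".into(), cfg.split_mode.clone()]);
    }
    // 0.1 is the default defrag threshold.
    if (cfg.defrag_thold - 0.1).abs() > f32::EPSILON {
        args.extend(["--defrag-thold".into(), cfg.defrag_thold.to_string()]);
    }
    args
}
```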
* fixup cache_type_v comparison
* Fix: Llama.cpp server hangs on model load
Resolves an issue where the llama.cpp server would hang indefinitely when loading certain models, as described in the attached ticket. The server's readiness message was not being correctly detected, causing the application to stall.
The previous implementation used a line-buffered reader (BufReader::lines()) to process the stderr stream. This method proved to be unreliable for the specific output of the llama.cpp server.
This commit refactors the stderr handling logic to use a more robust, chunk-based approach (read_until(b'\n', ...)). This ensures that the output is processed as it arrives, reliably capturing critical status messages and preventing the application from hanging during model initialization.
Fixes: #6021
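A sketch of the chunk-based reading this commit describes (note the commit is reverted further down); `read_until(b'\n', ...)` works on raw bytes, so output that is not valid UTF-8 (which makes a line-based `String` reader error out) cannot stall the scan, and a final unterminated line is still delivered at EOF:

```rust
// Scan stderr chunk by chunk instead of relying on a line-of-String reader.
use tokio::io::{AsyncBufReadExt, BufReader};
use tokio::process::ChildStderr;

async fn scan_stderr(stderr: ChildStderr) -> std::io::Result<bool> {
    let mut reader = BufReader::new(stderr);
    let mut buf = Vec::new();
    loop {
        buf.clear();
        let n = reader.read_until(b'\n', &mut buf).await?;
        if n == 0 {
            return Ok(false); // EOF before the readiness message appeared
        }
        if String::from_utf8_lossy(&buf).contains("server is listening") {
            return Ok(true); // status message reliably captured
        }
    }
}
```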
* Handle error gracefully with ServerError
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* Revert "Handle error gracefully with ServerError"
This reverts commit 267a8a8a3262fbe36a445a30b8b3ba9a39697643.
* Revert "Fix: Llama.cpp server hangs on model load"
This reverts commit 44e5447f82f0ae32b6db7ffb213025f130d655c4.
* Add more guards, refactor and fix error sending to FE
---------
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* fix: remove CREATE_NEW_PROCESS_GROUP flag for proper Ctrl-C handling
CREATE_NEW_PROCESS_GROUP prevented GenerateConsoleCtrlEvent from working,
causing graceful shutdown failures. Removed to enable proper signal handling.
* Revert "fix: remove CREATE_NEW_PROCESS_GROUP flag for proper Ctrl-C handling"
This reverts commit 82ace3e72e4bf7338f422d5c79bdd6a0f8a2440e.
* fix: use direct process termination instead of console events
Simplified Windows process cleanup by removing console attachment logic
and using direct child.kill() method. More reliable for headless processes.
* Fix missing imports
* switch to tokio::time
* Don't wait when forcefully terminating processes via the kill API on Windows
Disabled use of the `windows-sys` crate, as graceful shutdown on Windows is unreliable in this context.
Updated cleanup.rs and server.rs to directly call child.kill().await for terminating processes on Windows.
Improved logging for process termination and error handling during kill and wait.
Removed timeout-based graceful shutdown attempt on Windows since TerminateProcess is inherently forceful and immediate.
This ensures more predictable process cleanup behavior on Windows platforms.
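A minimal sketch of the direct-termination path, assuming tokio process handles and `log`-style logging:

```rust
// On Windows, kill() maps to TerminateProcess, which is forceful and
// immediate, so no graceful-shutdown wait precedes it.
use tokio::process::Child;

async fn terminate(child: &mut Child) {
    if let Err(e) = child.kill().await {
        log::error!("failed to kill llama-server: {e}");
    }
    // Reap the process and log the outcome either way.
    match child.wait().await {
        Ok(status) => log::info!("llama-server exited with {status}"),
        Err(e) => log::error!("failed to wait for llama-server: {e}"),
    }
}
```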
* final cleanups
This change improves the robustness of the llama.cpp extension's server port selection.
Previously, the `getRandomPort()` method only checked for ports already in use by active sessions, which could lead to model load failures if the chosen port was occupied by an external process.
This change introduces a new Tauri command, `is_port_available`, which performs a system-level check to ensure the randomly selected port is truly free before attempting to start the llama-server. It also adds a retry mechanism with a maximum number of attempts (20,000) to find an available port, throwing an error if no suitable port is found within the specified range after all attempts.
This enhancement prevents port conflicts and improves the reliability and user experience of the llama.cpp extension within Jan.
Closes #5965
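A plausible shape for the system-level check, assuming a plain bind probe; the retry loop with its 20,000-attempt cap lives on the extension side and calls this command per candidate port:

```rust
use std::net::TcpListener;

#[tauri::command]
fn is_port_available(port: u16) -> bool {
    // Binding succeeds only if no other process currently holds the port.
    TcpListener::bind(("127.0.0.1", port)).is_ok()
}
```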
* feat: add support for querying available backend devices
This change introduces a new `get_devices` method to the `llamacpp_extension` engine that allows the frontend to query and display a list of available devices (e.g., Vulkan, CUDA, SYCL) from the compiled `llama-server` binary.
* Added `DeviceList` interface to represent GPU/device metadata.
* Implemented `getDevices(): Promise<DeviceList[]>` method.
* Splits `version/backend`, ensures backend is ready.
* Invokes the new Tauri command `get_devices`.
* Introduced a new `get_devices` Tauri command.
* Parses `llama-server --list-devices` output to extract available devices with memory info.
* Introduced `DeviceInfo` struct (`id`, `name`, `mem`, `free`) and exposed it via serialization (see the sketch below).
* Robust parsing logic using string processing (non-regex) to locate memory stats.
* Registered the new command in the `tauri::Builder` in `lib.rs`.
* Fixed logic to correctly parse multiple devices from the llama-server output.
* Handles common failure modes: binary not found, malformed memory info, etc.
This sets the foundation for device selection, memory-aware model loading, and improved diagnostics in Jan AI engine setup flows.
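For illustration, a hedged sketch of the non-regex parsing, assuming device lines shaped like `CUDA0: NVIDIA GeForce RTX 3090 (24154 MiB, 23892 MiB free)`; the exact `--list-devices` output format and the real struct definition may differ:

```rust
// Parse one device line into the id/name/mem/free fields named above.
use serde::Serialize;

#[derive(Debug, Serialize)]
struct DeviceInfo {
    id: String,
    name: String,
    mem: u64,  // total memory in MiB
    free: u64, // free memory in MiB
}

fn parse_device_line(line: &str) -> Option<DeviceInfo> {
    let (id, rest) = line.trim().split_once(": ")?;
    // Memory stats sit in the trailing parenthesized group; split from the
    // right so parentheses inside complex GPU names don't confuse us.
    let open = rest.rfind('(')?;
    let name = rest[..open].trim().to_string();
    let stats = rest[open + 1..].trim_end_matches(')');
    let mut parts = stats.split(',');
    let mem = parts.next()?.trim().strip_suffix(" MiB")?.parse().ok()?;
    let free = parts.next()?.trim().strip_suffix(" MiB free")?.parse().ok()?;
    Some(DeviceInfo { id: id.to_string(), name, mem, free })
}
```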
* Update extensions/llamacpp-extension/src/index.ts
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
---------
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* Fix: Windows llamacpp not picking up dlls from lib repo
* Fix lib path on Windows
* Add debug info about lib_path
* Normalize lib_path for Windows
* fix window lib path normalization
* fix: missing cuda dll files on windows
* throw backend setup errors to UI
* Fix format
* Update extensions/llamacpp-extension/src/index.ts
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* feat: add logger to llamacpp-extension
* fix: platform check
---------
Co-authored-by: Louis <louis@jan.ai>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* refactor: Improve Llama.cpp backend management and auto-update
This commit refactors the Llama.cpp extension to enhance backend management and streamline the auto-update process.
Key changes include:
- **Refactored `configureBackends`:** The logic for determining the best available backend and populating settings is now more modular, preventing duplicate executions.
- **Dedicated auto-update handling:** Introduced a `handleAutoUpdate` method to encapsulate the auto-update logic, including downloading the latest available backend and updating the internal configuration and settings.
- **Robust old-backend cleanup:** The `removeOldBackends` method is improved to ensure only the currently used backend version and type are kept, effectively managing disk space. A delay is added on Windows to prevent file conflicts during cleanup.
- **Final installation check:** An `ensureFinalBackendInstallation` method is added to guarantee the selected backend is installed, acting as a final safeguard after auto-update or when auto-update is disabled.
Minor fixes:
- Added a `console.log` for `save_path` during decompression for better debugging.
- Ensured the output directory exists before decompression in the Rust backend.
- Removed an extraneous console log for session info.
- Updated `Cargo.toml` and `tauri.conf.json` versions.
These changes lead to a more reliable and efficient Llama.cpp backend experience within the application, particularly for users with auto-update enabled.
* fix isBackendInstalled parameters
* Address bot's comments
* Address bot comments about using a try/finally block
On Windows, spawning the llamacpp server was causing an unwanted terminal window
to appear. This is now fixed by combining `CREATE_NO_WINDOW` with
`CREATE_NEW_PROCESS_GROUP` using `.creation_flags(...)`, ensuring that the
process runs in the background without a console window.
This change only applies to 64-bit Windows builds.
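A sketch of the flag combination, gated to 64-bit Windows as stated; the helper name is hypothetical:

```rust
#[cfg(all(windows, target_arch = "x86_64"))]
fn spawn_hidden(cmd: &mut tokio::process::Command) {
    use windows_sys::Win32::System::Threading::{
        CREATE_NEW_PROCESS_GROUP, CREATE_NO_WINDOW,
    };
    // No console window, plus a separate process group so console signals
    // for the main app don't propagate to the server (and vice versa).
    cmd.creation_flags(CREATE_NO_WINDOW | CREATE_NEW_PROCESS_GROUP);
}
```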
* fix: Prevent spamming the /health endpoint, improve startup, and resolve compiler warnings
This commit introduces a delay and improved logic for the `/health` endpoint checks in the llamacpp extension, preventing excessive requests during model loading.
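An illustrative pacing loop for that check; the `reqwest` client, one-second interval, and attempt cap are assumptions:

```rust
use std::time::Duration;

async fn wait_for_health(port: u16) -> Result<(), String> {
    let url = format!("http://127.0.0.1:{port}/health");
    let client = reqwest::Client::new();
    for _ in 0..240 {
        // Pause between probes so the endpoint isn't spammed while the
        // model is still loading.
        tokio::time::sleep(Duration::from_secs(1)).await;
        if let Ok(resp) = client.get(&url).send().await {
            if resp.status().is_success() {
                return Ok(());
            }
        }
    }
    Err("timed out waiting for /health".into())
}
```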
Additionally, it addresses several Rust compiler warnings by:
- Commenting out an unused `handle_app_quit` function in `src/core/mcp.rs`.
- Explicitly declaring `target_port`, `session_api_key`, and `buffered_body` as mutable in `src/core/server.rs`.
- Commenting out unused `tokio` imports in `src/core/setup.rs`.
- Enhancing the `load_llama_model` function in `src/core/utils/extensions/inference_llamacpp_extension/server.rs` to better monitor stdout/stderr for readiness and errors, and handle timeouts.
- Commenting out an unused `std::path::Prefix` import and adjusting `normalize_path` in `src/core/utils/mod.rs`.
- Updating the application version to 0.6.904 in `tauri.conf.json`.
* fix grammar!
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* fix grammar 2
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* reimport prefix but only on Windows
* remove instead of commenting
* remove redundant check
* sync app version in cargo.toml with tauri.conf
---------
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* feat: Improve llamacpp server error reporting and model load stability
This commit introduces significant improvements to how the llamacpp server
process is managed and how its errors are reported.
Key changes:
- **Enhanced Error Reporting:** The llamacpp server's stdout and stderr
are now piped and captured. If the llamacpp process exits prematurely
or fails to start, its stderr output is captured and returned as a
`LlamacppError`. This provides much more specific and actionable
diagnostic information for users and developers (see the sketch after this list).
- **Increased Model Load Timeout:** The `waitForModelLoad` timeout has
been increased from 30 seconds to 240 seconds (4 minutes). This
addresses issues where larger models or slower systems would
prematurely time out during the model loading phase.
- **API Secret Update:** The internal API secret for the llamacpp
extension has been updated from 'Jan' to 'JustAskNow'.
- **Version Bump:** The application version in `tauri.conf.json` has
been incremented to `0.6.901`.
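A hypothetical shape for the error surfaced to the frontend; the real `LlamacppError` may carry different fields:

```rust
use serde::Serialize;

#[derive(Debug, Serialize)]
struct LlamacppError {
    message: String,
    // Captured stderr from the llama-server process, for diagnostics.
    stderr: String,
}

fn premature_exit_error(
    status: std::process::ExitStatus,
    stderr: Vec<u8>,
) -> LlamacppError {
    LlamacppError {
        message: format!("llama-server exited prematurely with {status}"),
        stderr: String::from_utf8_lossy(&stderr).into_owned(),
    }
}
```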
* fix: should not spam load requests
* test: add test to cover the fix
* refactor: clean up
* test: add more test case
---------
Co-authored-by: Louis <louis@jan.ai>
Things to ponder:
- Now, the v1/models endpoint of the API server will return an empty
list if no models are loaded
- Streaming `v1/chat/completions` routing works, as does `v1/models`; this needs further testing
further testing
- Changed `pid` field in `SessionInfo` from `string` to `number`/`i32` in TypeScript and Rust (sketched after this list).
- Updated `activeSessions` map key from `string` to `number` to align with new PID type.
- Adjusted process monitoring logic to correctly handle numeric PIDs.
- Removed fallback UUID-based PID generation in favor of numeric fallback (-1).
- Added PID cleanup logic in `is_process_running` when the process is no longer alive.
- Bumped application version from 0.5.16 to 0.6.900 in `tauri.conf.json`.
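A minimal sketch of the `SessionInfo` shape after the PID change; fields other than `pid` are illustrative:

```rust
use serde::Serialize;

#[derive(Debug, Serialize)]
struct SessionInfo {
    // Was String; -1 is the numeric fallback when no real PID is known.
    pid: i32,
    port: u16,
    model_id: String,
}
```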
The current implementation of Ctrl-C handling was not properly tested on Windows x86_64. To address this, the code now uses `i32` instead of `BOOL` for the result of the `GenerateConsoleCtrlEvent` function, ensuring the return value is checked correctly across platforms.
This change updates the Windows dependencies in `Cargo.toml` to enable additional features of the `windows-sys` crate, exposing `CreateProcess` flags such as `CREATE_NEW_PROCESS_GROUP` for proper process management.
The code now properly sends Ctrl+C to the llama process on Windows and handles the case where Ctrl+C delivery fails. It also uses the Windows API to kill the process when it times out, and properly waits for the process to exit.
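A hedged sketch of that signal path; the grace period and logging are illustrative, and the `i32` comparison mirrors the `BOOL` (= `i32`) return type in `windows-sys`:

```rust
#[cfg(all(windows, target_arch = "x86_64"))]
async fn stop_server(child: &mut tokio::process::Child, pid: u32) {
    use windows_sys::Win32::System::Console::{
        GenerateConsoleCtrlEvent, CTRL_C_EVENT,
    };
    // SAFETY: plain FFI call targeting the process group identified by pid.
    let ok: i32 = unsafe { GenerateConsoleCtrlEvent(CTRL_C_EVENT, pid) };
    if ok == 0 {
        log::warn!("Ctrl+C delivery failed, killing process {pid}");
        let _ = child.kill().await;
        return;
    }
    // Give the server a grace period, then fall back to a hard kill.
    let grace = std::time::Duration::from_secs(5);
    if tokio::time::timeout(grace, child.wait()).await.is_err() {
        let _ = child.kill().await;
    }
}
```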
Updated the Rust code to apply Windows-specific logic only on x86_64 targets using `#[cfg(all(windows, target_arch = "x86_64"))]`. Modified the `dev:tauri` script in `package.json` to remove `CLEAN=true` and added `CLEAN=true` to `beforeDevCommand` in `tauri.conf.json` for consistency. Minor formatting changes in `tauri.conf.json`.
Rename variable, struct, and enum names from camelCase to snake_case throughout the llamacpp extension codebase to align with Rust naming conventions. This change improves readability and consistency without altering functionality.
Change the `llama_server_process` state from an `Option<Child>` to a `HashMap<String, Child>` to support managing multiple server instances by PID. This allows precise process tracking and termination, replacing the previous single-process limitation.
Previously, only one server process could be tracked at a time. Now, each process is stored with its PID as the key, enabling:
- Accurate session matching during unloading
- Proper termination of specific processes
- Better error handling for mismatched PIDs
The `load_llama_model` function now inserts processes into the map, and `unload_llama_model` removes them by PID.
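A minimal sketch of the keyed process map; locking and error types are simplified:

```rust
use std::collections::HashMap;
use tokio::process::Child;
use tokio::sync::Mutex;

#[derive(Default)]
struct ServerState {
    // PID (stringified) -> running llama-server child process.
    processes: Mutex<HashMap<String, Child>>,
}

impl ServerState {
    async fn track(&self, pid: String, child: Child) {
        self.processes.lock().await.insert(pid, child);
    }

    async fn unload(&self, pid: &str) -> Result<(), String> {
        let mut map = self.processes.lock().await;
        match map.remove(pid) {
            Some(mut child) => child.kill().await.map_err(|e| e.to_string()),
            None => Err(format!("no tracked process with pid {pid}")),
        }
    }
}
```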