228 Commits

Author SHA1 Message Date
Louis
4f5d9b8222
Merge pull request #6089 from menloresearch/fix/clean-up-unused-apis
refactor: clean up unused hardware apis
2025-08-11 00:02:31 +07:00
Louis
59afafba0e fix: test command 2025-08-10 23:36:14 +07:00
Louis
f0a9080ef7 fix: cargo test on windows 2025-08-10 22:46:44 +07:00
Akarshan Biswas
0cfc745954
feat: Introduce structured error handling for llamacpp extension (#6087)
* feat: Introduce structured error handling for llamacpp extension

This commit introduces a structured error handling system for the `llamacpp` extension. Instead of returning simple string errors, we now use a custom `LlamacppError` struct with a specific `ErrorCode` enum. This allows the frontend to display more user-friendly and actionable error messages based on the code, rather than raw debug logs.

The changes include:
- A new `ErrorCode` enum to categorize errors (e.g., `OutOfMemory`, `ModelArchNotSupported`, `BinaryNotFound`).
- A `LlamacppError` struct to encapsulate the code, a user-facing message, and optional detailed logs.
- A static method `from_stderr` that intelligently parses llama.cpp's standard error output to identify and map common issues like Out of Memory errors to a specific error code.
- Refactored `ServerError` enum to wrap the new `LlamacppError` and provide a consistent serialization format for the Tauri frontend.
- Updated all relevant functions (`load_llama_model`, `get_devices`) to return the new structured error type, ensuring a more robust and predictable error flow.
- A reduced model loading timeout: from 300 to 180 seconds.

This work lays the groundwork for a more intuitive and helpful user experience, as the application can now provide clear guidance to users when a model fails to load.
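
A minimal sketch of how such a structured error could be shaped (the type and field names mirror the description above; the exact fields, stderr patterns, and messages are assumptions, not the shipped implementation):

```rust
use serde::Serialize;

/// Categories the frontend can branch on (assumed subset).
#[derive(Debug, Clone, Copy, Serialize, PartialEq)]
pub enum ErrorCode {
    OutOfMemory,
    ModelArchNotSupported,
    BinaryNotFound,
    Unknown,
}

/// Structured error carrying a code, a user-facing message, and optional raw logs.
#[derive(Debug, Clone, Serialize)]
pub struct LlamacppError {
    pub code: ErrorCode,
    pub message: String,
    pub details: Option<String>,
}

impl LlamacppError {
    /// Map common llama.cpp stderr patterns to an error code (illustrative heuristics only).
    pub fn from_stderr(stderr: &str) -> Self {
        let lowered = stderr.to_lowercase();
        let (code, message) = if lowered.contains("out of memory") {
            (ErrorCode::OutOfMemory, "Not enough memory to load the model".to_string())
        } else if lowered.contains("unknown model architecture") {
            (ErrorCode::ModelArchNotSupported, "Model architecture is not supported".to_string())
        } else {
            (ErrorCode::Unknown, "Model failed to load".to_string())
        };
        Self { code, message, details: Some(stderr.to_string()) }
    }
}
```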

* Update src-tauri/src/core/utils/extensions/inference_llamacpp_extension/server.rs

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Update src-tauri/src/core/utils/extensions/inference_llamacpp_extension/server.rs

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* chore: update FE to handle the error object from the extension

* chore: fix property type

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-08-07 23:28:25 +05:30
Louis
fc7d8a7a9c
fix: test 2025-08-07 23:47:51 +07:00
Louis
9285714345
fix: tests 2025-08-07 22:38:28 +07:00
Akarshan
bdec0af791
fix windows test 2025-08-07 20:37:33 +05:30
Akarshan
9482c0a6b9
Revert "fix import on Windows"
This reverts commit b0e7030939a82baec5f12c44639d0eb6c3c1cf43.
2025-08-07 20:35:13 +05:30
Akarshan
b0e7030939
fix import on Windows 2025-08-07 20:29:05 +05:30
Akarshan
dc82fd6051
fix windows test for short path 2025-08-07 20:16:43 +05:30
Louis
b8f5fd510a
test: fix failed tests 2025-08-07 20:54:00 +07:00
Louis
c1668a4e4a
refactor: clean up unused hardware apis 2025-08-07 20:04:23 +07:00
Akarshan Biswas
469d787888
refactor: Use more precise terminology in API server logs (#6085)
* refactor: Use more precise terminology in API server logs and error messages

This commit refactors several log and error messages to use more accurate and consistent terminology.

-   Replaced "backend servers" and "backend model servers" with "models" or "sessions" to better reflect the service's internal structure.
-   Changed "Proxy server" to "Jan API server" to more accurately describe the server's function.
-   Removed a redundant debug log message.

These changes are cosmetic and improve the readability and consistency of the logging output.

* Update src-tauri/src/core/server.rs

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-08-07 17:48:33 +05:30
Akarshan Biswas
6a699d8004
refactor: move session management & port allocation to backend (#6083)
* refactor: move session management & port allocation to backend

- Remove the in‑process `activeSessions` map and its cleanup logic from the TypeScript side.
- Introduce new Tauri commands in Rust:
  - `get_random_port` – picks an unused port using a seeded RNG and checks availability.
  - `find_session_by_model` – returns the `SessionInfo` for a given model ID.
  - `get_loaded_models` – returns a list of currently loaded model IDs.
- Update the extension’s TypeScript code to use these commands via `invoke`:
  - `findSessionByModel`, `load`, `unload`, `chat`, `getLoadedModels`, and `embed` now operate asynchronously and query the backend.
  - Remove the old `is_port_available` command and the custom port‑checking loop.
  - Simplify `onUnload` – session termination is now handled by the backend.
- Drop unused helpers (`sleep`, `waitForModelLoad`) and related port‑availability code.
- Add missing Rust imports (`rand::{StdRng,Rng,SeedableRng}`, `HashSet`) and improve error handling.
- Register the new commands in `src-tauri/src/lib.rs` (replace `is_port_available` with the three new commands).

This refactor centralises session state and port allocation in the Rust backend, eliminates duplicated logic, and resolves race conditions around model loading and session cleanup.
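
A rough sketch of what the backend port-picking command could look like, per the description above (the port range, retry count, and signature are assumptions, and rand 0.8 is assumed; only the seeded RNG plus system-level bind check mirror the commit):

```rust
use std::collections::HashSet;
use std::net::TcpListener;
use rand::{rngs::StdRng, Rng, SeedableRng};

/// Pick an unused port in an assumed range, avoiding ports already owned by sessions.
fn get_random_port(ports_in_use: &HashSet<u16>) -> Result<u16, String> {
    let mut rng = StdRng::from_entropy();
    for _ in 0..10_000 {
        let port: u16 = rng.gen_range(3000..4000);
        if ports_in_use.contains(&port) {
            continue;
        }
        // Binding briefly is a simple system-level availability check.
        if TcpListener::bind(("127.0.0.1", port)).is_ok() {
            return Ok(port);
        }
    }
    Err("No free port found in range".to_string())
}
```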

* Use String(e) for error

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-08-07 13:06:21 +05:30
Louis
e74601443f chore: add deep_link register_all 2025-08-06 12:24:21 +10:00
Akarshan Biswas
dcffa4fa0a Fix: Improve Llama.cpp model path handling and error handling (#6045)
* Improve Llama.cpp model path handling and validation

This commit refactors the load_llama_model function to improve how it handles and validates the model path.

Previously, the function extracted the model path but did not perform any validation. This change adds the following improvements:

It now checks for the presence of the -m flag.

It verifies that a path is provided after the -m flag.

It validates that the specified model path actually exists on the filesystem.

It ensures that the SessionInfo struct stores the canonical display path of the model, which is a more robust approach.

These changes make the model loading process more reliable and provide better error handling for invalid or missing model paths.
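
Illustratively, the validation steps listed above could look roughly like this (the function name and error strings are assumptions; the real code also records the canonical path in `SessionInfo`):

```rust
use std::path::PathBuf;

/// Extract and validate the model path from llama-server-style arguments.
fn validate_model_path(args: &[String]) -> Result<PathBuf, String> {
    // The `-m` flag must be present.
    let idx = args
        .iter()
        .position(|a| a == "-m")
        .ok_or_else(|| "Missing -m flag".to_string())?;
    // A path must follow the flag.
    let raw = args
        .get(idx + 1)
        .ok_or_else(|| "No model path provided after -m".to_string())?;
    let path = PathBuf::from(raw);
    // The path must exist on the filesystem.
    if !path.exists() {
        return Err(format!("Model path does not exist: {}", path.display()));
    }
    // Store the canonical form so the session records a stable, absolute path.
    path.canonicalize()
        .map_err(|e| format!("Failed to canonicalize model path: {e}"))
}
```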

* Exp: Use short path on Windows

* Fix: Remove error channel and handling in llama.cpp server loading

The previous implementation used a channel to receive error messages from the llama.cpp server's stdout. However, this proved unreliable because path names can contain the same 'error' strings we check for, even during normal operation. This commit removes the error channel and the associated error handling logic.
The server readiness is still determined by checking for the "server is listening" message in stdout. Errors are now handled by relying on the process exit code and capturing the full stderr output if the process fails to start or exits unexpectedly. This approach provides a more robust and accurate error detection mechanism.

* Add else block in Windows path handling

* Add some path related tests

* Fix windows tests
2025-08-06 12:24:21 +10:00
Louis
eb13189d07 fix: run dev should reinstall extensions 2025-08-06 12:24:21 +10:00
Sherzod Mutalov
5f06a35f4e fix: use attributes to check the feature existence 2025-08-06 12:24:21 +10:00
Sherzod Mutalov
280ea1aa9f chore: extracted macos avx2 check code into a utility function 2025-08-06 12:23:18 +10:00
Sherzod Mutalov
ad9c4854a9 chore: added comments 2025-08-06 12:20:30 +10:00
Sherzod Mutalov
49c8334e40 chore: replaced with macros call to remove warning 2025-08-06 12:20:30 +10:00
Sherzod Mutalov
f1dd42de9e fix: use system npx on old Macs 2025-08-06 12:20:30 +10:00
Akarshan Biswas
5e533bdedc
feat: Improve llama.cpp argument handling and add device parsing tests (#6041)
* feat: Improve llama.cpp argument handling and add device parsing tests

This commit refactors how arguments are passed to llama.cpp,
specifically by only adding arguments when their values differ from
their defaults. This reduces the verbosity of the command and prevents
potential conflicts or errors when llama.cpp's default behavior aligns
with the desired setting.

Additionally, new tests have been added for parsing device output from
llama.cpp, ensuring the accurate extraction of GPU information (ID,
name, total memory, and free memory). This improves the robustness of
device detection.

The following changes were made:

* **Remove redundant `--ctx-size` argument:** The `--ctx-size`
    argument is now only explicitly added if `cfg.ctx_size` is greater
    than 0.
* **Conditional argument adding for default values:**
    * `--split-mode` is only added if `cfg.split_mode` is not empty
        and not 'layer'.
    * `--main-gpu` is only added if `cfg.main_gpu` is not undefined
        and not 0.
    * `--cache-type-k` is only added if `cfg.cache_type_k` is not 'f16'.
    * `--cache-type-v` is only added if `cfg.cache_type_v` is not 'f16'
        (when `flash_attn` is enabled) or not 'f32' (otherwise). This
        also corrects the `flash_attn` condition.
    * `--defrag-thold` is only added if `cfg.defrag_thold` is not 0.1.
    * `--rope-scaling` is only added if `cfg.rope_scaling` is not
        'none'.
    * `--rope-scale` is only added if `cfg.rope_scale` is not 1.
    * `--rope-freq-base` is only added if `cfg.rope_freq_base` is not 0.
    * `--rope-freq-scale` is only added if `cfg.rope_freq_scale` is
        not 1.
* **Add `parse_device_output` tests:** Comprehensive unit tests were
    added to `src-tauri/src/core/utils/extensions/inference_llamacpp_extension/server.rs`
    to validate the parsing of llama.cpp device output under various
    scenarios, including multiple devices, single devices, different
    backends (CUDA, Vulkan, SYCL), complex GPU names, and error
    conditions.
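
For illustration, the "only add when non-default" rule described above boils down to guarded pushes when building the argument list. This Rust sketch shows the idea for a few of the flags; the actual logic lives in the extension's TypeScript, and the config shape here is assumed:

```rust
/// Assumed subset of the llama.cpp launch config.
struct LlamacppConfig {
    ctx_size: i64,
    split_mode: String,
    main_gpu: Option<i64>,
    cache_type_k: String,
    defrag_thold: f64,
}

/// Build CLI args, skipping anything that matches llama.cpp's own defaults.
fn build_args(cfg: &LlamacppConfig) -> Vec<String> {
    let mut args = Vec::new();
    if cfg.ctx_size > 0 {
        args.push("--ctx-size".into());
        args.push(cfg.ctx_size.to_string());
    }
    if !cfg.split_mode.is_empty() && cfg.split_mode != "layer" {
        args.push("--split-mode".into());
        args.push(cfg.split_mode.clone());
    }
    if let Some(gpu) = cfg.main_gpu {
        if gpu != 0 {
            args.push("--main-gpu".into());
            args.push(gpu.to_string());
        }
    }
    if cfg.cache_type_k != "f16" {
        args.push("--cache-type-k".into());
        args.push(cfg.cache_type_k.clone());
    }
    if (cfg.defrag_thold - 0.1).abs() > f64::EPSILON {
        args.push("--defrag-thold".into());
        args.push(cfg.defrag_thold.to_string());
    }
    args
}
```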

* fixup cache_type_v comparison
2025-08-04 19:47:04 +05:30
Akarshan Biswas
b1984a452e
Fix: Llama.cpp server hangs on model load (#6030)
* Fix: Llama.cpp server hangs on model load

Resolves an issue where the llama.cpp server would hang indefinitely when loading certain models, as described in the attached ticket. The server's readiness message was not being correctly detected, causing the application to stall.

The previous implementation used a line-buffered reader (BufReader::lines()) to process the stderr stream. This method proved to be unreliable for the specific output of the llama.cpp server.

This commit refactors the stderr handling logic to use a more robust, chunk-based approach (read_until(b'\n', ...)). This ensures that the output is processed as it arrives, reliably capturing critical status messages and preventing the application from hanging during model initialization.

Fixes: #6021
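
A sketch of the chunk-based readiness check described here, assuming tokio types (note that this first approach was later reverted within the same PR in favor of additional guards):

```rust
use tokio::io::{AsyncBufReadExt, BufReader};
use tokio::process::ChildStderr;

/// Read llama-server stderr chunk by chunk until the readiness message appears.
async fn wait_for_ready(stderr: ChildStderr) -> std::io::Result<bool> {
    let mut reader = BufReader::new(stderr);
    let mut buf = Vec::new();
    loop {
        buf.clear();
        // read_until returns as soon as a newline (or EOF) is seen,
        // so output is processed as it arrives instead of stalling on a full line buffer.
        let n = reader.read_until(b'\n', &mut buf).await?;
        if n == 0 {
            return Ok(false); // stream closed before the server became ready
        }
        let line = String::from_utf8_lossy(&buf);
        if line.contains("server is listening") {
            return Ok(true);
        }
    }
}
```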

* Handle error gracefully with ServerError

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Revert "Handle error gracefully with ServerError"

This reverts commit 267a8a8a3262fbe36a445a30b8b3ba9a39697643.

* Revert "Fix: Llama.cpp server hangs on model load"

This reverts commit 44e5447f82f0ae32b6db7ffb213025f130d655c4.

* Add more guards, refactor and fix error sending to FE

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-08-02 21:50:07 +05:30
Louis
9c0d09c487
refactor: clean up cortex (#6003)
* refactor: clean up cortex

* chore: clean up

* refactor: clean up
2025-07-31 21:58:12 +07:00
Akarshan
e76d207718
Fixup: tauri::WindowEvent 2025-07-31 15:41:43 +07:00
Akarshan
b3e8201481
Add RunEvent::Exit event to tauri to handle macos context menu exit 2025-07-31 15:41:43 +07:00
Akarshan Biswas
0aaaca05a4
fix: use direct process termination instead of console events on Windows (#5972)
* fix: remove CREATE_NEW_PROCESS_GROUP flag for proper Ctrl-C handling

CREATE_NEW_PROCESS_GROUP prevented GenerateConsoleCtrlEvent from working,
causing graceful shutdown failures. Removed to enable proper signal handling.

* Revert "fix: remove CREATE_NEW_PROCESS_GROUP flag for proper Ctrl-C handling"

This reverts commit 82ace3e72e4bf7338f422d5c79bdd6a0f8a2440e.

* fix: use direct process termination instead of console events

Simplified Windows process cleanup by removing console attachment logic
and using direct child.kill() method. More reliable for headless processes.

* Fix missing imports

* switch to tokio::time

* Don't wait when forcefully terminating the process via the kill API on Windows

Disabled use of windows-sys crate as graceful shutdown on Windows is unreliable in this context.

Updated cleanup.rs and server.rs to directly call child.kill().await for terminating processes on Windows.

Improved logging for process termination and error handling during kill and wait.

Removed timeout-based graceful shutdown attempt on Windows since TerminateProcess is inherently forceful and immediate.

This ensures more predictable process cleanup behavior on Windows platforms.
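
A minimal sketch of the direct-termination flow described above, assuming a `tokio::process::Child` and the `log` crate for the mentioned logging:

```rust
use tokio::process::Child;

/// Terminate a spawned server process.
/// On Windows, kill() maps to TerminateProcess, which is forceful and immediate,
/// so no graceful-shutdown timeout is attempted.
async fn terminate_child(child: &mut Child) {
    if let Err(e) = child.kill().await {
        log::error!("Failed to kill process: {e}");
    }
    match child.wait().await {
        Ok(status) => log::info!("Process exited with status: {status}"),
        Err(e) => log::error!("Failed waiting for process exit: {e}"),
    }
}
```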

* final cleanups
2025-07-30 10:09:20 +05:30
Akarshan Biswas
f61ce886a0
feat: Enhance port selection with availability check (#5966)
This change improves the robustness of the llama.cpp extension's server port selection.

Previously, the `getRandomPort()` method only checked for ports already in use by active sessions, which could lead to model load failures if the chosen port was occupied by another external process.

This change introduces a new Tauri command, `is_port_available`, which performs a system-level check to ensure the randomly selected port is truly free before attempting to start the llama-server. It also adds a retry mechanism with a maximum number of attempts (20,000) to find an available port, throwing an error if no suitable port is found within the specified range after all attempts.

This enhancement prevents port conflicts and improves the reliability and user experience of the llama.cpp extension within Jan.

Closes #5965
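
The system-level check can be as simple as attempting a bind; a hedged sketch of the Tauri command (the name comes from the description above, the signature is an assumption):

```rust
use std::net::TcpListener;

/// A port is considered free if we can briefly bind to it on the loopback interface.
#[tauri::command]
fn is_port_available(port: u16) -> bool {
    TcpListener::bind(("127.0.0.1", port)).is_ok()
}
```
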
2025-07-29 18:01:52 +05:30
Louis
812a8082b8
fix: factory reset fail with access denied error (#5952)
* fix: factory reset fail due to access denied error

* fix: unused import

* fix: tests
2025-07-28 23:20:45 +07:00
Akarshan Biswas
1d0bb53f2a
feat: add support for querying available backend devices (#5877)
* feat: add support for querying available backend devices

This change introduces a new `get_devices` method to the `llamacpp_extension` engine that allows the frontend to query and display a list of available devices (e.g., Vulkan, CUDA, SYCL) from the compiled `llama-server` binary.

* Added `DeviceList` interface to represent GPU/device metadata.
* Implemented `getDevices(): Promise<DeviceList[]>` method.

  * Splits `version/backend`, ensures backend is ready.
  * Invokes the new Tauri command `get_devices`.

* Introduced a new `get_devices` Tauri command.
* Parses `llama-server --list-devices` output to extract available devices with memory info.
* Introduced `DeviceInfo` struct (`id`, `name`, `mem`, `free`) and exposed it via serialization.
* Robust parsing logic using string processing (non-regex) to locate memory stats.
* Registered the new command in the `tauri::Builder` in `lib.rs`.

* Fixed logic to correctly parse multiple devices from the llama-server output.
* Handles common failure modes: binary not found, malformed memory info, etc.

This sets the foundation for device selection, memory-aware model loading, and improved diagnostics in Jan AI engine setup flows.
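
A sketch of the non-regex parsing described above; the `DeviceInfo` fields come from the description, while the assumed `--list-devices` line format and the parsing details are illustrative only (serde assumed for serialization):

```rust
use serde::Serialize;

/// Metadata for one device reported by `llama-server --list-devices`.
#[derive(Debug, Serialize)]
struct DeviceInfo {
    id: String,
    name: String,
    mem: u64,  // total memory in MiB
    free: u64, // free memory in MiB
}

/// Parse device lines of the assumed form:
/// `  Vulkan0: NVIDIA GeForce RTX 3090 (24576 MiB, 24000 MiB free)`
fn parse_device_output(output: &str) -> Vec<DeviceInfo> {
    let mut devices = Vec::new();
    for line in output.lines() {
        let line = line.trim();
        // Device lines look like "id: name (total MiB, free MiB free)".
        let Some((id, rest)) = line.split_once(':') else { continue };
        let Some(open) = rest.rfind('(') else { continue };
        let name = rest[..open].trim();
        let mem_part = rest[open + 1..].trim_end_matches(')');
        let mut nums = mem_part
            .split(',')
            .filter_map(|s| s.split_whitespace().next())
            .filter_map(|n| n.parse::<u64>().ok());
        if let (Some(mem), Some(free)) = (nums.next(), nums.next()) {
            devices.push(DeviceInfo {
                id: id.trim().to_string(),
                name: name.to_string(),
                mem,
                free,
            });
        }
    }
    devices
}
```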

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-07-23 19:20:12 +05:30
Louis
3afdd0fa1d
fix: tmp download file should be removed on cancel (#5849) 2025-07-23 12:52:34 +07:00
Akarshan Biswas
1eaec5e4f6
Fix: engine unable to find dlls when running on Windows (#5863)
* Fix: Windows llamacpp not picking up dlls from lib repo

* Fix lib path on Windows

* Add debug info about lib_path

* Normalize lib_path for Windows

* fix window lib path normalization

* fix: missing cuda dll files on windows

* throw backend setup errors to UI

* Fix format

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* feat: add logger to llamacpp-extension

* fix: platform check

---------

Co-authored-by: Louis <louis@jan.ai>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-07-22 20:05:24 +05:30
Akarshan Biswas
f59739d2b0
refactor: Improve Llama.cpp backend management and auto-update (#5845)
* refactor: Improve Llama.cpp backend management and auto-update

This commit refactors the Llama.cpp extension to enhance backend management and streamline the auto-update process.

Key changes include:

Refactored configureBackends: The logic for determining the best available backend and populating settings is now more modular, preventing duplicate executions.

Dedicated Auto-update Handling: Introduced a handleAutoUpdate method to encapsulate the auto-update logic, including downloading the latest available backend and updating the internal configuration and settings.

Robust Old Backend Cleanup: The removeOldBackends method is improved to ensure only the currently used backend version and type are kept, effectively managing disk space. A delay is added for Windows to prevent file conflicts during cleanup.

Final Installation Check: An ensureFinalBackendInstallation method is added to guarantee the selected backend is installed, acting as a final safeguard after auto-update or if auto-update is disabled.

Minor Fixes:

Added console.log for save_path during decompression for better debugging.

Ensured the output directory exists before decompression in the Rust backend.

Removed extraneous console log for session info.

Updated Cargo.toml and tauri.conf.json versions.

These changes lead to a more reliable and efficient Llama.cpp backend experience within the application, particularly for users with auto-update enabled.

* fix isBackendInstalled parameters

* Address bot's comments

* Address bot comments about using a try/finally block
2025-07-22 14:35:34 +05:30
Akarshan Biswas
08de0fa42d
fix: prevent terminal window from opening on model load on WindowsOS (#5837)
On Windows, spawning the llamacpp server was causing an unwanted terminal window
to appear. This is now fixed by combining `CREATE_NO_WINDOW` with
`CREATE_NEW_PROCESS_GROUP` using `.creation_flags(...)`, ensuring that the
process runs in the background without a console window.

This change only applies to 64-bit Windows builds.
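
A hedged sketch of the spawn call with both flags combined (the flag values are the documented Win32 constants; the function name and paths are placeholders):

```rust
/// Spawn llama-server without popping up a console window on Windows.
#[cfg(all(windows, target_arch = "x86_64"))]
fn spawn_hidden(binary: &str, args: &[String]) -> std::io::Result<std::process::Child> {
    use std::os::windows::process::CommandExt;
    const CREATE_NO_WINDOW: u32 = 0x0800_0000;
    const CREATE_NEW_PROCESS_GROUP: u32 = 0x0000_0200;

    std::process::Command::new(binary)
        .args(args)
        // Combine both flags so the child runs headless in its own process group.
        .creation_flags(CREATE_NO_WINDOW | CREATE_NEW_PROCESS_GROUP)
        .spawn()
}
```
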
2025-07-21 13:24:31 +05:30
Louis
19cb1c96e0
fix: llama.cpp backend download on windows (#5813)
* fix: llama.cpp backend download on windows

* test: add missing cases

* clean: linter

* fix: build
2025-07-20 16:58:09 +07:00
Louis
c550f6cf0d
Merge pull request #5809 from menloresearch/refactor/simplify-proxy-settings
refactor: simplify proxy settings by removing unused SSL verification options
2025-07-19 16:34:37 +07:00
Louis
8d84c3b884
feat: add model load error handling to improve UX (#5802)
* feat: model load error handling

* chore: clean up

* test: add tests

* fix: provider name
2025-07-18 08:25:54 +05:30
Louis
8ca507c01c
feat: proxy support for the new downloader (#5795)
* feat: proxy support for the new downloader

* test: remove outdated test

* ci: clean up
2025-07-17 23:10:21 +07:00
Akarshan Biswas
b736d09168
fix: Prevent spamming /health endpoint and improve startup and resolve compiler warnings (#5784)
* fix: Prevent spamming /health endpoint and improve startup and resolve compiler warnings

This commit introduces a delay and improved logic to the /health endpoint checks in the llamacpp extension, preventing excessive requests during model loading.

Additionally, it addresses several Rust compiler warnings by:
- Commenting out an unused `handle_app_quit` function in `src/core/mcp.rs`.
- Explicitly declaring `target_port`, `session_api_key`, and `buffered_body` as mutable in `src/core/server.rs`.
- Commenting out unused `tokio` imports in `src/core/setup.rs`.
- Enhancing the `load_llama_model` function in `src/core/utils/extensions/inference_llamacpp_extension/server.rs` to better monitor stdout/stderr for readiness and errors, and handle timeouts.
- Commenting out an unused `std::path::Prefix` import and adjusting `normalize_path` in `src/core/utils/mod.rs`.
- Updating the application version to 0.6.904 in `tauri.conf.json`.

* fix grammar!

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* fix grammar 2

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* reimport prefix but only on Windows

* remove instead of commenting

* remove redundant check

* sync app version in cargo.toml with tauri.conf

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-07-16 18:18:11 +05:30
Sam Hoang Van
9a76c94e22
update rmcp to fix issues (#5290) 2025-07-14 16:49:27 +07:00
Akarshan Biswas
dee98f41d1
Feat: Improved llamacpp Server Stability and Diagnostics (#5761)
* feat: Improve llamacpp server error reporting and model load stability

This commit introduces significant improvements to how the llamacpp server
process is managed and how its errors are reported.

Key changes:
- **Enhanced Error Reporting:** The llamacpp server's stdout and stderr
  are now piped and captured. If the llamacpp process exits prematurely
  or fails to start, its stderr output is captured and returned as a
  `LlamacppError`. This provides much more specific and actionable
  diagnostic information for users and developers (see the sketch after this list).
- **Increased Model Load Timeout:** The `waitForModelLoad` timeout has
  been increased from 30 seconds to 240 seconds (4 minutes). This
  addresses issues where larger models or slower systems would
  prematurely time out during the model loading phase.
- **API Secret Update:** The internal API secret for the llamacpp
  extension has been updated from 'Jan' to 'JustAskNow'.
- **Version Bump:** The application version in `tauri.conf.json` has
  been incremented to `0.6.901`.
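
A rough sketch of the piping-and-capture flow described under "Enhanced Error Reporting" above (the function name, startup grace period, and error strings are assumptions):

```rust
use std::process::Stdio;
use tokio::process::Command;

/// Spawn llama-server with stdout/stderr piped so failures can be diagnosed.
/// If the process exits before becoming ready, surface its captured stderr.
async fn spawn_and_capture(binary: &str, args: &[String]) -> Result<tokio::process::Child, String> {
    let mut child = Command::new(binary)
        .args(args)
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()
        .map_err(|e| format!("Failed to start llama-server: {e}"))?;

    // Give the process a moment; if it has already exited, report its stderr.
    tokio::time::sleep(std::time::Duration::from_millis(500)).await;
    if let Ok(Some(status)) = child.try_wait() {
        let output = child
            .wait_with_output()
            .await
            .map_err(|e| format!("Failed to collect output: {e}"))?;
        let stderr = String::from_utf8_lossy(&output.stderr);
        return Err(format!("llama-server exited early ({status}): {stderr}"));
    }
    Ok(child)
}
```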

* fix: should not spam load requests

* test: add test to cover the fix

* refactor: clean up

* test: add more test case

---------

Co-authored-by: Louis <louis@jan.ai>
2025-07-14 11:55:44 +05:30
Louis
a8ed759a06 fix: model download - windows path issue 2025-07-10 09:42:36 +07:00
Louis
2f02a228cc
fix: download on windows 2025-07-08 15:41:17 +07:00
Akarshan
d5ffc6a476
feat: Migrate Jan's API server to llamacpp-extension
Things to ponder:
- Now, the v1/models endpoint of the API server will return an empty
  list if no models are loaded
- Streaming v1/chat/completion routing works, as does v1/models; needs further testing
  further testing
2025-07-07 20:52:00 +05:30
Akarshan
d4a3d6a0d6
Refactor session PID types from string to number across backend and extension
- Changed `pid` field in `SessionInfo` from `string` to `number`/`i32` in TypeScript and Rust.
- Updated `activeSessions` map key from `string` to `number` to align with new PID type.
- Adjusted process monitoring logic to correctly handle numeric PIDs.
- Removed fallback UUID-based PID generation in favor of numeric fallback (-1).
- Added PID cleanup logic in `is_process_running` when the process is no longer alive.
- Bumped application version from 0.5.16 to 0.6.900 in `tauri.conf.json`.
2025-07-04 21:40:54 +05:30
Akarshan
dbdc031583
chore: store session_info in backend as well for API server (WIP) 2025-07-04 20:31:30 +05:30
Akarshan
03f0c5aad6
fix: remove unsupported BOOL for windows_sys in cleanup to fix windows build (attempt 3) 2025-07-03 18:35:13 +05:30
Akarshan
11db1ecaed
fix: server-side Ctrl-C handling for Windows x86_64 targets (attempt 2)
The current implementation of Ctrl-C handling was not properly tested on Windows x86_64 architectures. To address this, the code has been modified to use `i32` instead of `BOOL` to handle the result of the `GenerateConsoleCtrlEvent` function, ensuring that the return value is correctly checked across different platforms.
2025-07-03 14:13:56 +05:30
Akarshan
6ab7d37a08
fix: Update Cargo.toml dependencies on Windows & fix Ctrl+C handling on Windows
This change updates the Cargo.toml dependencies on Windows to include additional features from the `windows-sys` crate. CreateProcess flags such as `CREATE_NEW_PROCESS_GROUP` are now enabled to allow for proper process management.
The code now properly sends Ctrl+C to the llama process on Windows, and also includes error handling for when the Ctrl+C command fails. Additionally, it now uses the `Windows` API to kill the process when it times out, and properly handles the wait for the process to exit.
2025-07-03 13:51:59 +05:30