243 Commits

Louis
9ed98614fe fix: factory reset process got blocked 2025-08-11 19:42:59 +07:00
Louis
f3dd26e499
fix: uvx and npx dirs should not be relocated 2025-08-11 14:33:58 +07:00
Louis
b924156a15
fix: bring back GPU detection 2025-08-11 13:52:20 +07:00
Louis
4f5d9b8222
Merge pull request #6089 from menloresearch/fix/clean-up-unused-apis
refactor: clean up unused hardware apis
2025-08-11 00:02:31 +07:00
Louis
59afafba0e fix: test command 2025-08-10 23:36:14 +07:00
Louis
f0a9080ef7 fix: cargo test on windows 2025-08-10 22:46:44 +07:00
Akarshan Biswas
0cfc745954
feat: Introduce structured error handling for llamacpp extension (#6087)
* feat: Introduce structured error handling for llamacpp extension

This commit introduces a structured error handling system for the `llamacpp` extension. Instead of returning simple string errors, we now use a custom `LlamacppError` struct with a specific `ErrorCode` enum. This allows the frontend to display more user-friendly and actionable error messages based on the code, rather than raw debug logs.

The changes include:
- A new `ErrorCode` enum to categorize errors (e.g., `OutOfMemory`, `ModelArchNotSupported`, `BinaryNotFound`).
- A `LlamacppError` struct to encapsulate the code, a user-facing message, and optional detailed logs.
- A static method `from_stderr` that intelligently parses llama.cpp's standard error output to identify and map common issues like Out of Memory errors to a specific error code.
- Refactored `ServerError` enum to wrap the new `LlamacppError` and provide a consistent serialization format for the Tauri frontend.
- Updated all relevant functions (`load_llama_model`, `get_devices`) to return the new structured error type, ensuring a more robust and predictable error flow.
- A reduced timeout for model loading from 300 to 180 seconds.

This work lays the groundwork for a more intuitive and helpful user experience, as the application can now provide clear guidance to users when a model fails to load.

* Update src-tauri/src/core/utils/extensions/inference_llamacpp_extension/server.rs

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Update src-tauri/src/core/utils/extensions/inference_llamacpp_extension/server.rs

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* chore: update FE handle error object from extension

* chore: fix property type

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-08-07 23:28:25 +05:30
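A minimal Rust sketch of the structured error type described in the commit above, assuming serde for serialization; variant and field names beyond those mentioned in the commit (`ErrorCode`, `LlamacppError`, `OutOfMemory`, `ModelArchNotSupported`, `BinaryNotFound`, `from_stderr`) are hypothetical, and the stderr patterns are illustrative only.

```rust
use serde::Serialize;

/// Categories the frontend can map to user-friendly, actionable messages.
#[derive(Debug, Clone, Copy, Serialize)]
pub enum ErrorCode {
    OutOfMemory,
    ModelArchNotSupported,
    BinaryNotFound,
    Unknown, // hypothetical catch-all variant
}

/// Structured error returned to the Tauri frontend instead of a raw string.
#[derive(Debug, Serialize)]
pub struct LlamacppError {
    pub code: ErrorCode,
    pub message: String,         // user-facing summary
    pub details: Option<String>, // optional raw log excerpt
}

impl LlamacppError {
    /// Map common llama.cpp stderr patterns to a specific error code.
    pub fn from_stderr(stderr: &str) -> Self {
        let lower = stderr.to_lowercase();
        let code = if lower.contains("out of memory") || lower.contains("failed to allocate") {
            ErrorCode::OutOfMemory
        } else if lower.contains("unknown model architecture") {
            ErrorCode::ModelArchNotSupported
        } else {
            ErrorCode::Unknown
        };
        Self {
            code,
            message: "Failed to load model".to_string(),
            details: Some(stderr.to_string()),
        }
    }
}
```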
Louis
fc7d8a7a9c
fix: test 2025-08-07 23:47:51 +07:00
Akarshan
0b7477ea56
move nix to non windows 2025-08-07 21:21:47 +05:30
Louis
9285714345
fix: tests 2025-08-07 22:38:28 +07:00
Akarshan
bdec0af791
fix windows test 2025-08-07 20:37:33 +05:30
Akarshan
9482c0a6b9
Revert "fix import on Windows"
This reverts commit b0e7030939a82baec5f12c44639d0eb6c3c1cf43.
2025-08-07 20:35:13 +05:30
Akarshan
b0e7030939
fix import on Windows 2025-08-07 20:29:05 +05:30
Akarshan
dc82fd6051
fix windows test for short path 2025-08-07 20:16:43 +05:30
Louis
b8f5fd510a
test: fix failed tests 2025-08-07 20:54:00 +07:00
Louis
c1668a4e4a
refactor: clean up unused hardware apis 2025-08-07 20:04:23 +07:00
Akarshan Biswas
469d787888
refactor: Use more precise terminology in API server logs (#6085)
* refactor: Use more precise terminology in API server logs and error messages

This commit refactors several log and error messages to use more accurate and consistent terminology.

-   Replaced "backend servers" and "backend model servers" with "models" or "sessions" to better reflect the service's internal structure.
-   Changed "Proxy server" to "Jan API server" to more accurately describe the server's function.
-   Removed a redundant debug log message.

These changes are cosmetic and improve the readability and consistency of the logging output.

* Update src-tauri/src/core/server.rs

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-08-07 17:48:33 +05:30
Akarshan Biswas
6a699d8004
refactor: move session management & port allocation to backend (#6083)
* refactor: move session management & port allocation to backend

- Remove the in‑process `activeSessions` map and its cleanup logic from the TypeScript side.
- Introduce new Tauri commands in Rust:
  - `get_random_port` – picks an unused port using a seeded RNG and checks availability.
  - `find_session_by_model` – returns the `SessionInfo` for a given model ID.
  - `get_loaded_models` – returns a list of currently loaded model IDs.
- Update the extension’s TypeScript code to use these commands via `invoke`:
  - `findSessionByModel`, `load`, `unload`, `chat`, `getLoadedModels`, and `embed` now operate asynchronously and query the backend.
  - Remove the old `is_port_available` command and the custom port‑checking loop.
  - Simplify `onUnload` – session termination is now handled by the backend.
- Drop unused helpers (`sleep`, `waitForModelLoad`) and related port‑availability code.
- Add missing Rust imports (`rand::{StdRng,Rng,SeedableRng}`, `HashSet`) and improve error handling.
- Register the new commands in `src-tauri/src/lib.rs` (replace `is_port_available` with the three new commands).

This refactor centralises session state and port allocation in the Rust backend, eliminates duplicated logic, and resolves race conditions around model loading and session cleanup.

* Use String(e) for error

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-08-07 13:06:21 +05:30
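A rough sketch of the backend-side session registry implied by the new commands in the commit above, assuming sessions are held in Tauri managed state behind a `Mutex<HashMap<..>>`; the `SessionInfo` fields shown are hypothetical.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

use serde::Serialize;
use tauri::State;

#[derive(Debug, Clone, Serialize)]
pub struct SessionInfo {
    pub model_id: String,
    pub port: u16, // hypothetical fields; the real struct may differ
    pub pid: u32,
}

/// Shared registry of running llama.cpp sessions, owned by the Rust backend.
pub struct SessionRegistry(pub Mutex<HashMap<String, SessionInfo>>);

#[tauri::command]
fn find_session_by_model(state: State<'_, SessionRegistry>, model_id: String) -> Option<SessionInfo> {
    state.0.lock().unwrap().get(&model_id).cloned()
}

#[tauri::command]
fn get_loaded_models(state: State<'_, SessionRegistry>) -> Vec<String> {
    state.0.lock().unwrap().keys().cloned().collect()
}
```

In this shape the registry would be registered with `.manage(SessionRegistry(Mutex::new(HashMap::new())))` and the commands added to the `invoke_handler` in `src-tauri/src/lib.rs`, as the commit describes.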
Louis
e74601443f chore: add deep_link register_all 2025-08-06 12:24:21 +10:00
Akarshan Biswas
dcffa4fa0a Fix: Improve Llama.cpp model path handling and error handling (#6045)
* Improve Llama.cpp model path handling and validation

This commit refactors the load_llama_model function to improve how it handles and validates the model path.

Previously, the function extracted the model path but did not perform any validation. This change adds the following improvements:

- It now checks for the presence of the `-m` flag.
- It verifies that a path is provided after the `-m` flag.
- It validates that the specified model path actually exists on the filesystem.
- It ensures that the `SessionInfo` struct stores the canonical display path of the model, which is a more robust approach.

These changes make the model loading process more reliable and provide better error handling for invalid or missing model paths.

* Exp: Use short path on Windows

* Fix: Remove error channel and handling in llama.cpp server loading

The previous implementation used a channel to receive error messages from the llama.cpp server's stdout. However, this proved unreliable because path names can contain the very error strings we check for, even during normal operation. This commit removes the error channel and associated error handling logic.
The server readiness is still determined by checking for the "server is listening" message in stdout. Errors are now handled by relying on the process exit code and capturing the full stderr output if the process fails to start or exits unexpectedly. This approach provides a more robust and accurate error detection mechanism.

* Add else block in Windows path handling

* Add some path related tests

* Fix windows tests
2025-08-06 12:24:21 +10:00
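A rough sketch of the `-m` flag validation described in the commit above; the function name, argument handling, and error strings are assumptions, not the actual implementation in `load_llama_model`.

```rust
use std::path::Path;

/// Validate the model path passed via `-m` and return its canonical display form.
fn validate_model_path(args: &[String]) -> Result<String, String> {
    // The `-m` flag must be present.
    let idx = args
        .iter()
        .position(|a| a == "-m")
        .ok_or_else(|| "missing -m flag".to_string())?;

    // A path must follow the flag.
    let raw = args
        .get(idx + 1)
        .ok_or_else(|| "no model path provided after -m".to_string())?;

    // The path must exist on the filesystem.
    let path = Path::new(raw);
    if !path.exists() {
        return Err(format!("model path does not exist: {raw}"));
    }

    // Store the canonical form, e.g. in SessionInfo.
    let canonical = path
        .canonicalize()
        .map_err(|e| format!("failed to canonicalize model path: {e}"))?;
    Ok(canonical.display().to_string())
}
```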
Louis
eb13189d07 fix: run dev should reinstall extensions 2025-08-06 12:24:21 +10:00
Sherzod Mutalov
5f06a35f4e fix: use attributes to check the feature existence 2025-08-06 12:24:21 +10:00
Sherzod Mutalov
280ea1aa9f chore: extracted macos avx2 check code to the utility function 2025-08-06 12:23:18 +10:00
Sherzod Mutalov
ad9c4854a9 chore: added comments 2025-08-06 12:20:30 +10:00
Sherzod Mutalov
49c8334e40 chore: replaced with macros call to remove warning 2025-08-06 12:20:30 +10:00
Sherzod Mutalov
f1dd42de9e fix: use system npx on old Macs 2025-08-06 12:20:30 +10:00
Akarshan Biswas
5e533bdedc
feat: Improve llama.cpp argument handling and add device parsing tests (#6041)
* feat: Improve llama.cpp argument handling and add device parsing tests

This commit refactors how arguments are passed to llama.cpp,
specifically by only adding arguments when their values differ from
their defaults. This reduces the verbosity of the command and prevents
potential conflicts or errors when llama.cpp's default behavior aligns
with the desired setting.

Additionally, new tests have been added for parsing device output from
llama.cpp, ensuring the accurate extraction of GPU information (ID,
name, total memory, and free memory). This improves the robustness of
device detection.

The following changes were made:

* **Remove redundant `--ctx-size` argument:** The `--ctx-size`
    argument is now only explicitly added if `cfg.ctx_size` is greater
    than 0.
* **Conditional argument adding for default values:**
    * `--split-mode` is only added if `cfg.split_mode` is not empty
        and not 'layer'.
    * `--main-gpu` is only added if `cfg.main_gpu` is not undefined
        and not 0.
    * `--cache-type-k` is only added if `cfg.cache_type_k` is not 'f16'.
    * `--cache-type-v` is only added if `cfg.cache_type_v` is not 'f16'
        (when `flash_attn` is enabled) or not 'f32' (otherwise). This
        also corrects the `flash_attn` condition.
    * `--defrag-thold` is only added if `cfg.defrag_thold` is not 0.1.
    * `--rope-scaling` is only added if `cfg.rope_scaling` is not
        'none'.
    * `--rope-scale` is only added if `cfg.rope_scale` is not 1.
    * `--rope-freq-base` is only added if `cfg.rope_freq_base` is not 0.
    * `--rope-freq-scale` is only added if `cfg.rope_freq_scale` is
        not 1.
* **Add `parse_device_output` tests:** Comprehensive unit tests were
    added to `src-tauri/src/core/utils/extensions/inference_llamacpp_extension/server.rs`
    to validate the parsing of llama.cpp device output under various
    scenarios, including multiple devices, single devices, different
    backends (CUDA, Vulkan, SYCL), complex GPU names, and error
    conditions.

* fixup cache_type_v comparison
2025-08-04 19:47:04 +05:30
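The argument assembly itself lives in the extension's TypeScript; the following is only a Rust-flavoured illustration of the "only add a flag when it differs from the default" pattern described above. Field names and default values mirror the flags listed in the commit, not the actual extension code.

```rust
struct LlamaCfg {
    ctx_size: i32,
    defrag_thold: f32,
    rope_scaling: String,
    rope_scale: f32,
}

/// Build llama-server CLI args, skipping flags whose values match llama.cpp defaults.
fn build_args(cfg: &LlamaCfg) -> Vec<String> {
    let mut args: Vec<String> = Vec::new();
    if cfg.ctx_size > 0 {
        args.extend(["--ctx-size".into(), cfg.ctx_size.to_string()]);
    }
    if (cfg.defrag_thold - 0.1).abs() > f32::EPSILON {
        args.extend(["--defrag-thold".into(), cfg.defrag_thold.to_string()]);
    }
    if cfg.rope_scaling != "none" {
        args.extend(["--rope-scaling".into(), cfg.rope_scaling.clone()]);
    }
    if (cfg.rope_scale - 1.0).abs() > f32::EPSILON {
        args.extend(["--rope-scale".into(), cfg.rope_scale.to_string()]);
    }
    args
}
```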
Akarshan Biswas
b1984a452e
Fix: Llama.cpp server hangs on model load (#6030)
* Fix: Llama.cpp server hangs on model load

Resolves an issue where the llama.cpp server would hang indefinitely when loading certain models, as described in the attached ticket. The server's readiness message was not being correctly detected, causing the application to stall.

The previous implementation used a line-buffered reader (BufReader::lines()) to process the stderr stream. This method proved to be unreliable for the specific output of the llama.cpp server.

This commit refactors the stderr handling logic to use a more robust, chunk-based approach (read_until(b'\n', ...)). This ensures that the output is processed as it arrives, reliably capturing critical status messages and preventing the application from hanging during model initialization.

Fixes: #6021

* Handle error gracefully with ServerError

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Revert "Handle error gracefully with ServerError"

This reverts commit 267a8a8a3262fbe36a445a30b8b3ba9a39697643.

* Revert "Fix: Llama.cpp server hangs on model load"

This reverts commit 44e5447f82f0ae32b6db7ffb213025f130d655c4.

* Add more guards, refactor and fix error sending to FE

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-08-02 21:50:07 +05:30
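A minimal sketch of the chunk-based readiness detection described in the commit above, assuming a tokio child process with a piped stderr stream; the "server is listening" string comes from the commit messages, the surrounding wiring is illustrative.

```rust
use tokio::io::{AsyncBufReadExt, BufReader};
use tokio::process::ChildStderr;

/// Read output with read_until(b'\n', ..) so chunks are processed as they arrive,
/// returning once the llama-server readiness message appears.
async fn wait_until_ready(stderr: ChildStderr) -> std::io::Result<bool> {
    let mut reader = BufReader::new(stderr);
    let mut buf = Vec::new();
    loop {
        buf.clear();
        let n = reader.read_until(b'\n', &mut buf).await?;
        if n == 0 {
            // EOF: the process exited before becoming ready.
            return Ok(false);
        }
        let line = String::from_utf8_lossy(&buf);
        if line.contains("server is listening") {
            return Ok(true);
        }
    }
}
```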
Louis
9c0d09c487
refactor: clean up cortex (#6003)
* refactor: clean up cortex

* chore: clean up

* refactor: clean up
2025-07-31 21:58:12 +07:00
Akarshan
e11b4c9449
restore extras to its original state 2025-07-31 15:41:44 +07:00
Akarshan
e76d207718
Fixup: tauri::WindowEvent 2025-07-31 15:41:43 +07:00
Akarshan
b3e8201481
Add RunEvent::Exit event to tauri to handle macos context menu exit 2025-07-31 15:41:43 +07:00
Akarshan Biswas
0aaaca05a4
fix: use direct process termination instead of console events on Windows (#5972)
* fix: remove CREATE_NEW_PROCESS_GROUP flag for proper Ctrl-C handling

CREATE_NEW_PROCESS_GROUP prevented GenerateConsoleCtrlEvent from working,
causing graceful shutdown failures. Removed to enable proper signal handling.

* Revert "fix: remove CREATE_NEW_PROCESS_GROUP flag for proper Ctrl-C handling"

This reverts commit 82ace3e72e4bf7338f422d5c79bdd6a0f8a2440e.

* fix: use direct process termination instead of console events

Simplified Windows process cleanup by removing console attachment logic
and using direct child.kill() method. More reliable for headless processes.

* Fix missing imports

* switch to tokio::time

* Don't wait while forcefully terminate process using kill API on Windows

Disabled use of windows-sys crate as graceful shutdown on Windows is unreliable in this context.

Updated cleanup.rs and server.rs to directly call child.kill().await for terminating processes on Windows.

Improved logging for process termination and error handling during kill and wait.

Removed timeout-based graceful shutdown attempt on Windows since TerminateProcess is inherently forceful and immediate.

This ensures more predictable process cleanup behavior on Windows platforms.

* final cleanups
2025-07-30 10:09:20 +05:30
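A small sketch of the direct-kill cleanup described in the commit above, assuming a `tokio::process::Child`; the logging and error handling are illustrative.

```rust
use tokio::process::Child;

/// Terminate a llama-server child process directly.
/// On Windows, kill() maps to TerminateProcess, which is forceful and immediate,
/// so no graceful-shutdown timeout is attempted there.
async fn terminate(child: &mut Child) {
    if let Err(e) = child.kill().await {
        log::error!("failed to kill llama-server process: {e}");
    }
    match child.wait().await {
        Ok(status) => log::info!("llama-server exited with status {status}"),
        Err(e) => log::error!("failed to wait for llama-server exit: {e}"),
    }
}
```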
Nguyen Ngoc Minh
ee582a8e52
chore: allow all HTTPS image sources in img-src directive (#5970) 2025-07-29 20:04:35 +07:00
Akarshan Biswas
f61ce886a0
feat: Enhance port selection with availability check (#5966)
This change improves the robustness of the llama.cpp extension's server port selection.

Previously, the `getRandomPort()` method only checked for ports already in use by active sessions, which could lead to model load failures if the chosen port was occupied by another external process.

This change introduces a new Tauri command, `is_port_available`, which performs a system-level check to ensure the randomly selected port is truly free before attempting to start the llama-server. It also adds a retry mechanism with a maximum number of attempts (20,000) to find an available port, throwing an error if no suitable port is found within the specified range after all attempts.

This enhancement prevents port conflicts and improves the reliability and user experience of the llama.cpp extension within Jan.

Closes #5965
2025-07-29 18:01:52 +05:30
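A plausible shape for the `is_port_available` command named above, assuming a simple bind probe on the loopback address; the actual implementation may differ.

```rust
use std::net::TcpListener;

/// System-level check that a port is actually free,
/// not merely unused by the extension's own sessions.
#[tauri::command]
fn is_port_available(port: u16) -> bool {
    // If binding succeeds, the listener is dropped immediately and the port stays free.
    TcpListener::bind(("127.0.0.1", port)).is_ok()
}
```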
Nguyen Ngoc Minh
eb714776ba
fix: csp including img.shields.io and cdn-uploads.huggingface.co in img-src directive (#5967)
* fix: csp including img.shields.io in img-src directive

* fix: add huggingface upload cdn to img-src directive
2025-07-29 16:30:00 +07:00
Louis
812a8082b8
fix: factory reset fail with access denied error (#5952)
* fix: factory reset fail due to access denied error

* fix: unused import

* fix: tests
2025-07-28 23:20:45 +07:00
Nguyen Ngoc Minh
a4e5973573
chore: uninstall when upgrading windows installer (#5945) 2025-07-28 14:09:13 +07:00
Nguyen Ngoc Minh
c3fa04fdd7
chore: revert back to passive mode on windows installer (#5934) 2025-07-26 22:29:58 +07:00
Akarshan Biswas
1d0bb53f2a
feat: add support for querying available backend devices (#5877)
* feat: add support for querying available backend devices

This change introduces a new `get_devices` method to the `llamacpp_extension` engine that allows the frontend to query and display a list of available devices (e.g., Vulkan, CUDA, SYCL) from the compiled `llama-server` binary.

* Added `DeviceList` interface to represent GPU/device metadata.
* Implemented `getDevices(): Promise<DeviceList[]>` method.

  * Splits `version/backend`, ensures backend is ready.
  * Invokes the new Tauri command `get_devices`.

* Introduced a new `get_devices` Tauri command.
* Parses `llama-server --list-devices` output to extract available devices with memory info.
* Introduced `DeviceInfo` struct (`id`, `name`, `mem`, `free`) and exposed it via serialization.
* Robust parsing logic using string processing (non-regex) to locate memory stats.
* Registered the new command in the `tauri::Builder` in `lib.rs`.

* Fixed logic to correctly parse multiple devices from the llama-server output.
* Handles common failure modes: binary not found, malformed memory info, etc.

This sets the foundation for device selection, memory-aware model loading, and improved diagnostics in Jan AI engine setup flows.

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-07-23 19:20:12 +05:30
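A rough sketch of the `DeviceInfo` struct and a string-based (non-regex) parser in the spirit of the commit above; the `--list-devices` line format assumed in the comment is a guess, not the real llama-server output.

```rust
use serde::Serialize;

#[derive(Debug, Serialize)]
pub struct DeviceInfo {
    pub id: String,
    pub name: String,
    pub mem: u64,  // total memory in MiB
    pub free: u64, // free memory in MiB
}

/// Parse one device line using plain string processing (no regex).
/// Assumed line shape (hypothetical): "Vulkan0: NVIDIA GeForce RTX 3090 (24576 MiB, 24000 MiB free)"
fn parse_device_line(line: &str) -> Option<DeviceInfo> {
    let (id, rest) = line.trim().split_once(':')?;
    let (name, mem_part) = rest.trim().rsplit_once('(')?;
    let mem_part = mem_part.trim_end_matches(')');
    let (total, free) = mem_part.split_once(',')?;

    let parse_mib = |s: &str| {
        s.trim()
            .trim_end_matches("MiB free")
            .trim_end_matches("MiB")
            .trim()
            .parse::<u64>()
            .ok()
    };

    Some(DeviceInfo {
        id: id.trim().to_string(),
        name: name.trim().to_string(),
        mem: parse_mib(total)?,
        free: parse_mib(free)?,
    })
}
```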
Louis
3afdd0fa1d
fix: tmp download file should be removed on cancel (#5849) 2025-07-23 12:52:34 +07:00
Akarshan Biswas
1eaec5e4f6
Fix: engine unable to find dlls when running on Windows (#5863)
* Fix: Windows llamacpp not picking up dlls from lib repo

* Fix lib path on Windows

* Add debug info about lib_path

* Normalize lib_path for Windows

* fix Windows lib path normalization

* fix: missing cuda dll files on windows

* throw backend setup errors to UI

* Fix format

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* feat: add logger to llamacpp-extension

* fix: platform check

---------

Co-authored-by: Louis <louis@jan.ai>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-07-22 20:05:24 +05:30
Nguyen Ngoc Minh
7d3811f879
chore: update build appimage script (#5866)
* chore: update new appimage kit url

* chore: add error handling for appimagetool download
2025-07-22 21:02:25 +07:00
Akarshan Biswas
f59739d2b0
refactor: Improve Llama.cpp backend management and auto-update (#5845)
* refactor: Improve Llama.cpp backend management and auto-update

This commit refactors the Llama.cpp extension to enhance backend management and streamline the auto-update process.

Key changes include:

Refactored configureBackends: The logic for determining the best available backend and populating settings is now more modular, preventing duplicate executions.

Dedicated Auto-update Handling: Introduced a handleAutoUpdate method to encapsulate the auto-update logic, including downloading the latest available backend and updating the internal configuration and settings.

Robust Old Backend Cleanup: The removeOldBackends method is improved to ensure only the currently used backend version and type are kept, effectively managing disk space. A delay is added for Windows to prevent file conflicts during cleanup.

Final Installation Check: An ensureFinalBackendInstallation method is added to guarantee the selected backend is installed, acting as a final safeguard after auto-update or if auto-update is disabled.

Minor Fixes:

Added console.log for save_path during decompression for better debugging.

Ensured the output directory exists before decompression in the Rust backend.

Removed extraneous console log for session info.

Updated Cargo.toml and tauri.conf.json versions.

These changes lead to a more reliable and efficient Llama.cpp backend experience within the application, particularly for users with auto-update enabled.

* fix isBackendInstalled parameters

* Address bot's comments

* Address bot comments about using a try/finally block
2025-07-22 14:35:34 +05:30
Nguyen Ngoc Minh
fceecffed7
feat: add vcruntime for windows installer (#5852) 2025-07-22 12:38:00 +07:00
Nguyen Ngoc Minh
9ea081576b
fix: custom tauri nsis template CheckIfAppIsRunning macro (#5840)
* fix: update CheckIfAppIsRunning macro to include args
2025-07-21 20:54:06 +07:00
Akarshan Biswas
08de0fa42d
fix: prevent terminal window from opening on model load on WindowsOS (#5837)
On Windows, spawning the llamacpp server was causing an unwanted terminal window
to appear. This is now fixed by combining `CREATE_NO_WINDOW` with
`CREATE_NEW_PROCESS_GROUP` using `.creation_flags(...)`, ensuring that the
process runs in the background without a console window.

This change only applies to 64-bit Windows builds.
2025-07-21 13:24:31 +05:30
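A minimal sketch of the creation-flags fix described in the commit above, assuming a tokio `Command`; the function name, binary path, and arguments are placeholders.

```rust
#[cfg(all(windows, target_arch = "x86_64"))]
fn spawn_server(binary: &str, args: &[String]) -> std::io::Result<tokio::process::Child> {
    // Standard Windows process-creation flags.
    const CREATE_NO_WINDOW: u32 = 0x0800_0000;
    const CREATE_NEW_PROCESS_GROUP: u32 = 0x0000_0200;

    let mut cmd = tokio::process::Command::new(binary);
    cmd.args(args)
        // No console window, plus a separate process group, matching the flags named in the commit.
        .creation_flags(CREATE_NO_WINDOW | CREATE_NEW_PROCESS_GROUP);
    cmd.spawn()
}
```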
Louis
19cb1c96e0
fix: llama.cpp backend download on windows (#5813)
* fix: llama.cpp backend download on windows

* test: add missing cases

* clean: linter

* fix: build
2025-07-20 16:58:09 +07:00
Louis
c550f6cf0d
Merge pull request #5809 from menloresearch/refactor/simplify-proxy-settings
refactor: simplify proxy settings by removing unused SSL verification options
2025-07-19 16:34:37 +07:00
Akarshan
59ad2eb784
Merge branch 'dev' into release/v0.6.6 2025-07-18 18:29:20 +05:30