5428 Commits

Author SHA1 Message Date
Louis
1fc37a9349
fix: migrate app settings to the new version (#5936)
* fix: migrate app settings to the new version

* fix: edge cases

* fix: migrate HF import model on Windows

* fix: hardware page broken after downgrade

* test: correct test

* fix: backward compatible hardware info
2025-07-27 21:13:05 +07:00
Akarshan Biswas
c9b44eec52
fix: Remove sInfo from activeSessions before unloading (#5938)
This commit addresses a potential race condition that could lead to "connection errors" when unloading a llamacpp model.

The issue arose because the `activeSessions` map still held the model's session info during unload. Requests that arrived while the backend was still tearing the model down could therefore fail with connection errors.

The fix involves:

1. **Deleting the `pid` from `activeSessions` before calling the backend's unload:** This ensures the model is removed from the map before unloading begins.
2. **Failure handling:** If the backend fails to unload, the session info for that model is added back so the map stays consistent and no race is reintroduced (see the sketch below).

This commit improves the robustness and reliability of the unloading process by preventing potential conflicts.
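
A minimal sketch of the delete-before-unload pattern described here, assuming a hypothetical `activeSessions` map keyed by `pid` and an `unloadOnBackend` helper (names are illustrative, not the extension's actual API):

```typescript
// Hypothetical shape of a tracked llama.cpp session.
interface SessionInfo {
  pid: number
  modelId: string
  port: number
}

// Illustrative map of running sessions, keyed by process id.
const activeSessions = new Map<number, SessionInfo>()

// Stand-in for the real backend unload call.
declare function unloadOnBackend(sInfo: SessionInfo): Promise<void>

async function unloadModel(pid: number): Promise<void> {
  const sInfo = activeSessions.get(pid)
  if (!sInfo) return

  // Remove the session first so in-flight requests no longer see a "loaded"
  // model while the backend is still tearing it down.
  activeSessions.delete(pid)

  try {
    await unloadOnBackend(sInfo)
  } catch (err) {
    // If the backend failed to unload, restore the entry so state stays consistent.
    activeSessions.set(pid, sInfo)
    throw err
  }
}
```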
2025-07-27 14:37:34 +05:30
Faisal Amir
54d44ce741
fix: update default GPU toggle, and simplify state (#5937) 2025-07-27 14:36:08 +07:00
Nguyen Ngoc Minh
c3fa04fdd7
chore: revert back to passive mode on windows installer (#5934) 2025-07-26 22:29:58 +07:00
Faisal Amir
b89d9d090f
fix: update ui version_backend, mem usage hardware (#5932)
* fix: update ui version_backend, mem usage hardware

* chore: hide gpu from system monitor on mac

* chore: fix gpus vram
2025-07-26 18:36:18 +07:00
Akarshan Biswas
8ec4a36826
fix: Frontend updates when llama.cpp backend auto-downloads (#5926) 2025-07-26 08:48:29 +07:00
Faisal Amir
2e870ad4d0
fix: calculation memory on hardware and system monitor (#5922) 2025-07-26 08:47:59 +07:00
Faisal Amir
7dec980630
fix: persist model capabilities refresh app (#5918) 2025-07-25 20:27:51 +07:00
Faisal Amir
6c15129ce8
fix: validate name assistant and improve area clickable (#5920) 2025-07-25 20:27:38 +07:00
Akarshan Biswas
3982ed4c6f
fix: Allow N-GPU Layers (NGL) to be set to 0 in llama.cpp (#5907)
* fix: Allow N-GPU Layers (NGL) to be set to 0 in llama.cpp

The `n_gpu_layers` (NGL) setting in the llama.cpp extension was incorrectly preventing users from disabling GPU layers by automatically defaulting to 100 when set to 0.

This was caused by a condition that only pushed `cfg.n_gpu_layers` if it was greater than 0 (`cfg.n_gpu_layers > 0`).

This commit updates the condition to `cfg.n_gpu_layers >= 0`, allowing 0 to be a valid and accepted value for NGL. This ensures that users can effectively disable GPU offloading when desired.
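
A minimal sketch of the corrected check, assuming a hypothetical `cfg` object and an `args` array being assembled for `llama-server` (flag spelling and helper are illustrative):

```typescript
interface LlamacppConfig {
  // 0 means "offload no layers to the GPU"; undefined means "not set".
  n_gpu_layers?: number
}

function buildNglArgs(cfg: LlamacppConfig): string[] {
  const args: string[] = []
  // Before: `cfg.n_gpu_layers > 0` silently dropped an explicit 0, letting the
  // default (100) take over. `>= 0` keeps 0 as a valid, accepted value.
  if (cfg.n_gpu_layers !== undefined && cfg.n_gpu_layers >= 0) {
    args.push('--n-gpu-layers', String(cfg.n_gpu_layers))
  }
  return args
}
```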

* fix: default ngl

---------

Co-authored-by: Louis <louis@jan.ai>
2025-07-25 16:24:53 +05:30
Louis
0c53ad0e16
fix: models hub should show latest data only (#5925)
* fix: models hub should show latest data only

* test: correct expected result
2025-07-25 17:34:14 +07:00
Akarshan Biswas
4d4cf896af
fix: Persist 'Auto-Unload Old Models' setting in llama.cpp (#5906)
The 'Auto-Unload Old Models' setting in the llama.cpp extension failed to persist due to a typo in its key name within `settings.json`. The key was incorrectly `auto_unload_models` instead of `auto_unload`.

This commit corrects the key name to `auto_unload`, ensuring that user-configured changes to this setting are properly saved, retrieved, and persist across application restarts.

This resolves the issue where the setting would revert to its previous value after being changed.
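
A small illustration of the mismatch, assuming a hypothetical settings lookup helper (only the key names come from the commit):

```typescript
// Stand-in for the extension's settings read; the value is persisted in settings.json.
declare function getSetting<T>(key: string, fallback: T): T

// Before: read under the wrong key, so the persisted value was never found
// and the setting appeared to reset on restart.
// const autoUnload = getSetting('auto_unload_models', true)

// After: the key matches what settings.json actually stores.
const autoUnload = getSetting('auto_unload', true)
```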
2025-07-25 11:03:15 +05:30
Akarshan Biswas
a1af70f7a9
feat: Enhance Llama.cpp backend management with persistence (#5886)
* feat: Enhance Llama.cpp backend management with persistence

This commit introduces significant improvements to how the Llama.cpp extension manages and updates its backend installations, focusing on user preference persistence and smarter auto-updates.

Key changes include:

* **Persistent Backend Type Preference:** The extension now stores the user's preferred backend type (e.g., `cuda`, `cpu`, `metal`) in `localStorage`. This ensures that even after updates or restarts, the system attempts to use the user's previously selected backend type, if available.
* **Intelligent Auto-Update:** The auto-update mechanism has been refined to prioritize updating to the **latest version of the *currently selected backend type*** rather than always defaulting to the "best available" backend (which might change). This respects user choice while keeping the chosen backend type up-to-date.
* **Improved Initial Installation/Configuration:** For fresh installations or cases where the `version_backend` setting is invalid, the system now intelligently determines and installs the best available backend, then persists its type.
* **Refined Old Backend Cleanup:** The `removeOldBackends` function has been renamed to `removeOldBackend` and modified to specifically clean up *older versions of the currently selected backend type*, preventing the accumulation of unnecessary files while preserving other backend types the user might switch to.
* **Robust Local Storage Handling:** New private methods (`getStoredBackendType`, `setStoredBackendType`, `clearStoredBackendType`) are introduced to safely interact with `localStorage`, including error handling for potential `localStorage` access issues.
* **Version Filtering Utility:** A new utility `findLatestVersionForBackend` helps in identifying the latest available version for a specific backend type from a list of supported backends.

These changes provide a more stable, user-friendly, and maintainable backend management experience for the Llama.cpp extension.
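
A condensed sketch of the persistence helpers described above, assuming browser `localStorage` is available and using an illustrative storage key; the write-if-changed check mirrors the optimization noted later in this message:

```typescript
const BACKEND_TYPE_KEY = 'llamacpp_backend_type' // illustrative key name

function getStoredBackendType(): string | null {
  try {
    return localStorage.getItem(BACKEND_TYPE_KEY)
  } catch {
    // localStorage can be unavailable (private mode, quota, ...); treat as unset.
    return null
  }
}

function setStoredBackendType(backendType: string): void {
  try {
    // Skip the write when nothing changed to avoid redundant storage operations.
    if (localStorage.getItem(BACKEND_TYPE_KEY) === backendType) return
    localStorage.setItem(BACKEND_TYPE_KEY, backendType)
  } catch {
    // A failed write only loses the preference, not functionality.
  }
}

function clearStoredBackendType(): void {
  try {
    localStorage.removeItem(BACKEND_TYPE_KEY)
  } catch {
    // Nothing to clear if storage is unavailable.
  }
}

// Illustrative version filter: pick the newest "version/backend" entry that
// matches the requested backend type (real code may compare versions numerically).
function findLatestVersionForBackend(
  supported: string[], // e.g. ['b4567/cuda', 'b4570/cuda', 'b4570/cpu']
  backendType: string
): string | undefined {
  return supported
    .filter((vb) => vb.split('/')[1] === backendType)
    .sort()
    .pop()
}
```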

Fixes: #5883

* fix: cortex models migration should be done once

* feat: Optimize Llama.cpp backend preference storage and UI updates

This commit refines the Llama.cpp extension's backend management by:

* **Optimizing `localStorage` Writes:** The system now only writes the backend type preference to `localStorage` if the new value is different from the currently stored one. This reduces unnecessary `localStorage` operations.
* **Ensuring UI Consistency on Initial Setup:** When a fresh installation or an invalid backend configuration is detected, the UI settings are now explicitly updated to reflect the newly determined `effectiveBackendString`, ensuring the displayed setting matches the active configuration.

These changes improve performance by reducing redundant storage operations and enhance user experience by maintaining UI synchronization with the backend state.

* Revert "fix: provider settings should be refreshed on page load (#5887)"

This reverts commit ce6af62c7df4a7e7ea8c0896f307309d6bf38771.

* fix: add loader version backend llamacpp

* fix: wrong key name

* fix: model setting issues

* fix: virtual dom hub

* chore: cleanup

* chore: hide device offload setting

---------

Co-authored-by: Louis <louis@jan.ai>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-07-24 18:33:35 +07:00
hiento09
d51f904826
chore: update cua mac runner (#5888) 2025-07-24 16:25:02 +07:00
Louis
ce6af62c7d
fix: provider settings should be refreshed on page load (#5887) 2025-07-24 14:30:33 +07:00
Faisal Amir
5d00cf652a
🐛fix: get system info and system usage (#5884) 2025-07-24 12:39:10 +07:00
Faisal Amir
399671488c
fix: gpu detected from backend version (#5882)
* fix: gpu detected from backend version

* chore: remove readonly props from dynamic field
2025-07-24 10:45:48 +07:00
Louis
6599d91660
fix: bring back HF repo ID search in Hub (#5880)
* fix: bring back HF search input

* test: fix useModelSources tests for updated addSource signature
2025-07-24 09:46:13 +07:00
Nguyen Ngoc Minh
d8b6b10870
chore: revert app artifact name for macos linux and windows builds (#5878) 2025-07-23 21:27:56 +07:00
Akarshan Biswas
1d0bb53f2a
feat: add support for querying available backend devices (#5877)
* feat: add support for querying available backend devices

This change introduces a new `get_devices` method to the `llamacpp_extension` engine that allows the frontend to query and display a list of available devices (e.g., Vulkan, CUDA, SYCL) from the compiled `llama-server` binary.

* Added `DeviceList` interface to represent GPU/device metadata.
* Implemented `getDevices(): Promise<DeviceList[]>` method.

  * Splits `version/backend`, ensures backend is ready.
  * Invokes the new Tauri command `get_devices`.

* Introduced a new `get_devices` Tauri command.
* Parses `llama-server --list-devices` output to extract available devices with memory info.
* Introduced `DeviceInfo` struct (`id`, `name`, `mem`, `free`) and exposed it via serialization.
* Robust parsing logic using string processing (non-regex) to locate memory stats.
* Registered the new command in the `tauri::Builder` in `lib.rs`.

* Fixed logic to correctly parse multiple devices from the llama-server output.
* Handles common failure modes: binary not found, malformed memory info, etc.

This sets the foundation for device selection, memory-aware model loading, and improved diagnostics in Jan AI engine setup flows.
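
A sketch of the frontend half described above, assuming the `DeviceList` fields mirror the Rust `DeviceInfo` struct (`id`, `name`, `mem`, `free`) and that the Tauri command is invoked roughly like this (argument names and the readiness helper are illustrative):

```typescript
import { invoke } from '@tauri-apps/api/core'

// Mirrors the Rust DeviceInfo struct: device id, display name, total and free memory.
export interface DeviceList {
  id: string
  name: string
  mem: number
  free: number
}

// Stand-in for the extension's own backend readiness check.
declare function ensureBackendReady(version: string, backend: string): Promise<void>

async function getDevices(versionBackend: string): Promise<DeviceList[]> {
  // Split the "version/backend" setting, e.g. "b4570/vulkan".
  const [version, backend] = versionBackend.split('/')
  if (!version || !backend) {
    throw new Error(`Invalid version/backend string: ${versionBackend}`)
  }
  await ensureBackendReady(version, backend)
  return invoke<DeviceList[]>('get_devices', { version, backend })
}
```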

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-07-23 19:20:12 +05:30
Louis
d6ad797769
fix: llama.cpp backend shows blank list sometime (#5876) 2025-07-23 20:04:38 +07:00
Nguyen Ngoc Minh
9a511fd5fa
ci: resolve nested template expression in artifact names (#5875)
* ci: update artifact name for Linux and Windows build

* ci: enhance logic for naming convention for mac, linux and windows builds

* fix: resolve nested template expression in artifact names
2025-07-23 17:48:33 +07:00
Nguyen Ngoc Minh
3a8af3c24d
ci: autoqa github artifact (#5873)
* ci: add upload recordings and logs github artifact

* chore: update version actions upload artifact
2025-07-23 14:33:48 +07:00
Louis
af116dd7dc
fix: jan should have a general assistant instruction (#5872)
* fix: default Jan assistant prompt

* test: update tests
2025-07-23 13:55:20 +07:00
Louis
3afdd0fa1d
fix: tmp download file should be removed on cancel (#5849) 2025-07-23 12:52:34 +07:00
Faisal Amir
43b7eb6e18
🐛fix: remove sampling parameters from llamacpp extension (#5871) 2025-07-23 12:13:42 +07:00
Faisal Amir
fd26270e78
🐛fix/update vulkan active syntax (#5869) 2025-07-23 11:45:54 +07:00
Louis
3e30c61fb0
fix: app should refresh local provider models list on launch (#5868) 2025-07-23 08:36:09 +07:00
Louis
fe95031c6e
feat: migrate cortex models to llamacpp extension (#5838)
* feat: migrate cortex models to new llama.cpp extension

* test: add tests

* clean: remove duplicated import
2025-07-22 23:35:08 +07:00
Nguyen Ngoc Minh
5cbd79b525
fix: charmap encoding (#5865)
* fix: handle charmap encoding error

* enhancement: prompt template for new user flow
2025-07-22 23:33:12 +07:00
Louis
d347058d6b
fix: HuggingFace provider should be non-deletable (#5856)
* fix: HuggingFace provider should be non-deletable

* refactor: rename const folder

* test: correct test case
2025-07-22 23:32:37 +07:00
Louis
8e9cd2566b
fix: gemini tool call support (#5848) 2025-07-22 23:25:43 +07:00
Akarshan Biswas
1eaec5e4f6
Fix: engine unable to find dlls when running on Windows (#5863)
* Fix: Windows llamacpp not picking up dlls from lib repo

* Fix lib path on Windows

* Add debug info about lib_path

* Normalize lib_path for Windows

* fix Windows lib path normalization

* fix: missing cuda dll files on windows

* throw backend setup errors to UI

* Fix format

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* feat: add logger to llamacpp-extension

* fix: platform check

---------

Co-authored-by: Louis <louis@jan.ai>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-07-22 20:05:24 +05:30
Nguyen Ngoc Minh
7d3811f879
chore: update build appimage script (#5866)
* chore: update new appimage kit url

* chore: add error handling for appimagetool download
2025-07-22 21:02:25 +07:00
Faisal Amir
5553096bc4
enhancement: dialog model error trigger from provider screen and improve copy button (#5858) 2025-07-22 20:36:01 +07:00
Faisal Amir
1d443e1f7d
fix: support load model configurations (#5843)
* fix: support load model configurations

* chore: remove log

* chore: sampling params add from send completion

* chore: remove comment

* chore: remove comment on predefined file

* chore: update test model service
2025-07-22 19:52:12 +07:00
Faisal Amir
7b3b6cc8be
🐛fix: delete all should not include fav thread (#5864) 2025-07-22 19:51:59 +07:00
hiento09
1dd5b810c2
Chore: enrich autoqa log (#5862)
* chore: add app log upload to reportportal
2025-07-22 16:13:00 +07:00
Akarshan Biswas
f59739d2b0
refactor: Improve Llama.cpp backend management and auto-update (#5845)
* refactor: Improve Llama.cpp backend management and auto-update

This commit refactors the Llama.cpp extension to enhance backend management and streamline the auto-update process.

Key changes include:

Refactored `configureBackends`: The logic for determining the best available backend and populating settings is now more modular, preventing duplicate executions.

Dedicated Auto-update Handling: Introduced a `handleAutoUpdate` method to encapsulate the auto-update logic, including downloading the latest available backend and updating the internal configuration and settings.

Robust Old Backend Cleanup: The `removeOldBackends` method is improved to ensure only the currently used backend version and type are kept, effectively managing disk space. A delay is added for Windows to prevent file conflicts during cleanup.

Final Installation Check: An `ensureFinalBackendInstallation` method is added to guarantee the selected backend is installed, acting as a final safeguard after auto-update or when auto-update is disabled.

Minor Fixes:

Added console.log for save_path during decompression for better debugging.

Ensured the output directory exists before decompression in the Rust backend.

Removed extraneous console log for session info.

Updated Cargo.toml and tauri.conf.json versions.

These changes lead to a more reliable and efficient Llama.cpp backend experience within the application, particularly for users with auto-update enabled.
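
A high-level sketch of the flow described in this message, with hypothetical helpers standing in for the extension's real methods:

```typescript
// Illustrative configuration: versionBackend is a "version/backend" string, e.g. "b4570/cuda".
interface BackendConfig {
  versionBackend: string
  autoUpdate: boolean
}

declare function findBestAvailableBackend(): Promise<string>
declare function downloadBackend(versionBackend: string): Promise<void>
declare function isBackendInstalled(versionBackend: string): Promise<boolean>
declare function deleteBackendDir(versionBackend: string): Promise<void>
declare function listInstalledBackends(): Promise<string[]>
declare function isWindows(): boolean

// Keep only the backend currently in use; everything else is cleaned up.
async function removeOldBackends(current: string): Promise<void> {
  for (const vb of (await listInstalledBackends()).filter((vb) => vb !== current)) {
    // On Windows, a short delay helps avoid deleting files that are still held open.
    if (isWindows()) await new Promise((resolve) => setTimeout(resolve, 1000))
    await deleteBackendDir(vb)
  }
}

async function handleAutoUpdate(cfg: BackendConfig): Promise<string> {
  const latest = await findBestAvailableBackend()
  if (latest !== cfg.versionBackend) {
    await downloadBackend(latest)
    cfg.versionBackend = latest // also reflected in the settings UI
  }
  return cfg.versionBackend
}

// Final safeguard: whatever ends up selected must actually be installed.
async function ensureFinalBackendInstallation(versionBackend: string): Promise<void> {
  if (!(await isBackendInstalled(versionBackend))) {
    await downloadBackend(versionBackend)
  }
}

async function configureBackends(cfg: BackendConfig): Promise<void> {
  const selected = cfg.autoUpdate ? await handleAutoUpdate(cfg) : cfg.versionBackend
  await ensureFinalBackendInstallation(selected)
  await removeOldBackends(selected)
}
```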

* fix isBackendInstalled parameters

* Address bot's comments

* Address bot comments about using a try/finally block
2025-07-22 14:35:34 +05:30
Nguyen Ngoc Minh
e3813ab1af
fix: autoqa prompt template (#5854) 2025-07-22 13:34:43 +07:00
Louis
e424938e02
Merge branch 'dev' into release/v0.6.6
# Conflicts:
#	.github/workflows/template-tauri-build-windows-x64.yml
#	Makefile
#	extensions/engine-management-extension/engines.mjs
2025-07-22 13:18:00 +07:00
Nguyen Ngoc Minh
fceecffed7
feat: add vcruntime for windows installer (#5852) 2025-07-22 12:38:00 +07:00
Faisal Amir
25952f293c
enhancement: auto focus always allow action from tool approval dialog and add req parameters (#5836)
* enhancement: auto focus always allow action from tool approval dialog

* chore: error handling tools parameters

* chore: update test button focus cases
2025-07-22 12:17:53 +07:00
Faisal Amir
78df0a20ec
enhancement: better error page component (#5834)
* enhancement: better error page component

* chore: typo and useless space
2025-07-22 12:17:44 +07:00
Nguyen Ngoc Minh
af892428a5
chore: sync make build with dev (#5847)
* chore: sync up make build with dev

* ci: update macOS self-hosted runner
2025-07-22 11:12:14 +07:00
Nguyen Ngoc Minh
e82e5e1da9
refactor: standardize build process and remove build-tauri target (#5846) 2025-07-22 00:01:48 +07:00
Nguyen Ngoc Minh
9ea081576b
fix: custom tauri nsis template CheckIfAppIsRunning macro (#5840)
* fix: update CheckIfAppIsRunning macro to include args
2025-07-21 20:54:06 +07:00
Nguyen Ngoc Minh
275cab7538
Merge pull request #5839 from menloresearch/fix/appimage-url-with-latest-tauri-cli
fix: update @tauri-apps/cli to newest version to fix appimage download
2025-07-21 03:41:27 -07:00
Minh141120
db962b2ba6 fix: update @tauri-apps/cli to newest version to fix appimage download issue 2025-07-21 16:32:27 +07:00
Akarshan Biswas
08de0fa42d
fix: prevent terminal window from opening on model load on WindowsOS (#5837)
On Windows, spawning the llamacpp server was causing an unwanted terminal window
to appear. This is now fixed by combining `CREATE_NO_WINDOW` with
`CREATE_NEW_PROCESS_GROUP` using `.creation_flags(...)`, ensuring that the
process runs in the background without a console window.

This change only applies to 64-bit Windows builds.
2025-07-21 13:24:31 +05:30