5457 Commits

Author SHA1 Message Date
Louis
804b0f0116
fix: should not include reasoning text in the chat completion request v0.6.7 2025-08-06 17:34:34 +07:00
Faisal Amir
4727132d3c
fix: gpt-oss thinking block (#6071) 2025-08-06 16:10:45 +07:00
Faisal Amir
4f5bde4964
fix: react state loop from hooks useMediaQuery (#6031)
* fix: react state loop from hooks useMediaQuery

* chore: update test cases hooks media query
2025-08-06 14:01:12 +07:00
Louis
4bcfa84d75
Merge pull request #6008 from menloresearch/hotfix/regression-issue-with-colon-in-model-name
hotfix: regression issue with colon in model name
v0.6.6
2025-07-31 17:55:28 +07:00
Louis
8a7edbf3a7
Merge pull request #6005 from menloresearch/fix/save_my_life
Add RunEvent::Exit event to Tauri to handle macOS context menu exit
2025-07-31 16:05:22 +07:00
Akarshan
e11b4c9449
restore extras to its original state 2025-07-31 15:41:44 +07:00
Akarshan
e76d207718
Fixup: tauri::WindowEvent 2025-07-31 15:41:43 +07:00
Akarshan
b3e8201481
Add RunEvent::Exit event to Tauri to handle macOS context menu exit 2025-07-31 15:41:43 +07:00
Faisal Amir
59a17d4a2a
fix/remove-auto-refresh-model (#6002) 2025-07-31 14:07:31 +07:00
Louis
76bcf33f80
fix: generate response button disappears on tool call (#5988)
* fix: generate a response button should appear when an incomplete tool call message is present

* fix: wording

* fix: do not send duplicate messages on regenerating

* fix: tests
2025-07-30 21:04:12 +07:00
Faisal Amir
f58d745585
fix: title tooltip MCP edit json (#5987)
* fix/title-tooltip-mcp-json

* fix: title tooltip delete mcp
2025-07-30 21:00:55 +07:00
Faisal Amir
1e7e572d4a
fix: download progress missing when left panel scrollable (#5984) 2025-07-30 18:36:42 +07:00
Louis
7a3d9d765c
fix: failed provider models list due to broken cortex import (#5983) 2025-07-30 17:37:44 +07:00
Akarshan Biswas
0aaaca05a4
fix: use direct process termination instead of console events on Windows (#5972)
* fix: remove CREATE_NEW_PROCESS_GROUP flag for proper Ctrl-C handling

CREATE_NEW_PROCESS_GROUP prevented GenerateConsoleCtrlEvent from working,
causing graceful shutdown failures. Removed to enable proper signal handling.

* Revert "fix: remove CREATE_NEW_PROCESS_GROUP flag for proper Ctrl-C handling"

This reverts commit 82ace3e72e4bf7338f422d5c79bdd6a0f8a2440e.

* fix: use direct process termination instead of console events

Simplified Windows process cleanup by removing console attachment logic
and using direct child.kill() method. More reliable for headless processes.

* Fix missing imports

* switch to tokio::time

* Don't wait while forcefully terminating the process using the kill API on Windows

Disabled use of the windows-sys crate, as graceful shutdown on Windows is unreliable in this context.

Updated cleanup.rs and server.rs to directly call child.kill().await for terminating processes on Windows.

Improved logging for process termination and error handling during kill and wait.

Removed timeout-based graceful shutdown attempt on Windows since TerminateProcess is inherently forceful and immediate.

This ensures more predictable process cleanup behavior on Windows platforms.

* final cleanups
2025-07-30 10:09:20 +05:30
Faisal Amir
079759939a
fix: rename thread dialog shows previous thread (#5963) 2025-07-30 09:18:43 +07:00
Nguyen Ngoc Minh
ee582a8e52
chore: allow all HTTPS image sources in img-src directive (#5970) 2025-07-29 20:04:35 +07:00
Akarshan Biswas
f61ce886a0
feat: Enhance port selection with availability check (#5966)
This change improves the robustness of the llama.cpp extension's server port selection.

Previously, the `getRandomPort()` method only checked for ports already in use by active sessions, which could lead to model load failures if the chosen port was occupied by another external process.

This change introduces a new Tauri command, `is_port_available`, which performs a system-level check to ensure the randomly selected port is truly free before attempting to start the llama-server. It also adds a retry mechanism with a maximum number of attempts (20,000) to find an available port, throwing an error if no suitable port is found within the specified range after all attempts.

This enhancement prevents port conflicts and improves the reliability and user experience of the llama.cpp extension within Jan.

Closes #5965
2025-07-29 18:01:52 +05:30
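A minimal sketch of the retry loop this describes, assuming a TypeScript extension calling the new `is_port_available` command through Tauri v2's `invoke`; the 20,000-attempt cap comes from the PR description, while the port range, `activePorts` bookkeeping, and import path are illustrative, not Jan's actual code:

```typescript
import { invoke } from '@tauri-apps/api/core'

// Illustrative bounds; only the 20,000-attempt cap is from the PR description.
const PORT_MIN = 1024
const PORT_MAX = 65535
const MAX_ATTEMPTS = 20000

const activePorts = new Set<number>() // ports held by active sessions

async function getRandomPort(): Promise<number> {
  for (let attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
    const port = PORT_MIN + Math.floor(Math.random() * (PORT_MAX - PORT_MIN + 1))
    // Skip ports already handed out to our own sessions.
    if (activePorts.has(port)) continue
    // System-level check via the new Tauri command: is the port truly free?
    const free = await invoke<boolean>('is_port_available', { port })
    if (free) {
      activePorts.add(port)
      return port
    }
  }
  throw new Error(`No available port found after ${MAX_ATTEMPTS} attempts`)
}
```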
Nguyen Ngoc Minh
eb714776ba
fix: csp including img.shields.io and cdn-uploads.huggingface.co in img-src directive (#5967)
* fix: csp including img.shields.io in img-src directive

* fix: add huggingface upload cdn to img-src directive
2025-07-29 16:30:00 +07:00
Nguyen Ngoc Minh
210ace79d5
ci: tolerate artifact upload (#5969) 2025-07-29 15:45:32 +07:00
Faisal Amir
63cb4fbf3b
fix: assistant with last used and fix metadata (#5955)
* fix: assistant with last used and fix metadata

* chore: revert instruction and desc

* chore: fix current assistant state

* chore: update assistant message metadata

* chore: update test case
2025-07-29 09:50:07 +07:00
Louis
160d158152
fix: model search results in hub should be sorted by weight (#5954) 2025-07-28 23:33:11 +07:00
Louis
812a8082b8
fix: factory reset fail with access denied error (#5952)
* fix: factory reset fail due to access denied error

* fix: unused import

* fix: tests
2025-07-28 23:20:45 +07:00
Akarshan Biswas
07421d7f53
fix: set autoUnload in onLoad() (#5956)
The variable was not initialized, which resulted in it always being set to true on startup.

This change fixes it.
2025-07-28 20:54:21 +05:30
Faisal Amir
1c74bfd5ef
fix: update edge case experimental feature MCP (#5951)
* fix: update edge case experimental feature MCP

* Update web-app/src/routes/settings/mcp-servers.tsx

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-07-28 21:31:51 +07:00
Akarshan Biswas
fa896b3bf3
fix: correctly apply auto_unload setting from config (#5953)
Previously, the `autoUnload` flag was not being updated when set via config,
causing models to be auto-unloaded regardless of the intended behavior.
This patch ensures the setting is respected at runtime.
2025-07-28 19:17:29 +05:30
Akarshan Biswas
432c942330
fix: Prevent race condition with auto-unload during rapid model loading (#5947)
This commit addresses a race condition where, with "Auto-Unload Old Models" enabled, rapidly attempting to load multiple models could result in more than one model being loaded simultaneously.

Previously, the unloading logic did not account for models that were still in the process of loading when a new load operation was initiated. This allowed new models to start loading before the previous ones had fully completed their unload cycle.

To resolve this:
- A `loadingModels` map has been introduced to track promises for models currently in the loading state.
- The `load` method now checks if a model is already being loaded and, if so, returns the existing promise, preventing duplicate load operations for the same model.
- The `performLoad` method (which encapsulates the actual loading logic) now ensures that when `autoUnload` is active, it waits for any *other* models that are concurrently loading to finish before proceeding to unload all currently loaded models. This guarantees that the auto-unload mechanism properly unloads all models, including those initiated in quick succession, thereby preventing the race condition.

This fixes the issue where clicking the start button very fast on multiple models would bypass the auto-unload functionality.
2025-07-28 12:59:48 +05:30
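A condensed sketch of the de-duplication scheme described above; the class shape, `performLoad` body, and `unloadAll` are placeholders standing in for the extension's real internals:

```typescript
// Sketch only: tracks in-flight loads so duplicates are coalesced and
// auto-unload waits for concurrent loads before unloading everything.
class LlamaCppLoader {
  private loadingModels = new Map<string, Promise<void>>()
  private autoUnload = true

  async load(modelId: string): Promise<void> {
    // If this model is already loading, return the in-flight promise
    // instead of starting a duplicate load.
    const inFlight = this.loadingModels.get(modelId)
    if (inFlight) return inFlight

    const promise = this.performLoad(modelId).finally(() => {
      this.loadingModels.delete(modelId)
    })
    this.loadingModels.set(modelId, promise)
    return promise
  }

  private async performLoad(modelId: string): Promise<void> {
    if (this.autoUnload) {
      // Wait for any *other* models still loading so they can be unloaded
      // too, then unload everything that is currently loaded.
      const others = [...this.loadingModels.entries()]
        .filter(([id]) => id !== modelId)
        .map(([, p]) => p.catch(() => {})) // ignore their failures here
      await Promise.all(others)
      await this.unloadAll()
    }
    // ...actual llama-server spawn would happen here...
  }

  private async unloadAll(): Promise<void> {
    /* unload every loaded model (placeholder) */
  }
}
```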
Nguyen Ngoc Minh
a4e5973573
chore: uninstall when upgrading windows installer (#5945) 2025-07-28 14:09:13 +07:00
Louis
fdaa3b1992
fix: openrouter unselects itself (#5943)
* fix: selected openrouter model does not work

* test: add tests to cover new change
2025-07-28 10:33:23 +07:00
Faisal Amir
08af8a49aa
fix: tool approval params scrollable (#5941) 2025-07-28 09:39:34 +07:00
Louis
1fc37a9349
fix: migrate app settings to the new version (#5936)
* fix: migrate app settings to the new version

* fix: edge cases

* fix: migrate HF import model on Windows

* fix: hardware page broken after downgrade

* test: correct test

* fix: backward compatible hardware info
2025-07-27 21:13:05 +07:00
Akarshan Biswas
c9b44eec52
fix: Remove sInfo from activeSessions before unloading (#5938)
This commit addresses a potential race condition that could lead to "connection errors" when unloading a llamacpp model.

The issue arose because the `activeSessions` map still held the model's session info during unload. This could lead to "connection errors" when the backend took time to unload while there was an ongoing request to the model.

The fix involves:

1. **Deleting the `pid` from `activeSessions` before calling backend's unload:** This ensures that the model is cleared from the map before we start unloading.
2. **Failure handling**: If somehow the backend fails to unload, the session info for that model is added back to prevent any race conditions.

This commit improves the robustness and reliability of the unloading process by preventing potential conflicts.
2025-07-27 14:37:34 +05:30
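Roughly, the ordering looks like the sketch below; the `SessionInfo` fields and the `unload_llama_model` command name are hypothetical stand-ins for the real API, with only the delete-before-unload and restore-on-failure steps taken from the description:

```typescript
import { invoke } from '@tauri-apps/api/core'

// Hypothetical session shape; keyed by the llama-server process pid.
interface SessionInfo {
  pid: number
  port: number
  modelId: string
}

const activeSessions = new Map<number, SessionInfo>()

async function unloadModel(sInfo: SessionInfo): Promise<void> {
  // 1. Remove the session *before* asking the backend to unload, so an
  //    in-flight request can no longer be routed to a dying process.
  activeSessions.delete(sInfo.pid)
  try {
    await invoke('unload_llama_model', { pid: sInfo.pid }) // hypothetical command
  } catch (e) {
    // 2. If the backend failed to unload, restore the session info so the
    //    still-running model is not left untracked.
    activeSessions.set(sInfo.pid, sInfo)
    throw e
  }
}
```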
Faisal Amir
54d44ce741
fix: update default GPU toggle, and simplify state (#5937) 2025-07-27 14:36:08 +07:00
Nguyen Ngoc Minh
c3fa04fdd7
chore: revert back to passive mode on windows installer (#5934) 2025-07-26 22:29:58 +07:00
Faisal Amir
b89d9d090f
fix: update ui version_backend, mem usage hardware (#5932)
* fix: update ui version_backend, mem usage hardware

* chore: hide gpu from system monitor on mac

* chore: fix gpus vram
2025-07-26 18:36:18 +07:00
Akarshan Biswas
8ec4a36826
fix: Frontend updates when llama.cpp backend auto-downloads (#5926) 2025-07-26 08:48:29 +07:00
Faisal Amir
2e870ad4d0
fix: calculation memory on hardware and system monitor (#5922) 2025-07-26 08:47:59 +07:00
Faisal Amir
7dec980630
fix: persist model capabilities refresh app (#5918) 2025-07-25 20:27:51 +07:00
Faisal Amir
6c15129ce8
fix: validate assistant name and improve clickable area (#5920) 2025-07-25 20:27:38 +07:00
Akarshan Biswas
3982ed4c6f
fix: Allow N-GPU Layers (NGL) to be set to 0 in llama.cpp (#5907)
* fix: Allow N-GPU Layers (NGL) to be set to 0 in llama.cpp

The `n_gpu_layers` (NGL) setting in the llama.cpp extension was incorrectly preventing users from disabling GPU layers by automatically defaulting to 100 when set to 0.

This was caused by a condition that only pushed `cfg.n_gpu_layers` if it was greater than 0 (`cfg.n_gpu_layers > 0`).

This commit updates the condition to `cfg.n_gpu_layers >= 0`, allowing 0 to be a valid and accepted value for NGL. This ensures that users can effectively disable GPU offloading when desired.

* fix: default ngl

---------

Co-authored-by: Louis <louis@jan.ai>
2025-07-25 16:24:53 +05:30
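The shape of the fix, sketched with an illustrative `cfg`/`args` structure rather than the extension's actual types; only the `> 0` to `>= 0` change is from the commit:

```typescript
// Illustrative config shape, not the extension's real one.
interface LlamaCppConfig {
  n_gpu_layers: number
}

function buildArgs(cfg: LlamaCppConfig): string[] {
  const args: string[] = []
  // Before: `cfg.n_gpu_layers > 0` silently dropped an explicit 0, so the
  // loader fell back to its default of 100. After: 0 passes through and
  // reaches llama-server as `-ngl 0`, disabling GPU offloading.
  if (cfg.n_gpu_layers >= 0) {
    args.push('-ngl', String(cfg.n_gpu_layers))
  }
  return args
}
```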
Louis
0c53ad0e16
fix: models hub should show latest data only (#5925)
* fix: models hub should show latest data only

* test: correct expected result
2025-07-25 17:34:14 +07:00
Akarshan Biswas
4d4cf896af
fix: Persist 'Auto-Unload Old Models' setting in llama.cpp (#5906)
The 'Auto-Unload Old Models' setting in the llama.cpp extension failed to persist due to a typo in its key name within `settings.json`. The key was incorrectly `auto_unload_models` instead of `auto_unload`.

This commit corrects the key name to `auto_unload`, ensuring that user-configured changes to this setting are properly saved, retrieved, and persist across application restarts.

This resolves the issue where the setting would revert to its previous value after being changed.
2025-07-25 11:03:15 +05:30
Akarshan Biswas
a1af70f7a9
feat: Enhance Llama.cpp backend management with persistence (#5886)
* feat: Enhance Llama.cpp backend management with persistence

This commit introduces significant improvements to how the Llama.cpp extension manages and updates its backend installations, focusing on user preference persistence and smarter auto-updates.

Key changes include:

* **Persistent Backend Type Preference:** The extension now stores the user's preferred backend type (e.g., `cuda`, `cpu`, `metal`) in `localStorage`. This ensures that even after updates or restarts, the system attempts to use the user's previously selected backend type, if available.
* **Intelligent Auto-Update:** The auto-update mechanism has been refined to prioritize updating to the **latest version of the *currently selected backend type*** rather than always defaulting to the "best available" backend (which might change). This respects user choice while keeping the chosen backend type up-to-date.
* **Improved Initial Installation/Configuration:** For fresh installations or cases where the `version_backend` setting is invalid, the system now intelligently determines and installs the best available backend, then persists its type.
* **Refined Old Backend Cleanup:** The `removeOldBackends` function has been renamed to `removeOldBackend` and modified to specifically clean up *older versions of the currently selected backend type*, preventing the accumulation of unnecessary files while preserving other backend types the user might switch to.
* **Robust Local Storage Handling:** New private methods (`getStoredBackendType`, `setStoredBackendType`, `clearStoredBackendType`) are introduced to safely interact with `localStorage`, including error handling for potential `localStorage` access issues.
* **Version Filtering Utility:** A new utility `findLatestVersionForBackend` helps in identifying the latest available version for a specific backend type from a list of supported backends.

These changes provide a more stable, user-friendly, and maintainable backend management experience for the Llama.cpp extension.

Fixes: #5883

* fix: cortex models migration should be done once

* feat: Optimize Llama.cpp backend preference storage and UI updates

This commit refines the Llama.cpp extension's backend management by:

* **Optimizing `localStorage` Writes:** The system now only writes the backend type preference to `localStorage` if the new value is different from the currently stored one. This reduces unnecessary `localStorage` operations.
* **Ensuring UI Consistency on Initial Setup:** When a fresh installation or an invalid backend configuration is detected, the UI settings are now explicitly updated to reflect the newly determined `effectiveBackendString`, ensuring the displayed setting matches the active configuration.

These changes improve performance by reducing redundant storage operations and enhance user experience by maintaining UI synchronization with the backend state.

* Revert "fix: provider settings should be refreshed on page load (#5887)"

This reverts commit ce6af62c7df4a7e7ea8c0896f307309d6bf38771.

* fix: add loader version backend llamacpp

* fix: wrong key name

* fix: model setting issues

* fix: virtual dom hub

* chore: cleanup

* chore: hide device offload setting

---------

Co-authored-by: Louis <louis@jan.ai>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-07-24 18:33:35 +07:00
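A minimal sketch of the `localStorage` helpers named above; the storage key is a hypothetical name, and the guarded write reflects the follow-up optimization described in the PR:

```typescript
const BACKEND_TYPE_KEY = 'llamacpp_backend_type' // hypothetical key name

function getStoredBackendType(): string | null {
  try {
    return localStorage.getItem(BACKEND_TYPE_KEY)
  } catch {
    return null // localStorage may be unavailable; fail soft
  }
}

function setStoredBackendType(backendType: string): void {
  try {
    // Only write when the value actually changed, avoiding redundant
    // localStorage operations (the follow-up optimization).
    if (localStorage.getItem(BACKEND_TYPE_KEY) !== backendType) {
      localStorage.setItem(BACKEND_TYPE_KEY, backendType)
    }
  } catch {
    // ignore storage errors
  }
}

function clearStoredBackendType(): void {
  try {
    localStorage.removeItem(BACKEND_TYPE_KEY)
  } catch {
    // ignore storage errors
  }
}
```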
hiento09
d51f904826
chore: update cua mac runner (#5888) 2025-07-24 16:25:02 +07:00
Louis
ce6af62c7d
fix: provider settings should be refreshed on page load (#5887) 2025-07-24 14:30:33 +07:00
Faisal Amir
5d00cf652a
🐛fix: get system info and system usage (#5884) 2025-07-24 12:39:10 +07:00
Faisal Amir
399671488c
fix: gpu detected from backend version (#5882)
* fix: gpu detected from backend version

* chore: remove readonly props from dynamic field
2025-07-24 10:45:48 +07:00
Louis
6599d91660
fix: bring back HF repo ID search in Hub (#5880)
* fix: bring back HF search input

* test: fix useModelSources tests for updated addSource signature
2025-07-24 09:46:13 +07:00
Nguyen Ngoc Minh
d8b6b10870
chore: revert app artifact name for macos linux and windows builds (#5878) 2025-07-23 21:27:56 +07:00
Akarshan Biswas
1d0bb53f2a
feat: add support for querying available backend devices (#5877)
* feat: add support for querying available backend devices

This change introduces a new `get_devices` method to the `llamacpp_extension` engine that allows the frontend to query and display a list of available devices (e.g., Vulkan, CUDA, SYCL) from the compiled `llama-server` binary.

* Added `DeviceList` interface to represent GPU/device metadata.
* Implemented `getDevices(): Promise<DeviceList[]>` method.

  * Splits `version/backend`, ensures backend is ready.
  * Invokes the new Tauri command `get_devices`.

* Introduced a new `get_devices` Tauri command.
* Parses `llama-server --list-devices` output to extract available devices with memory info.
* Introduced `DeviceInfo` struct (`id`, `name`, `mem`, `free`) and exposed it via serialization.
* Robust parsing logic using string processing (non-regex) to locate memory stats.
* Registered the new command in the `tauri::Builder` in `lib.rs`.

* Fixed logic to correctly parse multiple devices from the llama-server output.
* Handles common failure modes: binary not found, malformed memory info, etc.

This sets the foundation for device selection, memory-aware model loading, and improved diagnostics in Jan AI engine setup flows.

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-07-23 19:20:12 +05:30
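A sketch of the frontend half, with the `DeviceList` fields (`id`, `name`, `mem`, `free`) taken from the description above; the field types, memory units, setting format, and the `ensureBackendReady` helper are assumptions:

```typescript
import { invoke } from '@tauri-apps/api/core'

// Field names from the PR description; types and units are assumed.
interface DeviceList {
  id: string
  name: string
  mem: number  // total memory (assumed MiB)
  free: number // free memory (assumed MiB)
}

async function getDevices(versionBackend: string): Promise<DeviceList[]> {
  // The setting is described as a "version/backend" pair; the exact format
  // is an assumption here.
  const [version, backend] = versionBackend.split('/')
  await ensureBackendReady(version, backend) // hypothetical helper
  // Backend side runs `llama-server --list-devices` and parses the output.
  return invoke<DeviceList[]>('get_devices', { version, backend })
}

async function ensureBackendReady(version: string, backend: string): Promise<void> {
  /* download/verify the backend binary if missing (placeholder) */
}
```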
Louis
d6ad797769
fix: llama.cpp backend shows blank list sometime (#5876) 2025-07-23 20:04:38 +07:00