424 Commits

Author SHA1 Message Date
Faisal Amir
d8e1fef3f0
🐛fix/onboarding-loop (#6054) 2025-08-07 18:11:22 +07:00
Akarshan Biswas
1f1605bdf9
feat: Add support for overriding tensor buffer type (#6062)
* feat: Add support for overriding tensor buffer type

This commit introduces a new configuration option, `override_tensor_buffer_t`, which allows users to specify a regex for matching tensor names to override their buffer type. This is an advanced setting primarily useful for optimizing the performance of large models, particularly Mixture of Experts (MoE) models.

By overriding the tensor buffer type, users can keep critical parts of the model, like the attention layers, on the GPU while offloading other parts, such as the expert feed-forward networks, to the CPU. This can lead to significant speed improvements for massive models.

Additionally, this change refines the error message to be more specific when a model fails to load. The previous message "Failed to load llama-server" has been updated to "Failed to load model" to be more accurate.

* chore: update FE to support override-tensor

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-08-07 10:31:34 +05:30
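
For readers unfamiliar with the setting described in #6062 above, here is a minimal TypeScript sketch of how a regex-based `override_tensor_buffer_t` value might be turned into server arguments. The `LlamacppSettings` shape and `buildOverrideTensorArgs` helper are hypothetical, and the `--override-tensor <pattern>=<buffer type>` syntax follows upstream llama.cpp rather than anything shown in this log.

```ts
// Hypothetical sketch: translating an `override_tensor_buffer_t` setting into
// extra llama-server arguments. The regex keeps attention tensors on the GPU
// while expert feed-forward tensors are pinned to CPU buffers.
interface LlamacppSettings {
  /** Regex matching tensor names, paired with a target buffer type. */
  override_tensor_buffer_t?: string
}

// Builds the extra CLI arguments. The flag name and "<pattern>=<buffer>"
// syntax mirror upstream llama.cpp's --override-tensor option; treat this
// mapping as an assumption about Jan's internals, not its actual code.
function buildOverrideTensorArgs(settings: LlamacppSettings): string[] {
  const pattern = settings.override_tensor_buffer_t?.trim()
  if (!pattern) return []
  return ['--override-tensor', pattern]
}

// Example for a MoE model: route expert FFN weights to CPU memory.
console.log(buildOverrideTensorArgs({ override_tensor_buffer_t: 'ffn_.*_exps.*=CPU' }))
// -> ['--override-tensor', 'ffn_.*_exps.*=CPU']
```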
Louis
0b1b84dbf4
test: add tests for new change 2025-08-06 17:13:22 +07:00
Louis
fc815dc98e
fix: should not include reasoning text in the chat completion request 2025-08-06 17:07:32 +07:00
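
A minimal sketch of the rule stated in fc815dc98e above: reasoning text stays local for display and is stripped before the chat completion request is built. The `ThreadMessage` shape, its `reasoning` field, and the `toCompletionMessages` helper are hypothetical, not Jan's actual types.

```ts
// Hypothetical message shape: reasoning text is kept for display but must not
// be echoed back to the model in follow-up chat completion requests.
interface ThreadMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
  reasoning?: string // e.g. the model's <think>...</think> output
}

// Drop reasoning before sending; only role/content reach the completion API.
function toCompletionMessages(messages: ThreadMessage[]) {
  return messages.map(({ role, content }) => ({ role, content }))
}
```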
Faisal Amir
ffdb6829e1
fix: gpt-oss thinking block (#6071) 2025-08-06 16:10:24 +07:00
Louis
c642076ec3
Merge pull request #6024 from menloresearch/fix/jan-hub-repo-data-and-deeplink
fix: Jan hub model detail and deep link
2025-08-06 08:46:07 +07:00
Faisal Amir
5d001dfd5a
feat: customize Jinja template per model instead of at provider level (#6053) 2025-08-05 21:21:41 +07:00
Faisal Amir
99567a1102
feat: recommended label for llamacpp setting (#6052)
* feat: recommended label for llamacpp

* chore: remove log
2025-08-05 13:55:33 +07:00
Louis
065a850a94 fix: test env 2025-08-05 13:44:40 +07:00
Louis
b8070f1871 chore: able to disable updater via env flag 2025-08-05 13:44:40 +07:00
Louis
90e46a2696 test: add tests 2025-08-05 13:44:40 +07:00
Louis
7f0c605651 fix: Jan hub repo detail and deep link 2025-08-05 13:44:40 +07:00
Louis
48004024ee
Merge pull request #6020 from cmppoon/fix-mcp-servers-edit-json
fix connected servers status not in sync when editing MCP JSON
2025-08-05 11:06:05 +07:00
Faisal Amir
641df474fd
fix: Generate A Response button does not show context size error dialog (#6029)
* fix: Generate A Response button does not show context size error dialog

* chore: remove as a child button params
2025-08-05 08:34:06 +07:00
Chaiyapruek Muangsiri
da0cf10f91 remove unnecessary try catch block 2025-08-05 08:08:59 +07:00
Chaiyapruek Muangsiri
477651e5d5 fix connected servers status not in sync when editing MCP JSON 2025-08-05 08:08:59 +07:00
Chaiyapruek Muangsiri
38c5911460 fix: show error toast on download error 2025-08-04 20:40:17 +08:00
Faisal Amir
787c4ee073
fix: wrong description for cont_batching setting (#6034) 2025-08-02 21:48:43 +07:00
Faisal Amir
3acb61b5ed
fix: react state loop from hooks useMediaQuery (#6031)
* fix: react state loop from hooks useMediaQuery

* chore: update test cases hooks media query
2025-08-02 21:48:40 +07:00
Louis
9c0d09c487
refactor: clean up cortex (#6003)
* refactor: clean up cortex

* chore: clean up

* refactor: clean up
2025-07-31 21:58:12 +07:00
Louis
9573329d06
Merge pull request #6004 from menloresearch/release/v0.6.6
Sync release/v0.6.6 into dev
2025-07-31 21:34:52 +07:00
Louis
4bcfa84d75
Merge pull request #6008 from menloresearch/hotfix/regression-issue-with-colon-in-model-name
hotfix: regression issue with colon in model name
2025-07-31 17:55:28 +07:00
Faisal Amir
59a17d4a2a
fix/remove-auto-refresh-model (#6002) 2025-07-31 14:07:31 +07:00
Louis
25fa4901c2
Merge pull request #5997 from menloresearch/release/v0.6.6
Sync Release/v0.6.6 into dev
2025-07-31 10:25:09 +07:00
cmuangs
e48b8c9792
fix assistant dropdown onClick not triggered consistently (#5991) 2025-07-31 09:05:56 +07:00
Faisal Amir
5e72d210d4
fix: missing text color in responsive left panel (#5989) 2025-07-30 22:23:57 +07:00
Faisal Amir
99cc2efb90
enhancement: blurry model provider logo (#5986) 2025-07-30 21:11:46 +07:00
Louis
76bcf33f80
fix: generate response button disappear on tool call (#5988)
* fix: generate a response button should appear when an incomplete tool call message is present

* fix: wording

* fix: do not send duplicate messages on regenerating

* fix: tests
2025-07-30 21:04:12 +07:00
Faisal Amir
f58d745585
fix: title tooltip MCP edit json (#5987)
* fix/title-tooltip-mcp-json

* fix: title tooltip delete mcp
2025-07-30 21:00:55 +07:00
Faisal Amir
1e7e572d4a
fix: download progress missing when left panel scrollable (#5984) 2025-07-30 18:36:42 +07:00
cmuangs
d2f99c36f5
fix thread sorting issue (#5976) 2025-07-30 18:15:29 +07:00
Faisal Amir
079759939a
fix: rename thread dialog shows previous thread (#5963) 2025-07-30 09:18:43 +07:00
Faisal Amir
63cb4fbf3b
fix: assistant with last used and fix metadata (#5955)
* fix: assistant with last used and fix metadata

* chore: revert instruction and desc

* chore: fix current assistant state

* chore: update assistant message metadata

* chore: update test case
2025-07-29 09:50:07 +07:00
Louis
160d158152
fix: search models result in hub should be sorted by weight (#5954) 2025-07-28 23:33:11 +07:00
Louis
812a8082b8
fix: factory reset fail with access denied error (#5952)
* fix: factory reset fail due to access denied error

* fix: unused import

* fix: tests
2025-07-28 23:20:45 +07:00
Faisal Amir
1c74bfd5ef
fix: update edge case experimental feature MCP (#5951)
* fix: update edge case experimental feature MCP

* Update web-app/src/routes/settings/mcp-servers.tsx

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-07-28 21:31:51 +07:00
Louis
fdaa3b1992
fix: openrouter unselect itself (#5943)
* fix: selected openrouter model does not work

* test: add tests to cover new change
2025-07-28 10:33:23 +07:00
Faisal Amir
08af8a49aa
fix: tool approval params scrollable (#5941) 2025-07-28 09:39:34 +07:00
Louis
1fc37a9349
fix: migrate app settings to the new version (#5936)
* fix: migrate app settings to the new version

* fix: edge cases

* fix: migrate HF import model on Windows

* fix: hardware page broken after downgrade

* test: correct test

* fix: backward compatible hardware info
2025-07-27 21:13:05 +07:00
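
Commit #5936 above migrates app settings to a new version while keeping older data usable. A generic, hypothetical sketch of such a versioned migration follows; the `schemaVersion` field and the step table are illustrative only, not Jan's real settings schema.

```ts
// Hypothetical versioned-settings migration: each step upgrades the stored
// settings by one schema version so data from older builds still loads.
interface StoredSettings {
  schemaVersion: number
  [key: string]: unknown
}

const migrations: Record<number, (s: StoredSettings) => StoredSettings> = {
  // 1 -> 2: placeholder step; the real keys are not listed in the commit log.
  1: (s) => ({ ...s, schemaVersion: 2 }),
}

function migrateSettings(settings: StoredSettings, target: number): StoredSettings {
  let current = settings
  while (current.schemaVersion < target) {
    const step = migrations[current.schemaVersion]
    if (!step) break // unknown gap: keep what we have rather than failing
    current = step(current)
  }
  return current
}
```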
Faisal Amir
54d44ce741
fix: update default GPU toggle, and simplify state (#5937) 2025-07-27 14:36:08 +07:00
Faisal Amir
b89d9d090f
fix: update ui version_backend, mem usage hardware (#5932)
* fix: update ui version_backend, mem usage hardware

* chore: hidden gpu from system monitor on mac

* chore: fix gpus vram
2025-07-26 18:36:18 +07:00
Akarshan Biswas
8ec4a36826
fix: Frontend updates when llama.cpp backend auto-downloads (#5926) 2025-07-26 08:48:29 +07:00
Faisal Amir
2e870ad4d0
fix: memory calculation on hardware and system monitor (#5922) 2025-07-26 08:47:59 +07:00
Faisal Amir
7dec980630
fix: persist model capabilities across app refresh (#5918) 2025-07-25 20:27:51 +07:00
Faisal Amir
6c15129ce8
fix: validate assistant name and improve clickable area (#5920) 2025-07-25 20:27:38 +07:00
Louis
0c53ad0e16
fix: models hub should show latest data only (#5925)
* fix: models hub should show latest data only

* test: correct expected result
2025-07-25 17:34:14 +07:00
Akarshan Biswas
a1af70f7a9
feat: Enhance Llama.cpp backend management with persistence (#5886)
* feat: Enhance Llama.cpp backend management with persistence

This commit introduces significant improvements to how the Llama.cpp extension manages and updates its backend installations, focusing on user preference persistence and smarter auto-updates.

Key changes include:

* **Persistent Backend Type Preference:** The extension now stores the user's preferred backend type (e.g., `cuda`, `cpu`, `metal`) in `localStorage`. This ensures that even after updates or restarts, the system attempts to use the user's previously selected backend type, if available.
* **Intelligent Auto-Update:** The auto-update mechanism has been refined to prioritize updating to the **latest version of the *currently selected backend type*** rather than always defaulting to the "best available" backend (which might change). This respects user choice while keeping the chosen backend type up-to-date.
* **Improved Initial Installation/Configuration:** For fresh installations or cases where the `version_backend` setting is invalid, the system now intelligently determines and installs the best available backend, then persists its type.
* **Refined Old Backend Cleanup:** The `removeOldBackends` function has been renamed to `removeOldBackend` and modified to specifically clean up *older versions of the currently selected backend type*, preventing the accumulation of unnecessary files while preserving other backend types the user might switch to.
* **Robust Local Storage Handling:** New private methods (`getStoredBackendType`, `setStoredBackendType`, `clearStoredBackendType`) are introduced to safely interact with `localStorage`, including error handling for potential `localStorage` access issues.
* **Version Filtering Utility:** A new utility `findLatestVersionForBackend` helps in identifying the latest available version for a specific backend type from a list of supported backends.

These changes provide a more stable, user-friendly, and maintainable backend management experience for the Llama.cpp extension.

Fixes: #5883

* fix: cortex models migration should be done once

* feat: Optimize Llama.cpp backend preference storage and UI updates

This commit refines the Llama.cpp extension's backend management by:

* **Optimizing `localStorage` Writes:** The system now only writes the backend type preference to `localStorage` if the new value is different from the currently stored one. This reduces unnecessary `localStorage` operations.
* **Ensuring UI Consistency on Initial Setup:** When a fresh installation or an invalid backend configuration is detected, the UI settings are now explicitly updated to reflect the newly determined `effectiveBackendString`, ensuring the displayed setting matches the active configuration.

These changes improve performance by reducing redundant storage operations and enhance user experience by maintaining UI synchronization with the backend state.

* Revert "fix: provider settings should be refreshed on page load (#5887)"

This reverts commit ce6af62c7df4a7e7ea8c0896f307309d6bf38771.

* fix: add loader version backend llamacpp

* fix: wrong key name

* fix: model setting issues

* fix: virtual dom hub

* chore: cleanup

* chore: hide device offload setting

---------

Co-authored-by: Louis <louis@jan.ai>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-07-24 18:33:35 +07:00
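
The #5886 entry above names several helpers (`getStoredBackendType`, `setStoredBackendType`, `clearStoredBackendType`, `findLatestVersionForBackend`). Below is a minimal TypeScript sketch of how they could fit together; the storage key, the `SupportedBackend` shape, and the version-comparison logic are assumptions, not the extension's actual implementation.

```ts
const STORAGE_KEY = 'llamacpp_backend_type' // hypothetical key name

// Safely read the persisted backend type (e.g. 'cuda', 'cpu', 'metal').
function getStoredBackendType(): string | null {
  try {
    return localStorage.getItem(STORAGE_KEY)
  } catch {
    return null // localStorage may be unavailable
  }
}

// Persist only when the value actually changes, to avoid redundant writes.
function setStoredBackendType(backendType: string): void {
  try {
    if (localStorage.getItem(STORAGE_KEY) !== backendType) {
      localStorage.setItem(STORAGE_KEY, backendType)
    }
  } catch {
    // Ignore storage failures; the app can fall back to auto-detection.
  }
}

// Remove the preference, e.g. when the stored backend is no longer available.
function clearStoredBackendType(): void {
  try {
    localStorage.removeItem(STORAGE_KEY)
  } catch {
    /* ignore */
  }
}

interface SupportedBackend {
  version: string // e.g. 'b4567'
  backend: string // e.g. 'cuda', 'vulkan', 'cpu'
}

// Latest available version for a given backend type, or undefined if none.
function findLatestVersionForBackend(
  backends: SupportedBackend[],
  backendType: string
): SupportedBackend | undefined {
  return backends
    .filter((b) => b.backend === backendType)
    .sort((a, b) => b.version.localeCompare(a.version, undefined, { numeric: true }))[0]
}
```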
Louis
ce6af62c7d
fix: provider settings should be refreshed on page load (#5887) 2025-07-24 14:30:33 +07:00
Faisal Amir
5d00cf652a
🐛fix: get system info and system usage (#5884) 2025-07-24 12:39:10 +07:00
Faisal Amir
399671488c
fix: gpu detected from backend version (#5882)
* fix: gpu detected from backend version

* chore: remove readonly props from dynamic field
2025-07-24 10:45:48 +07:00