161 Commits

Author SHA1 Message Date
Faisal Amir
39df7b22b9 chore: rename key runOnStartup in useLocalApiServer hook 2025-08-20 22:37:45 +07:00
Faisal Amir
cfa68c5500 feat: run on startup setting for local api server 2025-08-20 21:56:53 +07:00
Louis
c018713676
feat: allow user to set max_attempt for MCP to avoid looping 2025-08-20 12:42:54 +07:00
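The commit title is a one-liner, but the idea behind it is a bounded tool-call loop. A minimal TypeScript sketch of that pattern; every identifier below is hypothetical, not Jan's actual code:

```typescript
// Hypothetical sketch: cap successive MCP tool-call rounds so a model
// that keeps requesting tools cannot loop forever.
async function runWithToolCalls(
  sendChat: () => Promise<{ toolCalls?: unknown[] }>,
  executeTools: (calls: unknown[]) => Promise<void>,
  maxAttempts: number = 5 // user-configurable ceiling
): Promise<void> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const response = await sendChat()
    // No tool calls requested: the conversation has converged, stop looping.
    if (!response.toolCalls?.length) return
    await executeTools(response.toolCalls)
  }
  // Ceiling reached: surface the condition instead of spinning silently.
  throw new Error(`MCP tool-call limit of ${maxAttempts} attempts reached`)
}
```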
Faisal Amir
5481ee9e35
Merge pull request #6134 from menloresearch/feat/attachment-ui
feat: attachment UI
2025-08-20 10:04:32 +07:00
Louis
91f05b8f32
feat: add tool call cancellation 2025-08-19 23:27:12 +07:00
Faisal Amir
cef3e122ff chore: send attachment file when sending a message 2025-08-19 19:51:01 +07:00
Dinh Long Nguyen
9ea9b7d87d
handle abort properly + finally clause to resolve (#6227) 2025-08-19 14:45:57 +07:00
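The commit title points at a known pitfall: an aborted streaming call must still settle its promise so the caller is released. A hedged sketch of the abort-plus-finally pattern the title describes; identifiers are illustrative, not Jan's actual code:

```typescript
// Guarantee the awaiting caller is released even when the stream is aborted.
async function streamCompletion(
  read: (signal: AbortSignal) => Promise<void>,
  signal: AbortSignal
): Promise<void> {
  try {
    await read(signal)
  } catch (err) {
    // Swallow the expected abort error; rethrow anything else.
    if (!(err instanceof DOMException && err.name === 'AbortError')) throw err
  } finally {
    // Cleanup runs on success, abort, and failure alike, so the
    // promise always settles and the UI never hangs on a cancelled call.
    releaseResources()
  }
}

function releaseResources(): void {
  /* close readers, clear pending-state flags, etc. */
}
```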
Dinh Long Nguyen
2d486d7b3a
feat: add support for reasoning fields (OpenRouter) (#6206)
* add support for reasoning fields (OpenRouter)

* reformat

* fix linter

* Update web-app/src/utils/reasoning.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-08-18 21:59:14 +07:00
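A hedged sketch of what handling OpenRouter-style reasoning fields in a streamed delta can look like. The field shape follows OpenRouter's documented `reasoning` delta; the actual `web-app/src/utils/reasoning.ts` may differ:

```typescript
interface ChatDelta {
  content?: string
  reasoning?: string // OpenRouter attaches reasoning tokens here
}

// Accumulate reasoning and answer text separately so the UI can render
// the model's chain of thought in its own collapsible block.
function splitDelta(
  delta: ChatDelta,
  acc: { reasoning: string; content: string }
): void {
  if (delta.reasoning) acc.reasoning += delta.reasoning
  if (delta.content) acc.content += delta.content
}
```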
Louis
362324cb87
Merge pull request #6188 from menloresearch/feat/mcp-enhancement
feat: mcp enhancement
2025-08-18 09:55:44 +07:00
Faisal Amir
b1b2ca1987
Merge pull request #6006 from menloresearch/feat/fav-model
🚀feat: allow user to mark model as favorite
2025-08-17 23:14:26 +07:00
Jasper Morgal
4ba56f1377 Fix Issue #6199
Fix Issue: Jan UI Bottlenecks Token Rendering Speed to ~300 TPS Despite Faster Cerebras API Output
2025-08-15 15:00:29 -07:00
Louis
c8d9592ab8
chore: mcp group server, action and import json 2025-08-15 11:37:21 +07:00
Louis
dcb46174ff
fix: test 2025-08-14 14:30:43 +07:00
Minh141120
aa8fb0464c Merge branch 'dev' into fix/feature-toggle-auto-updater 2025-08-14 13:42:27 +07:00
Minh141120
388959a1fe chore: gate check auto updater 2025-08-14 12:39:48 +07:00
Louis
16bfd6eafb
fix: full url search 2025-08-14 11:33:03 +07:00
Louis
526e532e2d
fix: normalize model id from source preparation 2025-08-14 10:50:50 +07:00
Faisal Amir
985a8f31ae
fix: migrations model setting (#6165) 2025-08-13 18:21:48 +07:00
Louis
8e5fac83fd
fix: deprecate addSource tests since the function was removed 2025-08-12 11:25:47 +07:00
Louis
736790473e
fix: duplicate model while searching 2025-08-12 11:17:00 +07:00
Louis
b924156a15
fix: bring back GPU detection 2025-08-11 13:52:20 +07:00
Louis
4f5d9b8222
Merge pull request #6089 from menloresearch/fix/clean-up-unused-apis
refactor: clean up unused hardware apis
2025-08-11 00:02:31 +07:00
Akarshan Biswas
0cfc745954
feat: Introduce structured error handling for llamacpp extension (#6087)
* feat: Introduce structured error handling for llamacpp extension

This commit introduces a structured error handling system for the `llamacpp` extension. Instead of returning simple string errors, we now use a custom `LlamacppError` struct with a specific `ErrorCode` enum. This allows the frontend to display more user-friendly and actionable error messages based on the code, rather than raw debug logs.

The changes include:
- A new `ErrorCode` enum to categorize errors (e.g., `OutOfMemory`, `ModelArchNotSupported`, `BinaryNotFound`).
- A `LlamacppError` struct to encapsulate the code, a user-facing message, and optional detailed logs.
- A static method `from_stderr` that intelligently parses llama.cpp's standard error output to identify and map common issues like Out of Memory errors to a specific error code.
- Refactored `ServerError` enum to wrap the new `LlamacppError` and provide a consistent serialization format for the Tauri frontend.
- Updated all relevant functions (`load_llama_model`, `get_devices`) to return the new structured error type, ensuring a more robust and predictable error flow.
- A reduced timeout for model loading from 300 to 180 seconds.

This work lays the groundwork for a more intuitive and helpful user experience, as the application can now provide clear guidance to users when a model fails to load.

* Update src-tauri/src/core/utils/extensions/inference_llamacpp_extension/server.rs

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Update src-tauri/src/core/utils/extensions/inference_llamacpp_extension/server.rs

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* chore: update FE handle error object from extension

* chore: fix property type

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-08-07 23:28:25 +05:30
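The commit body names the pieces precisely enough to sketch the frontend half of this change ("update FE handle error object from extension"). A hedged TypeScript sketch; the wire shape mirrors the commit description (code, user-facing message, optional details), but exact field names in Jan may differ:

```typescript
type ErrorCode = 'OutOfMemory' | 'ModelArchNotSupported' | 'BinaryNotFound'

interface LlamacppError {
  code: ErrorCode
  message: string   // user-facing summary produced by from_stderr on the Rust side
  details?: string  // optional raw stderr excerpt for a "show logs" affordance
}

// Map the structured error code to an actionable message instead of
// surfacing raw debug logs.
function toUserMessage(err: LlamacppError): string {
  switch (err.code) {
    case 'OutOfMemory':
      return 'Not enough memory to load this model. Try a smaller quantization or a shorter context size.'
    case 'ModelArchNotSupported':
      return 'This model architecture is not supported by the current llama.cpp backend.'
    case 'BinaryNotFound':
      return 'The llama.cpp server binary is missing. Reinstall the backend from Settings.'
    default:
      return err.message
  }
}
```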
Louis
ab44faeda3
test: fix test 2025-08-07 20:09:07 +07:00
Louis
c1668a4e4a
refactor: clean up unused hardware apis 2025-08-07 20:04:23 +07:00
Faisal Amir
f58332e9b5
Merge branch 'dev' into feat/fav-model 2025-08-07 18:11:44 +07:00
Akarshan Biswas
1f1605bdf9
feat: Add support for overriding tensor buffer type (#6062)
* feat: Add support for overriding tensor buffer type

This commit introduces a new configuration option, `override_tensor_buffer_t`, which allows users to specify a regex for matching tensor names to override their buffer type. This is an advanced setting primarily useful for optimizing the performance of large models, particularly Mixture of Experts (MoE) models.

By overriding the tensor buffer type, users can keep critical parts of the model, like the attention layers, on the GPU while offloading other parts, such as the expert feed-forward networks, to the CPU. This can lead to significant speed improvements for massive models.

Additionally, this change refines the error message to be more specific when a model fails to load. The previous message "Failed to load llama-server" has been updated to "Failed to load model" to be more accurate.

* chore: update FE to support override-tensor

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-08-07 10:31:34 +05:30
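llama.cpp exposes this capability through its `--override-tensor` flag. A hedged sketch of how a setting like `override_tensor_buffer_t` might be plumbed into the server arguments; the argument plumbing shown here is an assumption, though the regex is a common MoE pattern (keep attention on GPU, push expert FFN weights to CPU):

```typescript
function buildServerArgs(modelPath: string, overrideTensorBufferT?: string): string[] {
  const args = ['-m', modelPath]
  if (overrideTensorBufferT) {
    // e.g. "\.ffn_.*_exps\.=CPU" matches expert feed-forward tensors
    // and pins their buffers to CPU memory.
    args.push('--override-tensor', overrideTensorBufferT)
  }
  return args
}

// Example: offload MoE expert tensors to CPU while the rest stays on GPU.
buildServerArgs('/models/qwen3-moe.gguf', '\\.ffn_.*_exps\\.=CPU')
```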
Faisal Amir
5d001dfd5a
feat: customize Jinja template per model instead of at provider level (#6053) 2025-08-05 21:21:41 +07:00
Faisal Amir
e3ba37ba15 🚀feat: allow user to mark model as favorite 2025-08-05 14:26:12 +07:00
Louis
48004024ee
Merge pull request #6020 from cmppoon/fix-mcp-servers-edit-json
fix connected servers status not in sync when editing MCP JSON
2025-08-05 11:06:05 +07:00
Faisal Amir
641df474fd
fix: Generate A Response button does not show context size error dialog (#6029)
* fix: Generate A Response button does not show context size error dialog

* chore: remove as a child button params
2025-08-05 08:34:06 +07:00
Chaiyapruek Muangsiri
477651e5d5 fix connected servers status not in sync when editing MCP JSON 2025-08-05 08:08:59 +07:00
Faisal Amir
787c4ee073
fix: wrong desc setting cont_batching (#6034) 2025-08-02 21:48:43 +07:00
Faisal Amir
3acb61b5ed
fix: react state loop from hooks useMediaQuery (#6031)
* fix: react state loop from hooks useMediaQuery

* chore: update test cases hooks media query
2025-08-02 21:48:40 +07:00
Louis
9573329d06
Merge pull request #6004 from menloresearch/release/v0.6.6
Sync release/v0.6.6 into dev
2025-07-31 21:34:52 +07:00
Louis
4bcfa84d75
Merge pull request #6008 from menloresearch/hotfix/regression-issue-with-colon-in-model-name
hotfix: regression issue with colon in model name
2025-07-31 17:55:28 +07:00
Louis
25fa4901c2
Merge pull request #5997 from menloresearch/release/v0.6.6
Sync Release/v0.6.6 into dev
2025-07-31 10:25:09 +07:00
Louis
76bcf33f80
fix: generate response button disappears on tool call (#5988)
* fix: generate a response button should appear when an incomplete tool call message is present

* fix: wording

* fix: do not send duplicate messages on regenerating

* fix: tests
2025-07-30 21:04:12 +07:00
cmuangs
d2f99c36f5
fix thread sorting issue (#5976) 2025-07-30 18:15:29 +07:00
Faisal Amir
63cb4fbf3b
fix: assistant with last used and fix metadata (#5955)
* fix: assistant with last used and fix metadata

* chore: revert instruction and desc

* chore: fix current assistant state

* chore: update assistant message metadata

* chore: update test case
2025-07-29 09:50:07 +07:00
Faisal Amir
1c74bfd5ef
fix: update edge case experimental feature MCP (#5951)
* fix: update edge case experimental feature MCP

* Update web-app/src/routes/settings/mcp-servers.tsx

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-07-28 21:31:51 +07:00
Louis
fdaa3b1992
fix: openrouter unselect itself (#5943)
* fix: selected openrouter model does not work

* test: add tests to cover new change
2025-07-28 10:33:23 +07:00
Louis
1fc37a9349
fix: migrate app settings to the new version (#5936)
* fix: migrate app settings to the new version

* fix: edge cases

* fix: migrate HF import model on Windows

* fix hardware page broken after downgraded

* test: correct test

* fix: backward compatible hardware info
2025-07-27 21:13:05 +07:00
Faisal Amir
54d44ce741
fix: update default GPU toggle, and simplify state (#5937) 2025-07-27 14:36:08 +07:00
Faisal Amir
7dec980630
fix: persist model capabilities refresh app (#5918) 2025-07-25 20:27:51 +07:00
Louis
0c53ad0e16
fix: models hub should show latest data only (#5925)
* fix: models hub should show latest data only

* test: correct expected result
2025-07-25 17:34:14 +07:00
Akarshan Biswas
a1af70f7a9
feat: Enhance Llama.cpp backend management with persistence (#5886)
* feat: Enhance Llama.cpp backend management with persistence

This commit introduces significant improvements to how the Llama.cpp extension manages and updates its backend installations, focusing on user preference persistence and smarter auto-updates.

Key changes include:

* **Persistent Backend Type Preference:** The extension now stores the user's preferred backend type (e.g., `cuda`, `cpu`, `metal`) in `localStorage`. This ensures that even after updates or restarts, the system attempts to use the user's previously selected backend type, if available.
* **Intelligent Auto-Update:** The auto-update mechanism has been refined to prioritize updating to the **latest version of the *currently selected backend type*** rather than always defaulting to the "best available" backend (which might change). This respects user choice while keeping the chosen backend type up-to-date.
* **Improved Initial Installation/Configuration:** For fresh installations or cases where the `version_backend` setting is invalid, the system now intelligently determines and installs the best available backend, then persists its type.
* **Refined Old Backend Cleanup:** The `removeOldBackends` function has been renamed to `removeOldBackend` and modified to specifically clean up *older versions of the currently selected backend type*, preventing the accumulation of unnecessary files while preserving other backend types the user might switch to.
* **Robust Local Storage Handling:** New private methods (`getStoredBackendType`, `setStoredBackendType`, `clearStoredBackendType`) are introduced to safely interact with `localStorage`, including error handling for potential `localStorage` access issues.
* **Version Filtering Utility:** A new utility `findLatestVersionForBackend` helps in identifying the latest available version for a specific backend type from a list of supported backends.

These changes provide a more stable, user-friendly, and maintainable backend management experience for the Llama.cpp extension.

Fixes: #5883

* fix: cortex models migration should be done once

* feat: Optimize Llama.cpp backend preference storage and UI updates

This commit refines the Llama.cpp extension's backend management by:

* **Optimizing `localStorage` Writes:** The system now only writes the backend type preference to `localStorage` if the new value is different from the currently stored one. This reduces unnecessary `localStorage` operations.
* **Ensuring UI Consistency on Initial Setup:** When a fresh installation or an invalid backend configuration is detected, the UI settings are now explicitly updated to reflect the newly determined `effectiveBackendString`, ensuring the displayed setting matches the active configuration.

These changes improve performance by reducing redundant storage operations and enhance user experience by maintaining UI synchronization with the backend state.

* Revert "fix: provider settings should be refreshed on page load (#5887)"

This reverts commit ce6af62c7df4a7e7ea8c0896f307309d6bf38771.

* fix: add loader version backend llamacpp

* fix: wrong key name

* fix: model setting issues

* fix: virtual dom hub

* chore: cleanup

* chore: hide device offload setting

---------

Co-authored-by: Louis <louis@jan.ai>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-07-24 18:33:35 +07:00
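The commit names the storage helpers explicitly (`getStoredBackendType`, `setStoredBackendType`). A hedged sketch of those two, with the storage key and the error handling as assumptions:

```typescript
const BACKEND_TYPE_KEY = 'llamacpp_backend_type' // hypothetical key name

function getStoredBackendType(): string | null {
  try {
    return localStorage.getItem(BACKEND_TYPE_KEY)
  } catch {
    // localStorage can throw (private mode, disabled storage); fail soft.
    return null
  }
}

function setStoredBackendType(backendType: string): void {
  try {
    // Only write when the value actually changed, matching the follow-up
    // optimization described in this commit.
    if (getStoredBackendType() !== backendType) {
      localStorage.setItem(BACKEND_TYPE_KEY, backendType)
    }
  } catch {
    // Ignore storage failures; the app falls back to auto-detection.
  }
}
```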
Faisal Amir
399671488c
fix: gpu detected from backend version (#5882)
* fix: gpu detected from backend version

* chore: remove readonly props from dynamic field
2025-07-24 10:45:48 +07:00
Louis
6599d91660
fix: bring back HF repo ID search in Hub (#5880)
* fix: bring back HF search input

* test: fix useModelSources tests for updated addSource signature
2025-07-24 09:46:13 +07:00
Louis
d6ad797769
fix: llama.cpp backend shows blank list sometime (#5876) 2025-07-23 20:04:38 +07:00