5454 Commits

Author SHA1 Message Date
Louis
3afdd0fa1d
fix: tmp download file should be removed on cancel (#5849) 2025-07-23 12:52:34 +07:00
Faisal Amir
43b7eb6e18
🐛fix: remove sampling parameters from llamacpp extension (#5871) 2025-07-23 12:13:42 +07:00
Faisal Amir
fd26270e78
🐛fix: update Vulkan active syntax (#5869) 2025-07-23 11:45:54 +07:00
Louis
3e30c61fb0
fix: app should refresh local provider models list on launch (#5868) 2025-07-23 08:36:09 +07:00
Louis
fe95031c6e
feat: migrate cortex models to llamacpp extension (#5838)
* feat: migrate cortex models to new llama.cpp extension

* test: add tests

* clean: remove duplicated import
2025-07-22 23:35:08 +07:00
Nguyen Ngoc Minh
5cbd79b525
fix: charmap encoding (#5865)
* fix: handle charmap encoding error

* enhancement: prompt template for new user flow
2025-07-22 23:33:12 +07:00
Louis
d347058d6b
fix: HuggingFace provider should be non-deletable (#5856)
* fix: HuggingFace provider should be non-deletable

* refactor: rename const folder

* test: correct test case
2025-07-22 23:32:37 +07:00
Louis
8e9cd2566b
fix: gemini tool call support (#5848) 2025-07-22 23:25:43 +07:00
Akarshan Biswas
1eaec5e4f6
Fix: engine unable to find DLLs when running on Windows (#5863)
* Fix: Windows llamacpp not picking up DLLs from lib repo

* Fix lib path on Windows

* Add debug info about lib_path

* Normalize lib_path for Windows

* fix Windows lib path normalization

* fix: missing CUDA DLL files on Windows

* throw backend setup errors to UI

* Fix format

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* feat: add logger to llamacpp-extension

* fix: platform check

---------

Co-authored-by: Louis <louis@jan.ai>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-07-22 20:05:24 +05:30
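
The lib_path fixes above come down to handing llama.cpp a directory path in the form Windows expects. A minimal TypeScript sketch of that kind of normalization, with a hypothetical helper name (the actual extension logic may differ):

```typescript
// Hypothetical helper (not the actual extension code): normalize the
// llama.cpp lib_path so Windows DLL lookup uses native separators.
function normalizeLibPath(libPath: string): string {
  if (process.platform !== 'win32') return libPath
  // Windows DLL search paths are more reliable with backslashes.
  return libPath.replace(/\//g, '\\')
}

// Example: normalizeLibPath('C:/Jan/extensions/llamacpp/lib')
// → 'C:\\Jan\\extensions\\llamacpp\\lib' on Windows.
```
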
Nguyen Ngoc Minh
7d3811f879
chore: update build appimage script (#5866)
* chore: update new appimage kit url

* chore: add error handling for appimagetool download
2025-07-22 21:02:25 +07:00
Faisal Amir
5553096bc4
enhancement: trigger model error dialog from provider screen and improve copy button (#5858) 2025-07-22 20:36:01 +07:00
Faisal Amir
1d443e1f7d
fix: support load model configurations (#5843)
* fix: support load model configurations

* chore: remove log

* chore: add sampling params from send completion

* chore: remove comment

* chore: remove comment on predefined file

* chore: update test model service
2025-07-22 19:52:12 +07:00
Faisal Amir
7b3b6cc8be
🐛fix: delete all should not include fav thread (#5864) 2025-07-22 19:51:59 +07:00
hiento09
1dd5b810c2
Chore: enrich autoqa log (#5862)
* chore: add app log upload to reportportal
2025-07-22 16:13:00 +07:00
Akarshan Biswas
f59739d2b0
refactor: Improve Llama.cpp backend management and auto-update (#5845)
* refactor: Improve Llama.cpp backend management and auto-update

This commit refactors the Llama.cpp extension to enhance backend management and streamline the auto-update process.

Key changes include:

Refactored configureBackends: The logic for determining the best available backend and populating settings is now more modular, preventing duplicate executions.

Dedicated Auto-update Handling: Introduced a handleAutoUpdate method to encapsulate the auto-update logic, including downloading the latest available backend and updating the internal configuration and settings.

Robust Old Backend Cleanup: The removeOldBackends method is improved to ensure only the currently used backend version and type are kept, effectively managing disk space. A delay is added for Windows to prevent file conflicts during cleanup.

Final Installation Check: An ensureFinalBackendInstallation method is added to guarantee the selected backend is installed, acting as a final safeguard after auto-update or if auto-update is disabled.

Minor Fixes:

Added console.log for save_path during decompression for better debugging.

Ensured the output directory exists before decompression in the Rust backend.

Removed extraneous console log for session info.

Updated Cargo.toml and tauri.conf.json versions.

These changes lead to a more reliable and efficient Llama.cpp backend experience within the application, particularly for users with auto-update enabled.

* fix isBackendInstalled parameters

* Address bot's comments

* Address bot comments about using a try/finally block
2025-07-22 14:35:34 +05:30
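
A condensed TypeScript sketch of the flow the commit above describes; handleAutoUpdate, removeOldBackends, and ensureFinalBackendInstallation come from the commit message, while every other name and the concrete steps are assumptions:

```typescript
// Illustrative sketch only, not the actual extension code.
interface BackendInfo { version: string; type: string }

class BackendManager {
  constructor(private current: BackendInfo) {}

  async handleAutoUpdate(latest: BackendInfo): Promise<void> {
    if (latest.version !== this.current.version) {
      await this.download(latest)
      this.current = latest // update internal configuration and settings
    }
    // Keep only the backend version/type currently in use.
    await this.removeOldBackends()
    // Final safeguard: ensure the selected backend is actually installed,
    // whether or not auto-update ran.
    await this.ensureFinalBackendInstallation()
  }

  private async removeOldBackends(): Promise<void> {
    if (process.platform === 'win32') {
      // The commit adds a delay on Windows to avoid file conflicts
      // while the old backend's files are still being released.
      await new Promise((resolve) => setTimeout(resolve, 1000))
    }
    // ...delete every installed backend except `this.current`...
  }

  private async ensureFinalBackendInstallation(): Promise<void> {
    // ...(re)download `this.current` if its files are missing...
  }

  private async download(backend: BackendInfo): Promise<void> {
    // ...fetch the archive and decompress it, ensuring the output
    // directory exists before decompression, as the commit notes...
  }
}
```
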
Nguyen Ngoc Minh
e3813ab1af
fix: autoqa prompt template (#5854) 2025-07-22 13:34:43 +07:00
Louis
e424938e02
Merge branch 'dev' into release/v0.6.6
# Conflicts:
#	.github/workflows/template-tauri-build-windows-x64.yml
#	Makefile
#	extensions/engine-management-extension/engines.mjs
2025-07-22 13:18:00 +07:00
Nguyen Ngoc Minh
fceecffed7
feat: add vcruntime for windows installer (#5852) 2025-07-22 12:38:00 +07:00
Faisal Amir
25952f293c
enhancement: auto-focus the Always Allow action in the tool approval dialog and add request parameters (#5836)
* enhancement: auto-focus the Always Allow action in the tool approval dialog

* chore: error handling tools parameters

* chore: update test button focus cases
2025-07-22 12:17:53 +07:00
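
A minimal sketch of the auto-focus behavior described above, assuming a React component (the component shape and prop names are hypothetical):

```tsx
// Hypothetical sketch, not the actual dialog component: focus the
// "Always allow" button when the tool approval dialog mounts, so a
// keyboard user can confirm immediately.
import { useEffect, useRef } from 'react'

export function ToolApprovalDialog({ onAlwaysAllow }: { onAlwaysAllow: () => void }) {
  const allowRef = useRef<HTMLButtonElement>(null)

  useEffect(() => {
    allowRef.current?.focus() // auto-focus on mount
  }, [])

  return (
    <button ref={allowRef} onClick={onAlwaysAllow}>
      Always allow
    </button>
  )
}
```
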
Faisal Amir
78df0a20ec
enhancement: better error page component (#5834)
* enhancement: better error page component

* chore: typo and useless space
2025-07-22 12:17:44 +07:00
Nguyen Ngoc Minh
af892428a5
chore: sync make build with dev (#5847)
* chore: sync up make build with dev

* ci: update macOS self-hosted runner
2025-07-22 11:12:14 +07:00
Nguyen Ngoc Minh
e82e5e1da9
refactor: standardize build process and remove build-tauri target (#5846) 2025-07-22 00:01:48 +07:00
Nguyen Ngoc Minh
9ea081576b
fix: custom tauri nsis template CheckIfAppIsRunning macro (#5840)
* fix: update CheckIfAppIsRunning macro to include args
2025-07-21 20:54:06 +07:00
Nguyen Ngoc Minh
275cab7538
Merge pull request #5839 from menloresearch/fix/appimage-url-with-latest-tauri-cli
fix: update @tauri-apps/cli to the newest version to fix AppImage download
2025-07-21 03:41:27 -07:00
Minh141120
db962b2ba6 fix: update @tauri-apps/cli to the newest version to fix AppImage download issue 2025-07-21 16:32:27 +07:00
Akarshan Biswas
08de0fa42d
fix: prevent terminal window from opening on model load on Windows (#5837)
On Windows, spawning the llamacpp server was causing an unwanted terminal window
to appear. This is now fixed by combining `CREATE_NO_WINDOW` with
`CREATE_NEW_PROCESS_GROUP` using `.creation_flags(...)`, ensuring that the
process runs in the background without a console window.

This change only applies to 64-bit Windows builds.
2025-07-21 13:24:31 +05:30
Louis
05b9d4e9fd
feat: add claude-4 (#5829)
* feat: add claude-4

* fix: sorting order
2025-07-21 12:30:56 +07:00
Akarshan Biswas
81d6ed3785
feat: support per-model overrides in llama.cpp load() (#5820)
* feat: support per-model overrides in llama.cpp load()

Extend the `load()` method in the llama.cpp extension to accept optional
`overrideSettings`, allowing fine-grained per-model configuration.

This enables users to override provider-level settings such as `ctx_size`,
`chat_template`, `n_gpu_layers`, etc., when loading a specific model.

Fixes: #5818 (Feature Request - Jan v0.6.6)

Use cases enabled:
- Different context sizes per model (e.g., 4K vs 32K)
- Model-specific chat templates (ChatML, Alpaca, etc.)
- Performance tuning (threads, GPU layers)
- Better memory management per deployment

Maintains full backward compatibility with existing provider config.

* swap overrideSettings and isEmbedding argument
2025-07-21 08:59:50 +05:30
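
A minimal sketch of what such a per-model override signature could look like; only the setting names are taken from the commit message, while the LoadSettings shape, defaults, and class structure are assumptions:

```typescript
// Sketch only: the real types in the llama.cpp extension may differ.
interface LoadSettings {
  ctx_size?: number
  chat_template?: string
  n_gpu_layers?: number
  threads?: number
}

class LlamacppProvider {
  // Provider-level defaults configured in Settings (values illustrative).
  private providerSettings: LoadSettings = { ctx_size: 4096, n_gpu_layers: 100 }

  async load(
    modelPath: string,
    overrideSettings?: Partial<LoadSettings>, // optional per-model overrides
    isEmbedding = false
  ): Promise<void> {
    // Per-model values win over provider-level settings; omitting the
    // argument preserves the old behavior (backward compatible).
    const effective = { ...this.providerSettings, ...overrideSettings }
    console.log(`loading ${modelPath} (embedding=${isEmbedding})`, effective)
    // ...spawn the llama.cpp server for `modelPath` using `effective`...
  }
}
```
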
Louis
bc4fe52f8d
fix: llama.cpp integration model load and chat experience (#5823)
* fix: stop generating should not stop running models

* fix: ensure backend ready before loading model

* fix: backend setting should not block onLoad
2025-07-21 09:29:26 +07:00
Louis
5241557a74
test: deprecate webdriver test in favor of auto qa using CUA (#5825) 2025-07-21 00:11:16 +07:00
Louis
c03f6fcc3a
Revert "chore(deps): update rand requirement from 0.8 to 0.9 in /src-tauri (#…" (#5824)
This reverts commit 722a6881fdca47181c2184a0b62a26ec25d014d0.
2025-07-20 23:55:45 +07:00
Louis
5696e951f2
fix: Legacy threads show on top of new threads (#5696) (#5810)
* fix: #5696 - legacy threads show on top of new threads

* fix: tests
2025-07-20 16:58:22 +07:00
Louis
19cb1c96e0
fix: llama.cpp backend download on windows (#5813)
* fix: llama.cpp backend download on windows

* test: add missing cases

* clean: linter

* fix: build
2025-07-20 16:58:09 +07:00
Louis
05a5995865
fix: dependabot should just update security patch (#5814) 2025-07-20 16:55:40 +07:00
dependabot[bot]
722a6881fd
chore(deps): update rand requirement from 0.8 to 0.9 in /src-tauri (#5399)
Updates the requirements on [rand](https://github.com/rust-random/rand) to permit the latest version.
- [Release notes](https://github.com/rust-random/rand/releases)
- [Changelog](https://github.com/rust-random/rand/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-random/rand/compare/0.8.0...rand_core-0.9.1)

---
updated-dependencies:
- dependency-name: rand
  dependency-version: 0.9.1
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-20 16:11:43 +07:00
Trang Le
04f8bf0903
Update mcp.mdx (#5771)
The original instructions don't tell users to enable experimental features in Jan first; without that step, the MCP Servers tab won't appear.
2025-07-20 15:20:53 +07:00
dependabot[bot]
4d0b777f9f
chore(deps): bump @radix-ui/react-hover-card from 1.1.11 to 1.1.14 (#5603)
---
updated-dependencies:
- dependency-name: "@radix-ui/react-hover-card"
  dependency-version: 1.1.14
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-20 15:20:18 +07:00
hiento09
b7b3eb9d19
fix: autoqa requirements.txt (#5812) 2025-07-19 22:47:34 +07:00
Louis
c550f6cf0d
Merge pull request #5809 from menloresearch/refactor/simplify-proxy-settings
refactor: simplify proxy settings by removing unused SSL verification options
2025-07-19 16:34:37 +07:00
Louis
5fdae1259b
Merge pull request #5808 from gary149/feat/huggingface-integration
feat: Add Hugging Face as a provider
2025-07-19 14:33:44 +07:00
Victor Muštar
18dfe2b883 chore: update model descriptions in huggingface.json to match web app mock data 2025-07-18 19:53:12 +02:00
Victor Muštar
6ce26b7b6d chore: update model descriptions for clarity and accuracy 2025-07-18 19:31:07 +02:00
Victor Muštar
178d1546fe feat: integrate Hugging Face provider into web app and engine management 2025-07-18 19:10:30 +02:00
Victor Muštar
54c1bf6950 feat: add Hugging Face engine configuration and model definitions 2025-07-18 19:10:14 +02:00
Victor Muštar
7927f4ca2b feat: add Hugging Face logo asset 2025-07-18 19:09:59 +02:00
Akarshan Biswas
8f1a36c8e3
fix: Improve stream error handling and parsing (#5807)
* fix: Enhance stream error handling and parsing

This commit improves the robustness of stream processing in the llamacpp-extension.

- Adds explicit handling for 'error:' prefixed lines in the stream, parsing the contained JSON error and throwing an appropriate JavaScript Error.
- Centralizes JSON parsing of 'data:' and 'error:' lines, ensuring consistent error propagation by re-throwing parsing exceptions.
- Ensures the async iterator terminates correctly upon encountering stream errors or malformed JSON.

* Address bot comments and cleanup
2025-07-18 18:36:33 +05:30
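
A hedged sketch of the stream handling the commit describes, assuming newline-delimited `data:`/`error:` lines; the payload shapes and the `[DONE]` sentinel are assumptions:

```typescript
// Illustrative sketch, not the extension's actual parser.
async function* parseStream(lines: AsyncIterable<string>) {
  for await (const raw of lines) {
    const line = raw.trim()
    if (line.length === 0) continue
    if (line.startsWith('error:')) {
      // Parse the JSON error payload and surface it as a JS Error,
      // terminating the iterator.
      const payload = JSON.parse(line.slice('error:'.length).trim())
      throw new Error(payload.message ?? JSON.stringify(payload))
    }
    if (line.startsWith('data:')) {
      const body = line.slice('data:'.length).trim()
      if (body === '[DONE]') return
      // Malformed JSON throws here, so parsing failures propagate
      // to the caller instead of being silently swallowed.
      yield JSON.parse(body)
    }
  }
}
```
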
Akarshan
59ad2eb784
Merge branch 'dev' into release/v0.6.6 2025-07-18 18:29:20 +05:30
hiento09
4d44f4324d
feat: add autoqa (#5779)
* feat: add autoqa

* chore: add auto start computer_server

* chore: add ci autoqa windows

* chore: add ci support for both windows and linux

* chore: add ci support for macos

* chore: refactor auto qa

* chore: refactor autoqa workflow

* chore: fix upload turn
2025-07-18 15:22:31 +07:00
Louis
a56e58f69b
Merge pull request #5782 from ethanova/fix/no-more-code-line-number-selection
set line number userSelect to none so that code can be copied without line numbers
2025-07-18 10:08:46 +07:00
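
The fix above is a one-property CSS change. A hypothetical sketch of a line-number gutter with userSelect disabled, so selections copy only the code:

```tsx
// Hypothetical sketch (not the actual component): a gutter rendered
// with userSelect: 'none' is excluded from text selection, so copying
// a highlighted code block yields only the code itself.
export const LineNumber = ({ n }: { n: number }) => (
  <span style={{ userSelect: 'none' }} aria-hidden="true">
    {n}
  </span>
)
```
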
Louis
8d84c3b884
feat: add model load error handling to improve UX (#5802)
* feat: model load error handling

* chore: clean up

* test: add tests

* fix: provider name
2025-07-18 08:25:54 +05:30