75 Commits

Author SHA1 Message Date
Dinh Long Nguyen
db52057030
fix ollama error (#6418) 2025-09-11 18:38:06 +07:00
Akarshan Biswas
7a174e621a
feat: Smart model management (#6390)
* feat: Smart model management

* **New UI option** – `memory_util` added to `settings.json` with a dropdown (high / medium / low) to let users control how aggressively the engine uses system memory.
* **Configuration updates** – `LlamacppConfig` now includes `memory_util`; the extension class stores it in a new `memoryMode` property and handles updates through `updateConfig`.
* **System memory handling**
  * Introduced `SystemMemory` interface and `getTotalSystemMemory()` to report combined VRAM + RAM.
  * Added helper methods `getKVCachePerToken`, `getLayerSize`, and a new `ModelPlan` type.
* **Smart model‑load planner** – `planModelLoad()` computes:
  * Number of GPU layers that can fit in usable VRAM.
  * Maximum context length based on KV‑cache size and the selected memory utilization mode (high/medium/low).
  * Whether KV‑cache must be off‑loaded to CPU and the overall loading mode (GPU, Hybrid, CPU, Unsupported).
  * Detailed logging of the planning decision.
* **Improved support check** – `isModelSupported()` now:
  * Uses the combined VRAM/RAM totals from `getTotalSystemMemory()`.
  * Applies an 80% usable‑memory heuristic.
  * Returns **GREEN** only when both weights and KV‑cache fit in VRAM, **YELLOW** when they fit only in total memory or require CPU off‑load, and **RED** when the model cannot fit at all (see the sketch after this list).
* **Cleanup** – Removed unused `GgufMetadata` import; updated imports and type definitions accordingly.
* **Documentation/comments** – Added explanatory JSDoc comments for the new methods and clarified the return semantics of `isModelSupported`.
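
A minimal TypeScript sketch of how the traffic-light check described above might look. It reuses the `SystemMemory` name from the description, but the field names, constant, and exact thresholds are illustrative assumptions rather than the extension's actual code:

```typescript
// Hypothetical shapes; the real extension's types may differ.
interface SystemMemory {
  vram: number // bytes of usable VRAM across GPUs
  ram: number  // bytes of system RAM
}

type SupportLevel = 'GREEN' | 'YELLOW' | 'RED'

// Assumed 80% usable-memory heuristic from the description above.
const USABLE_FRACTION = 0.8

function isModelSupported(
  weightsBytes: number,
  kvCacheBytes: number,
  mem: SystemMemory
): SupportLevel {
  const usableVram = mem.vram * USABLE_FRACTION
  const usableTotal = (mem.vram + mem.ram) * USABLE_FRACTION
  const required = weightsBytes + kvCacheBytes

  if (required <= usableVram) return 'GREEN'   // weights + KV-cache fit in VRAM
  if (required <= usableTotal) return 'YELLOW' // fits only with RAM / CPU off-load
  return 'RED'                                 // cannot fit at all
}
```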

* chore: migrate no_kv_offload from llamacpp setting to model setting

* chore: add UI auto optimize model setting

* feat: improve model loading planner with mmproj support and smarter memory budgeting

* Extend `ModelPlan` with optional `noOffloadMmproj` flag to indicate when a multimodal projector can stay in VRAM.
* Add `mmprojPath` parameter to `planModelLoad` and calculate its size, attempting to keep it on GPU when possible.
* Refactor system memory detection:
  * Use `used_memory` (actual free RAM) instead of total RAM for budgeting.
  * Introduced `usableRAM` placeholder for future use.
* Rewrite KV‑cache size calculation:
  * Properly handle GQA models via `attention.head_count_kv`.
  * Compute bytes per token as `nHeadKV * headDim * 2 * 2 * nLayer`.
* Replace the old 70% VRAM heuristic with a more flexible budget (see the sketch after this list):
  * Reserve a fixed VRAM amount and apply an overhead factor.
  * Derive usable system RAM from total memory minus VRAM.
* Implement a robust allocation algorithm:
  * Prioritize placing the mmproj in VRAM.
  * Search for the best balance of GPU layers and context length.
  * Fallback strategies for hybrid and pure‑CPU modes with detailed safety checks.
* Add extensive validation of model size, KV‑cache size, layer size, and memory mode.
* Improve logging throughout the planning process for easier debugging.
* Adjust final plan return shape to include the new `noOffloadMmproj` field.
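
A condensed sketch of the KV‑cache math and VRAM budgeting described in the lists above. The reserve and overhead constants are placeholders, not the values the planner actually uses:

```typescript
// Per-token KV-cache size for a GQA model, per the formula above:
// nHeadKV * headDim * 2 (K and V) * 2 (bytes per fp16 value) * nLayer.
function kvCacheBytesPerToken(nHeadKV: number, headDim: number, nLayer: number): number {
  return nHeadKV * headDim * 2 * 2 * nLayer
}

// Illustrative budget: reserve a fixed amount of VRAM, apply an overhead
// factor, and spend what remains after the weights on the KV cache.
const RESERVED_VRAM = 512 * 1024 * 1024 // assumed reserve, not the real constant
const OVERHEAD = 1.1                    // assumed overhead factor

function maxContextLength(
  vramBytes: number,
  weightsOnGpuBytes: number,
  perTokenBytes: number
): number {
  const usableVram = (vramBytes - RESERVED_VRAM) / OVERHEAD
  const freeForKvCache = usableVram - weightsOnGpuBytes
  return Math.max(0, Math.floor(freeForKvCache / perTokenBytes))
}
```

For example, a 32-layer model with 8 KV heads and a head dimension of 128 needs 8 × 128 × 2 × 2 × 32 = 131,072 bytes (128 KiB) of KV cache per token under this formula.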

* remove unused variable

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-09-11 09:48:03 +05:30
Dinh Long Nguyen
d490174544
feat: Web use jan model (#6374)
* call jan api

* fix lint

* ci: add jan server web

* chore: add Dockerfile

* clean up ui ux and support for reasoning fields, make app spa

* add logo

* chore: update tag for preview image

* chore: update k8s service name

* chore: update image tag and image name

* fixed test

---------

Co-authored-by: Minh141120 <minh.itptit@gmail.com>
Co-authored-by: Nguyen Ngoc Minh <91668012+Minh141120@users.noreply.github.com>
2025-09-05 16:18:30 +07:00
Dinh Long Nguyen
a30eb7f968
feat: Jan Web (reusing Jan Desktop UI) (#6298)
* add platform guards

* add service management

* fix types

* move to zustand for servicehub

* update App Updater

* update tauri missing move

* update app updater

* refactor: move PlatformFeatures to separate const file

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* change tauri fetch name

* update implementation

* update extension fetch

* make web version run properly

* disabled unused web settings

* fix all tests

* fix lint

* fix tests

* add mock for extension

* fix build

* update make and mise

* fix tsconfig for web-extensions

* fix loader type

* cleanup

* fix test

* update error handling + mcp should be working

* Update mcp init

* use separate is_web_app build property

* Remove fixed model catalog url

* fix additional tests

* fix download issue (event emitter not implemented correctly)

* Update Title html

* fix app logs

* update root tsx render timing

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-05 01:47:46 +07:00
Louis
e6587844d0
Merge branch 'dev' into current-date-instruction 2025-08-21 11:41:30 +07:00
Faisal Amir
5481ee9e35
Merge pull request #6134 from menloresearch/feat/attachment-ui
feat: attachment UI
2025-08-20 10:04:32 +07:00
Louis
91f05b8f32
feat: add tool call cancellation 2025-08-19 23:27:12 +07:00
Faisal Amir
f70449fd98 chore: remove pdf attachment 2025-08-19 19:51:02 +07:00
Faisal Amir
cef3e122ff chore: send attachment file when send message 2025-08-19 19:51:01 +07:00
Kamal Fariz Mahyuddin
b77c8932a6 feat: support inserting current date into assistant prompt 2025-08-17 00:24:00 -07:00
Dinh Long Nguyen
e1c8d98bf2
Backend Architecture Refactoring (#6094) (#6162)
* add llamacpp plugin

* Refactor llamacpp plugin

* add utils plugin

* remove utils folder

* add hardware implementation

* add utils folder + move utils function

* organize cargo files

* refactor utils src

* refactor util

* apply fmt

* fmt

* Update gguf + reformat

* add permission for gguf commands

* fix cargo test windows

* revert yarn lock

* remove cargo.lock for hardware plugin

* ignore cargo.lock file

* Fix hardware invoke + refactor hardware + refactor tests, constants

* use api wrapper in extension to invoke hardware call + api wrapper build integration

* add newline at EOF (per Akarshan)

* add vi mock for getSystemInfo
2025-08-15 08:59:01 +07:00
Louis
83bb765bcc
Apply suggestion from @ellipsis-dev[bot]
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-08-14 10:55:49 +07:00
Louis
526e532e2d
fix: normalize model id from source preparation 2025-08-14 10:50:50 +07:00
Faisal Amir
ace8214d4d chore: make utils sanitize modelId 2025-08-14 09:42:47 +07:00
Akarshan Biswas
1f1605bdf9
feat: Add support for overriding tensor buffer type (#6062)
* feat: Add support for overriding tensor buffer type

This commit introduces a new configuration option, `override_tensor_buffer_t`, which allows users to specify a regex for matching tensor names to override their buffer type. This is an advanced setting primarily useful for optimizing the performance of large models, particularly Mixture of Experts (MoE) models.

By overriding the tensor buffer type, users can keep critical parts of the model, like the attention layers, on the GPU while offloading other parts, such as the expert feed-forward networks, to the CPU. This can lead to significant speed improvements for massive models.
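
As an illustration only (the exact value format depends on the extension's settings schema and on the model's tensor names, so treat the pattern below as hypothetical), a configuration for a MoE model might pin the expert feed-forward tensors to CPU buffers while leaving everything else, including the attention layers, on the GPU:

```typescript
// Hypothetical settings fragment, not the extension's actual schema.
// The value follows llama.cpp's "<tensor-name regex>=<buffer type>" style:
// expert FFN tensors matching the pattern go to CPU buffers, while
// unmatched tensors (e.g. attention layers) stay on the GPU.
const llamacppConfig = {
  override_tensor_buffer_t: '\\.ffn_(up|down|gate)_exps\\.=CPU',
}
```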

Additionally, this change refines the error message to be more specific when a model fails to load. The previous message "Failed to load llama-server" has been updated to "Failed to load model" to be more accurate.

* chore: update FE to support override-tensor

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-08-07 10:31:34 +05:30
Louis
0b1b84dbf4
test: add tests for new change 2025-08-06 17:13:22 +07:00
Louis
fc815dc98e
fix: should not include reasoning text in the chat completion request 2025-08-06 17:07:32 +07:00
Faisal Amir
5d001dfd5a
feat: jinja template customize per model instead provider level (#6053) 2025-08-05 21:21:41 +07:00
Louis
25fa4901c2
Merge pull request #5997 from menloresearch/release/v0.6.6
Sync Release/v0.6.6 into dev
2025-07-31 10:25:09 +07:00
Faisal Amir
99cc2efb90
enhancement: blurry logo model provider (#5986) 2025-07-30 21:11:46 +07:00
Louis
76bcf33f80
fix: generate response button disappear on tool call (#5988)
* fix: generate a response button should appear when an incomplete tool call message is present

* fix: wording

* fix: do not send duplicate messages on regenerating

* fix: tests
2025-07-30 21:04:12 +07:00
Louis
160d158152
fix: search models result in hub should be sorted by weight (#5954) 2025-07-28 23:33:11 +07:00
Louis
e424938e02
Merge branch 'dev' into release/v0.6.6
# Conflicts:
#	.github/workflows/template-tauri-build-windows-x64.yml
#	Makefile
#	extensions/engine-management-extension/engines.mjs
2025-07-22 13:18:00 +07:00
Faisal Amir
25952f293c
enhancement: auto focus always allow action from tool approval dialog and add req parameters (#5836)
* enhancement: auto focus always allow action from tool approval dialog

* chore: error handling tools parameters

* chore: update test button focus cases
2025-07-22 12:17:53 +07:00
Louis
bc4fe52f8d
fix: llama.cpp integration model load and chat experience (#5823)
* fix: stop generating should not stop running models

* fix: ensure backend ready before loading model

* fix: backend setting should not block onLoad
2025-07-21 09:29:26 +07:00
Louis
c550f6cf0d
Merge pull request #5809 from menloresearch/refactor/simplify-proxy-settings
refactor: simplify proxy settings by removing unused SSL verification options
2025-07-19 16:34:37 +07:00
Victor Muštar
178d1546fe feat: integrate Hugging Face provider into web app and engine management 2025-07-18 19:10:30 +02:00
Louis
9872a6e82a test: add missing unit tests 2025-07-15 22:29:28 +07:00
Louis
864ad50880
test: add missing tests 2025-07-12 21:29:51 +07:00
Louis
6e0218c084
Merge branch 'release/v0.7.0' into feat/inference-llamacpp-extension
# Conflicts:
#	.devcontainer/buildAppImage.sh
#	.github/workflows/template-tauri-build-linux-x64.yml
#	Makefile
#	core/src/node/extension/index.test.ts
#	package.json
#	src-tauri/tauri.conf.json
#	web-app/package.json
2025-07-10 15:36:41 +07:00
Zhiqiang ZHOU
9ff3cbe63f
Merge remote-tracking branch 'upstream/dev' into feat/identify-jan-on-openrouter 2025-07-06 11:54:46 -07:00
Louis
0dbfde4c80
refactor: wait for extension load 2025-07-02 12:29:02 +07:00
Louis
2bdbce2e40
refactor: clean up unused apis 2025-07-02 12:29:02 +07:00
Louis
f70bb2705d
🔧test: util and lib unit tests 2025-07-02 12:28:25 +07:00
Louis
ae58c427a5
fix: tool call params 2025-07-02 12:28:25 +07:00
Louis
8bd4a3389f
refactor: frontend uses new engine extension
# Conflicts:
#	extensions/model-extension/resources/default.json
#	web-app/src/containers/dialogs/DeleteProvider.tsx
#	web-app/src/routes/hub.tsx
2025-07-02 12:28:24 +07:00
Akarshan Biswas
a7a2dcc8d8
refactor load/unload again; move types to core and refactor AIEngine abstract class 2025-07-02 12:27:09 +07:00
Sam Hoang Van
0890de1869
feat: improve local provider connectivity with CORS bypass (#5458)
* feat: improve local provider connectivity with CORS bypass

- Add @tauri-apps/plugin-http dependency
- Implement dual fetch strategy for local vs remote providers (see the sketch below)
- Auto-detect local providers (localhost, Ollama:11434, LM Studio:1234)
- Make API key optional for local providers
- Add comprehensive test coverage for provider fetching

refactor: simplify fetchModelsFromProvider by removing preflight check logic

* feat: extend config options to include custom fetch function for CORS handling

* feat: conditionally use Tauri's fetch for openai-compatible providers to handle CORS
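
A minimal sketch of the dual fetch strategy described above, assuming the `fetch` export of `@tauri-apps/plugin-http` and illustrative host/port lists; the extension's actual detection and header handling may differ:

```typescript
// Route requests to known local providers through Tauri's HTTP plugin
// (not subject to browser CORS); everything else uses the regular fetch.
import { fetch as tauriFetch } from '@tauri-apps/plugin-http'

// Assumed defaults from the description: localhost, Ollama (11434), LM Studio (1234).
const LOCAL_HOSTS = ['localhost', '127.0.0.1']
const LOCAL_PORTS = ['11434', '1234']

function isLocalProvider(baseUrl: string): boolean {
  try {
    const url = new URL(baseUrl)
    return LOCAL_HOSTS.includes(url.hostname) || LOCAL_PORTS.includes(url.port)
  } catch {
    return false
  }
}

// API key stays optional for local providers; remote providers still send one.
async function providerFetch(baseUrl: string, path: string, apiKey?: string) {
  const doFetch = isLocalProvider(baseUrl) ? tauriFetch : fetch
  const headers: Record<string, string> = {}
  if (apiKey) headers['Authorization'] = `Bearer ${apiKey}`
  return doFetch(`${baseUrl}${path}`, { headers })
}
```
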
2025-06-25 15:42:14 +07:00
Zhiqiang ZHOU
aa7775225a
chore: change how headers affect
Signed-off-by: Zhiqiang ZHOU <im@strrl.dev>
2025-06-20 12:39:18 -07:00
Louis
4181454799
🐛fix: typo in build type check (#5297) 2025-06-16 18:45:26 +07:00
Faisal Amir
9b1f206cc6
🐛fix: showing release notes for beta and prod (#5292)
* 🐛fix: showing release notes for beta and prod

* ♻️refactor: make an utils env

* ♻️refactor: hide MCP for production

* ♻️refactor: simplify the boolean expression fetch release note
2025-06-16 17:14:38 +07:00
Louis
1e17cc6ec7
enhancement: model run improvement (#5268)
* fix: mcp tool error handling

* fix: error message

* fix: trigger download from recommend model

* fix: can't scroll hub

* fix: show progress

* enhancement: prompt users to increase context size

* enhancement: rearrange action buttons for a better UX

* 🔧chore: clean up logics

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-06-14 16:32:15 +07:00
Louis
27c4918395
fix: default settings should leave empty (#5257)
* fix: default settings should leave empty

* fix: default settings

* fix: remove some more default settings

* fix: threads and cont

* fix: data

* fix: default setting

* fix: settings

* chore: bump cortex version

* chore: bump to cortex 1.0.14

* chore: clean up

* typo

* chore: fix dialog hang

* fix: default parameter

* chore: truncate edit model title

* chore: update default provider settings

* chore: fix typo

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-06-13 00:01:25 +07:00
Louis
50b83d7342
fix: could not add custom models (#5241)
* fix: could not add custom models

* Update web-app/src/lib/completion.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* chore: remove hard coded ID string

* fix: revert suggestion change

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-06-11 17:04:00 +07:00
Faisal Amir
808fdb02a7
chore: streaming tool output (#5237)
* enhancement: tool streaming output

* chore: update memo

* fix: streaming

* chore: update stream tools arguments

* chore: update condition

* fix: style

* fix: style

* chore: fix stop button

* chore: update color accent and hide arrow button

---------

Co-authored-by: Louis <louis@jan.ai>
2025-06-11 14:35:41 +07:00
Louis
51a321219d
chore: fix model settings are not applied accordingly on change (#5231)
* chore: fix model settings are not applied accordingly on change

* chore: handle failed tool call

* chore: stop inference and model on reject
2025-06-10 16:26:42 +07:00
Faisal Amir
a88da16edc
chore: hide some model capabilities (#5189)
* chore: hide some model capabilities

* chore: update model setting description

* chore: disable vision and embedding and add tooltip

---------

Co-authored-by: Louis <louis@jan.ai>
2025-06-04 12:11:16 +07:00
Faisal Amir
6861c46ac6
feat: setting toggle vulkan (#5126)
* feat: setting toggle vulkan

* feat: add vulkan toggle setting

* chore: default flash attention disable

* chore: fix vulkan retrieval

* fix: vulkan setting does not affect engine run

* Update web-app/src/routes/settings/hardware.tsx

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: Louis <louis@jan.ai>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-06-03 13:56:23 +07:00
Faisal Amir
057accfb96
enhancement: ux tool call permission dialog and state active (#5157)
* enhancement: mcp tool dialog approval

* enhancement: update mcp tool enable or disable

* chore: add toggle mcp global permission
2025-06-01 23:58:20 +07:00
Louis
a1111033d9
chore: allow users to configure model offload (#5134)
* chore: allow users to configure model offload

* chore: apply model.yaml configurations to default model settings

* chore: fallback default value
2025-05-29 13:29:32 +07:00