207 Commits

Author SHA1 Message Date
Akarshan Biswas
885da29f28
feat: add getTokensCount method to compute token usage (#6467)
* feat: add getTokensCount method to compute token usage

Implemented a new async `getTokensCount` function in the LLaMA.cpp extension.
The method validates the model session, checks process health, applies the request template, and tokenizes the resulting prompt to return the token count. It includes detailed error handling for crashed models and API failures, enabling callers to assess token usage before sending completions.
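
A minimal sketch of that flow; the helper names (`findSessionByModel`, `isProcessAlive`, `applyTemplate`, `tokenize`) are illustrative assumptions, not the extension's actual API:

```typescript
// Hypothetical helpers standing in for the extension's internals.
interface ChatMessage { role: string; content: string }
interface Session { pid: number; modelId: string }

declare function findSessionByModel(model: string): Session | undefined
declare function isProcessAlive(pid: number): Promise<boolean>
declare function applyTemplate(s: Session, msgs: ChatMessage[]): Promise<string>
declare function tokenize(s: Session, prompt: string): Promise<number[]>

async function getTokensCount(model: string, messages: ChatMessage[]): Promise<number> {
  // Validate that a session exists for the requested model.
  const session = findSessionByModel(model)
  if (!session) throw new Error(`No active session for model: ${model}`)

  // Surface a crashed model process instead of returning a bogus count.
  if (!(await isProcessAlive(session.pid))) {
    throw new Error('Model process is not running (possible crash)')
  }

  // Render the chat template, then tokenize the resulting prompt.
  const prompt = await applyTemplate(session, messages)
  const tokens = await tokenize(session, prompt)
  return tokens.length
}
```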

* Fix: typos

* chore: update ui token usage

* chore: remove unused code

* feat: add image token handling for multimodal LlamaCPP models

Implemented support for counting image tokens when using vision-enabled models:
- Extended `SessionInfo` with an optional `mmprojPath` to store the multimodal projector file path.
- Propagated `mmproj_path` from the Tauri plugin into the session info.
- Added import of `chatCompletionRequestMessage` and enhanced token calculation logic in the LlamaCPP extension:
- Detects image content in messages.
- Reads GGUF metadata from `mmprojPath` to compute accurate image token counts.
- Provides a fallback estimation if metadata reading fails.
- Returns the sum of text and image tokens.
- Introduced helper methods `calculateImageTokens` and `estimateImageTokensFallback` (see the sketch after this list).
- Minor clean‑ups such as comment capitalization and debug logging.
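
A rough sketch of that dispatch; the types and the two declared helpers mirror the names listed above but are assumptions, not the real implementation:

```typescript
// Types and helpers mirror the names above but are assumptions.
interface SessionInfo {
  pid: number
  modelPath: string
  mmprojPath?: string // multimodal projector file, set for vision models
}

type ContentPart =
  | { type: 'text'; text: string }
  | { type: 'image_url'; image_url: { url: string } }

interface RequestMessage { role: string; content: string | ContentPart[] }

declare function calculateImageTokens(mmprojPath: string): Promise<number>
declare function estimateImageTokensFallback(): number

// Detect image parts across the request's messages.
function countImages(messages: RequestMessage[]): number {
  return messages.reduce(
    (n, m) =>
      Array.isArray(m.content)
        ? n + m.content.filter((p) => p.type === 'image_url').length
        : n,
    0
  )
}

// Total = text tokens + per-image tokens, with a rough fallback when
// the mmproj GGUF metadata cannot be read.
async function totalTokens(
  session: SessionInfo,
  textTokens: number,
  messages: RequestMessage[]
): Promise<number> {
  const nImages = countImages(messages)
  if (nImages === 0 || !session.mmprojPath) return textTokens
  try {
    return textTokens + nImages * (await calculateImageTokens(session.mmprojPath))
  } catch {
    return textTokens + nImages * estimateImageTokensFallback()
  }
}
```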

* chore: update FE send params so message content includes type `image_url`

* fix mmproj path from session info and token count calculation

* fix: Correct image token estimation calculation in llamacpp extension

This commit addresses an inaccurate token count for images in the llama.cpp extension.

The previous logic incorrectly calculated the token count based on image patch size and dimensions. This has been replaced with a more precise method that uses the `clip.vision.projection_dim` value from the model metadata.

Additionally, unnecessary debug logging was removed, and a new log was added to show the mmproj metadata for improved visibility.
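
A sketch of the corrected estimation under the stated assumption that the per-image count is read straight from the mmproj metadata; `readGgufMetadata` is a hypothetical helper:

```typescript
// readGgufMetadata is a hypothetical helper; the key name comes from the
// commit message above.
declare function readGgufMetadata(path: string): Promise<Record<string, unknown>>

async function calculateImageTokens(mmprojPath: string): Promise<number> {
  const meta = await readGgufMetadata(mmprojPath)
  const dim = meta['clip.vision.projection_dim']
  if (typeof dim !== 'number' || dim <= 0) {
    throw new Error('clip.vision.projection_dim missing from mmproj metadata')
  }
  // Per the message, this metadata value drives the per-image token count,
  // replacing the earlier patch-size/dimension arithmetic.
  return dim
}
```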

* fix per-image calculation

* fix: crash due to force unwrap

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Louis <louis@jan.ai>
2025-09-23 07:52:19 +05:30
Akarshan Biswas
bf7f176741
feat: Prompt progress when streaming (#6503)
* feat: Prompt progress when streaming

- BE changes:
    - Add a `return_progress` flag to `chatCompletionRequest` and a corresponding `prompt_progress` payload in `chatCompletionChunk`. Introduce `chatCompletionPromptProgress` interface to capture cache, processed, time, and total token counts (sketched after this list).
    - Update the Llamacpp extension to always request progress data when streaming, enabling UI components to display real‑time generation progress and leverage llama.cpp’s built‑in progress reporting.
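
A shape sketch of the payloads described above; the four field names come from the list, while types and comments are inferred assumptions:

```typescript
// Field names are from the list above; types and comments are inferred.
interface chatCompletionPromptProgress {
  cache: number     // prompt tokens reused from the cache
  processed: number // prompt tokens processed so far
  time: number      // time spent processing
  total: number     // total prompt tokens to process
}

// Other chunk/request fields elided; only the new pieces are shown.
interface chatCompletionChunk {
  prompt_progress?: chatCompletionPromptProgress
}
interface chatCompletionRequest {
  stream?: boolean
  return_progress?: boolean // optional; set by the extension when streaming
}

// The UI can derive a percentage and hide the indicator at 100%.
const percent = (p: chatCompletionPromptProgress) =>
  Math.min(100, (p.processed / p.total) * 100)
```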

* Make return_progress optional

* chore: update ui prompt progress before streaming content

* chore: remove log

* chore: remove progress when percentage >= 100

* chore: set timeout prompt progress

* chore: move prompt progress outside streaming content

* fix: tests

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Louis <louis@jan.ai>
2025-09-22 20:37:27 +05:30
Louis
0d2c99a413
fix: prevent consecutive messages with same role (#6544)
* fix: prevent consecutive messages with same role

* fix: tests

* fix: first message should not be assistant (both message-normalization rules are sketched below)

* fix: tests
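
A minimal sketch of both rules; merging same-role neighbors is one plausible strategy, not necessarily what this PR actually does:

```typescript
interface Message { role: 'system' | 'user' | 'assistant'; content: string }

function normalizeMessages(messages: Message[]): Message[] {
  const out: Message[] = []
  for (const msg of messages) {
    // Rule 1: the conversation must not open with an assistant turn.
    if (out.length === 0 && msg.role === 'assistant') continue
    const prev = out[out.length - 1]
    // Rule 2: no two consecutive messages may share a role; merge instead.
    if (prev && prev.role === msg.role) {
      prev.content += '\n' + msg.content
    } else {
      out.push({ ...msg })
    }
  }
  return out
}
```
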
2025-09-22 19:27:45 +07:00
Louis
0f349f4b8c
fix: build 2025-09-19 11:13:39 +07:00
Louis
1ec9c29df6
fix: linter 2025-09-19 11:08:37 +07:00
Louis
3d8cfbf99a
fix: tests 2025-09-19 10:56:25 +07:00
Dinh Long Nguyen
c5c39f22e6 Merge branch 'fix/thread-rerender-issue' of https://github.com/menloresearch/jan into fix/thread-rerender-issue 2025-09-18 23:21:28 +07:00
Dinh Long Nguyen
a39c38e1fd fix re-render issue 2025-09-18 23:11:50 +07:00
Louis
508879e3ae fix: should not rerender thread message components when typing 2025-09-18 22:44:03 +07:00
Louis
6342956cd6 fix: reduce unnecessary rerenders due to current thread retrieval 2025-09-18 17:55:07 +07:00
Louis
2a2bc40dfe
fix: tests 2025-09-18 17:21:59 +07:00
Louis
707fdac2ce chore: remove duplicated block 2025-09-18 16:45:37 +07:00
Louis
e64607eb43 fix: linter 2025-09-18 16:44:16 +07:00
Louis
da69f3acec chore: uncomment irrelevant fix 2025-09-18 16:35:56 +07:00
Louis
241a90492e fix: thread rerender issue 2025-09-18 16:24:42 +07:00
Dinh Long Nguyen
0f85fce6ef
feat: add auth + google auth provider for web (#6505)
* handle google auth

* fix lint

* fix auto login button type

* update i18n language + UserProfileMenu position

* minor api rename for consistency
2025-09-18 11:11:14 +07:00
Dinh Long Nguyen
491012fa87
remove assistant from web (#6468) 2025-09-15 23:53:59 +07:00
Louis
cf87313f28
Merge pull request #6384 from maxx-ukoo/mk_add_configurable_timeout_to_local_api_server
Add model response timeout for local api server as configurable value
2025-09-15 21:26:07 +07:00
Louis
e78e4e5cca
Merge pull request #6278 from lugnicca/feat/model-selector
feat: add model selector (fetch from v1/models) when user adds a provider model
2025-09-15 20:23:22 +07:00
Dinh Long Nguyen
311a451005
Always allow MCP for web (#6462)
* mcp and extension settings disabled + always allow mcp tools on web

* fix tests
2025-09-15 20:13:46 +07:00
Maksym Krasovakyi
71e2e24112 Add model response timeout for local api server as configurable value via UI 2025-09-15 14:25:09 +03:00
Louis
43431c26e7
Merge branch 'dev' into feat/model-selector 2025-09-15 12:02:25 +07:00
Faisal Amir
cbd2651a63
chore: update copy and refresh list when importing from local machine 2025-09-11 09:52:09 +05:30
Faisal Amir
ba4dc6d1eb
enhancement: update UI dialog for llamacpp backend update 2025-09-11 09:52:09 +05:30
Akarshan Biswas
7a174e621a
feat: Smart model management (#6390)
* feat: Smart model management

* **New UI option** – `memory_util` added to `settings.json` with a dropdown (high / medium / low) to let users control how aggressively the engine uses system memory.
* **Configuration updates** – `LlamacppConfig` now includes `memory_util`; the extension class stores it in a new `memoryMode` property and handles updates through `updateConfig`.
* **System memory handling**
  * Introduced `SystemMemory` interface and `getTotalSystemMemory()` to report combined VRAM + RAM.
  * Added helper methods `getKVCachePerToken`, `getLayerSize`, and a new `ModelPlan` type.
* **Smart model‑load planner** – `planModelLoad()` computes (see the sketch after this list):
  * Number of GPU layers that can fit in usable VRAM.
  * Maximum context length based on KV‑cache size and the selected memory utilization mode (high/medium/low).
  * Whether KV‑cache must be off‑loaded to CPU and the overall loading mode (GPU, Hybrid, CPU, Unsupported).
  * Detailed logging of the planning decision.
* **Improved support check** – `isModelSupported()` now:
  * Uses the combined VRAM/RAM totals from `getTotalSystemMemory()`.
  * Applies an 80% usable‑memory heuristic.
  * Returns **GREEN** only when both weights and KV‑cache fit in VRAM, **YELLOW** when they fit only in total memory or require CPU off‑load, and **RED** when the model cannot fit at all.
* **Cleanup** – Removed unused `GgufMetadata` import; updated imports and type definitions accordingly.
* **Documentation/comments** – Added explanatory JSDoc comments for the new methods and clarified the return semantics of `isModelSupported`.
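
A condensed sketch of the planner and support check described above; the 80% figure comes from the description, while the helper signatures and per-mode scale factors are illustrative assumptions:

```typescript
type LoadMode = 'GPU' | 'Hybrid' | 'CPU' | 'Unsupported'
type Support = 'GREEN' | 'YELLOW' | 'RED'
type MemoryMode = 'high' | 'medium' | 'low'

interface ModelPlan {
  gpuLayers: number
  maxContext: number
  offloadKVToCPU: boolean
  mode: LoadMode
}

declare function getTotalSystemMemory(): { vram: number; ram: number }
declare function getLayerSize(modelPath: string): number      // bytes per layer
declare function getKVCachePerToken(modelPath: string): number // bytes per token

// GREEN: weights + KV cache fit in VRAM; YELLOW: fit only in combined
// memory (or need CPU off-load); RED: cannot fit at all.
function isModelSupported(modelBytes: number, kvBytes: number): Support {
  const { vram, ram } = getTotalSystemMemory()
  const need = modelBytes + kvBytes
  if (need <= 0.8 * vram) return 'GREEN'
  if (need <= 0.8 * (vram + ram)) return 'YELLOW' // 80% usable-memory heuristic
  return 'RED'
}

function planModelLoad(modelPath: string, totalLayers: number, memoryMode: MemoryMode): ModelPlan {
  const { vram } = getTotalSystemMemory()
  const usableVram = 0.8 * vram
  const layerBytes = getLayerSize(modelPath)

  // How many layers fit in usable VRAM.
  const gpuLayers = Math.min(totalLayers, Math.floor(usableVram / layerBytes))

  // Leftover VRAM, scaled by the memory-util mode, bounds the KV cache
  // and therefore the maximum context length.
  const scale = { high: 1.0, medium: 0.75, low: 0.5 }[memoryMode]
  const kvBudget = Math.max(0, usableVram - gpuLayers * layerBytes) * scale
  const maxContext = Math.floor(kvBudget / getKVCachePerToken(modelPath))

  // If no KV budget remains on GPU, the cache must be off-loaded to CPU.
  const offloadKVToCPU = gpuLayers > 0 && maxContext === 0
  const mode: LoadMode =
    gpuLayers === totalLayers ? 'GPU' : gpuLayers > 0 ? 'Hybrid' : 'CPU'
  return { gpuLayers, maxContext, offloadKVToCPU, mode }
}
```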

* chore: migrate no_kv_offload from llamacpp setting to model setting

* chore: add UI auto optimize model setting

* feat: improve model loading planner with mmproj support and smarter memory budgeting

* Extend `ModelPlan` with optional `noOffloadMmproj` flag to indicate when a multimodal projector can stay in VRAM.
* Add `mmprojPath` parameter to `planModelLoad` and calculate its size, attempting to keep it on GPU when possible.
* Refactor system memory detection:
  * Use `used_memory` (actual free RAM) instead of total RAM for budgeting.
  * Introduced `usableRAM` placeholder for future use.
* Rewrite KV‑cache size calculation:
  * Properly handle GQA models via `attention.head_count_kv`.
  * Compute bytes per token as `nHeadKV * headDim * 2 * 2 * nLayer` (worked example after this list).
* Replace the old 70% VRAM heuristic with a more flexible budget:
  * Reserve a fixed VRAM amount and apply an overhead factor.
  * Derive usable system RAM from total memory minus VRAM.
* Implement a robust allocation algorithm:
  * Prioritize placing the mmproj in VRAM.
  * Search for the best balance of GPU layers and context length.
  * Fallback strategies for hybrid and pure‑CPU modes with detailed safety checks.
* Add extensive validation of model size, KV‑cache size, layer size, and memory mode.
* Improve logging throughout the planning process for easier debugging.
* Adjust final plan return shape to include the new `noOffloadMmproj` field.
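
A worked example of the stated bytes-per-token formula; the sample head counts and dimensions are illustrative, not taken from a specific model:

```typescript
// bytesPerToken = nHeadKV * headDim * 2 (K and V) * 2 (fp16 bytes) * nLayer
function kvCacheBytesPerToken(nHeadKV: number, headDim: number, nLayer: number): number {
  return nHeadKV * headDim * 2 * 2 * nLayer
}

// e.g. a GQA model with 8 KV heads, head_dim 128, 32 layers:
// 8 * 128 * 2 * 2 * 32 = 131072 bytes/token, so an 8192-token context
// needs 131072 * 8192 = 1,073,741,824 bytes ≈ 1 GiB of KV cache.
const perToken = kvCacheBytesPerToken(8, 128, 32) // 131072
const ctxBytes = perToken * 8192                  // 1 GiB
```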

* remove unused variable

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-09-11 09:48:03 +05:30
lugnicca
2db9af94fa fix: use serviceHub to fetch models and fix error message on app 2025-09-08 17:45:35 +02:00
lugnicca
66af5c7386 fix: use webprovider services to fetch models 2025-09-05 19:56:28 +02:00
lugnicca
045778406f Merge branch 'dev' into feat/model-selector 2025-09-05 19:51:17 +02:00
Dinh Long Nguyen
a30eb7f968
feat: Jan Web (reusing Jan Desktop UI) (#6298)
* add platform guards

* add service management

* fix types

* move to zustand for servicehub

* update App Updater

* update tauri missing move

* update app updater

* refactor: move PlatformFeatures to separate const file

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* change tauri fetch name

* update implementation

* update extension fetch

* make web version run properly

* disabled unused web settings

* fix all tests

* fix lint

* fix tests

* add mock for extension

* fix build

* update make and mise

* fix tsconfig for web-extensions

* fix loader type

* cleanup

* fix test

* update error handling + mcp should be working

* Update mcp init

* use separate is_web_app build property

* Remove fixed model catalog url

* fix additional tests

* fix download issue (event emitter not implemented correctly)

* Update Title html

* fix app logs

* update root tsx render timing

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-05 01:47:46 +07:00
lugnicca
3d0ce15fe8 fix: prevent stale provider model requests from polluting UI state 2025-09-02 18:19:12 +02:00
Faisal Amir
d922d7454d fix: mcp sort list 2025-08-27 18:25:13 +07:00
Faisal Amir
742f9c1a70 fix: sort list when adding server 2025-08-27 18:17:55 +07:00
Faisal Amir
75d189900c fix: mcp cleanup dropdown tool availability and sort list 2025-08-27 18:08:23 +07:00
Faisal Amir
62eb422934 chore: show model setting only for local provider 2025-08-25 11:26:56 +07:00
lugnicca
aa568e6290 fix: remove ModelProvider type 2025-08-23 15:07:42 +02:00
lugnicca
1bf5802a68 refactor: update MockModelProvider type to use ModelProvider and clean up test setup 2025-08-23 02:37:15 +02:00
lugnicca
3339629747 test: add unit tests for ModelCombobox, useProviderModels and providers 2025-08-23 02:37:14 +02:00
lugnicca
5d9c3ab462 feat: add model selector with fetching from /v1/models endpoints when adding models 2025-08-23 02:36:38 +02:00
Louis
8e7378b70f
Merge pull request #6255 from menloresearch/fix/remove-experimental-toggle
fix: remove experimental toggle
2025-08-21 12:51:25 +07:00
Faisal Amir
7b9e752301
Merge pull request #6250 from menloresearch/feat/local-api-server
feat: run on startup setting for local api server
2025-08-21 12:43:13 +07:00
Louis
8de5c1709b
fix: test 2025-08-21 12:01:45 +07:00
Louis
cfbc6b9150
fix: remove experimental toggle 2025-08-21 11:54:34 +07:00
Louis
e6587844d0
Merge branch 'dev' into current-date-instruction 2025-08-21 11:41:30 +07:00
Louis
6850dda108
feat: MCP server error handling 2025-08-20 23:42:12 +07:00
Faisal Amir
39df7b22b9 chore: rename key runOnStartup in useLocalApiServer hook 2025-08-20 22:37:45 +07:00
Faisal Amir
cfa68c5500 feat: run on startup setting for local api server 2025-08-20 21:56:53 +07:00
Louis
c018713676
feat: allow user to set max_attempt for MCP to avoid looping 2025-08-20 12:42:54 +07:00
Faisal Amir
5481ee9e35
Merge pull request #6134 from menloresearch/feat/attachment-ui
feat: attachment UI
2025-08-20 10:04:32 +07:00
Kamal Fariz Mahyuddin
df27def9cb
Merge branch 'dev' into current-date-instruction 2025-08-19 14:40:08 -07:00
Louis
91f05b8f32
feat: add tool call cancellation 2025-08-19 23:27:12 +07:00