83 Commits

Author SHA1 Message Date
Faisal Amir
462b05e612 chore: fix conflict in analytics revert 2025-10-15 10:35:36 +07:00
dinhlongviolin1
946b347f44 fix: lint 2025-10-15 00:21:10 +07:00
Dinh Long Nguyen
b23e88f078
Merge branch 'dev' into feat/file-attachment 2025-10-14 14:06:17 +07:00
Trang Le
476fdd6040
feat: Enable new prompt input while waiting for an answer (#6676)
* enable new prompt input while waiting for an answer

* correct spelling of handleSendMessage function

* remove test for disabling input while streaming content
2025-10-14 14:04:52 +07:00
Dinh Long Nguyen
fa8b3664cb
Merge branch 'dev' into feat/file-attachment 2025-10-14 14:00:10 +07:00
Faisal Amir
584daa9682 chore: revert posthog track event 2025-10-11 21:46:15 +07:00
Akarshan
31f9501d8e
feat: Optimize state updates in server and model checks
- Added shallow equality guard for `connectedServers` state to prevent redundant updates when the fetched server list hasn't changed.
- Updated error handling for server fetch to only clear the state when it actually contains data.
- Introduced `newHasActiveModels` variable and conditional updater for `hasActiveModels` to avoid unnecessary state changes.
- Adjusted error handling for active model fetch to only set `hasActiveModels` to `false` when the current state differs.

These changes reduce needless re‑renders and improve component performance.
2025-10-10 20:25:17 +05:30
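In code, the guard described above might look like this minimal TypeScript sketch; the `ServerInfo` shape and the function names are illustrative, not Jan's actual types:

```typescript
// Hypothetical shape; the real Jan types differ.
interface ServerInfo {
  name: string
  url: string
}

// Shallow equality guard: same length, same entries in order.
function sameServers(a: ServerInfo[], b: ServerInfo[]): boolean {
  return (
    a.length === b.length &&
    a.every((s, i) => s.name === b[i].name && s.url === b[i].url)
  )
}

// Conditional updater: keep the previous reference when nothing changed,
// so React (or zustand) sees an identical value and skips re-rendering.
function nextServers(prev: ServerInfo[], fetched: ServerInfo[]): ServerInfo[] {
  return sameServers(prev, fetched) ? prev : fetched
}
```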
Dinh Long Nguyen
fc784620e0 fix tests 2025-10-09 04:28:08 +07:00
Dinh Long Nguyen
340042682a ui ux enhancement 2025-10-09 03:48:51 +07:00
Dinh Long Nguyen
ff93dc3c5c Merge branch 'dev' into feat/file-attachment 2025-10-08 16:34:45 +07:00
Dinh Long Nguyen
510c4a5188 working attachments 2025-10-08 16:08:40 +07:00
Faisal Amir
e5be683a97
Merge pull request #6755 from menloresearch/chore/analytic-model-used
chore: create event to track model provider and model id
2025-10-07 20:36:42 +07:00
Faisal Amir
7d615b4163 chore: type fixed 2025-10-07 18:10:25 +07:00
Faisal Amir
61c3fd4b5a
Merge pull request #6727 from menloresearch/fix/prompt-token
fix: prompt token
2025-10-07 18:05:29 +07:00
Faisal Amir
310ca7cb23 chore: create message_sent event to track model provider and model id 2025-10-07 18:04:58 +07:00
Dinh Long Nguyen
d5110de67b
feat: improve projects (#6698)
* decouple successfully

* only show movable projects for project items

* handle deleting conversations when a project is removed

* fix leftpanel assignment

* fix lint
2025-10-01 22:47:38 +07:00
Dinh Long Nguyen
f33c2c205a
feat: web add search button for extension (#6671)
* add search button for web extension

* change button color and behavior

* Update extensions-web/src/mcp-web/components/WebSearchButton.tsx

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-30 21:39:08 +07:00
Faisal Amir
e7a1a06395 feat: thread organization folder 2025-09-25 10:12:08 +07:00
Alexey Haidamaka
5adc0d9d46
add full-width model names (#6350) 2025-09-23 10:14:21 +07:00
Akarshan Biswas
885da29f28
feat: add getTokensCount method to compute token usage (#6467)
* feat: add getTokensCount method to compute token usage

Implemented a new async `getTokensCount` function in the LLaMA.cpp extension.
The method validates the model session, checks process health, applies the request template, and tokenizes the resulting prompt to return the token count. Includes detailed error handling for crashed models and API failures, enabling callers to assess token usage before sending completions.

* Fix: typos

* chore: update ui token usage

* chore: remove unused code

* feat: add image token handling for multimodal LlamaCPP models

Implemented support for counting image tokens when using vision-enabled models:
- Extended `SessionInfo` with optional `mmprojPath` to store the multimodal projector file.
- Propagated `mmproj_path` from the Tauri plugin into the session info.
- Added import of `chatCompletionRequestMessage` and enhanced token calculation logic in the LlamaCPP extension:
  - Detects image content in messages.
  - Reads GGUF metadata from `mmprojPath` to compute accurate image token counts.
  - Provides a fallback estimation if metadata reading fails.
  - Returns the sum of text and image tokens.
- Introduced helper methods `calculateImageTokens` and `estimateImageTokensFallback`.
- Minor clean‑ups such as comment capitalization and debug logging.

* chore: update FE send params message include content type image_url

* fix mmproj path from session info and num tokens calculation

* fix: Correct image token estimation calculation in llamacpp extension

This commit addresses an inaccurate token count for images in the llama.cpp extension.

The previous logic incorrectly calculated the token count based on image patch size and dimensions. This has been replaced with a more precise method that uses the clip.vision.projection_dim value from the model metadata.

Additionally, unnecessary debug logging was removed, and a new log was added to show the mmproj metadata for improved visibility.

* fix per image calc

* fix: crash due to force unwrap

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Louis <louis@jan.ai>
2025-09-23 07:52:19 +05:30
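A rough TypeScript sketch of the token-count flow this PR describes; the metadata shape, the fallback constant, and the `projection_dim` mapping are assumptions, since the log does not spell out the exact formula:

```typescript
// Hypothetical metadata shape; the real extension reads GGUF metadata
// from the mmproj file via the Tauri plugin.
type GgufMetadata = Record<string, number | string | undefined>

// Illustrative fallback, not the extension's actual constant.
const FALLBACK_IMAGE_TOKENS = 576

function estimateImageTokensFallback(): number {
  return FALLBACK_IMAGE_TOKENS
}

// The log says the per-image count is derived from
// `clip.vision.projection_dim`; the exact mapping isn't given there,
// so treat this direct use of the value as a placeholder.
function calculateImageTokens(meta: GgufMetadata | null): number {
  const dim = meta?.['clip.vision.projection_dim']
  if (typeof dim !== 'number' || dim <= 0) return estimateImageTokensFallback()
  return dim
}

// Total usage is the sum of text tokens and per-image tokens.
function getTokensCount(textTokens: number, imageMetas: (GgufMetadata | null)[]): number {
  return textTokens + imageMetas.reduce((sum, m) => sum + calculateImageTokens(m), 0)
}
```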
Louis
508879e3ae fix: should not rerender thread message components when typing 2025-09-18 22:44:03 +07:00
Louis
241a90492e fix: thread rerender issue 2025-09-18 16:24:42 +07:00
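The log does not show the fix for these two rerender issues; a common remedy for this class of bug is memoizing the message components so typing in the chat input no longer re-renders the whole thread. A generic React sketch, not the actual Jan code:

```tsx
import React, { memo } from 'react'

// Hypothetical message shape.
interface Message {
  id: string
  text: string
}

// Memoized item: re-renders only when its own message prop changes,
// not on every keystroke in the (separately stateful) chat input.
const ThreadMessage = memo(function ThreadMessage({ message }: { message: Message }) {
  return <div>{message.text}</div>
})

export function ThreadList({ messages }: { messages: Message[] }) {
  return (
    <>
      {messages.map((m) => (
        <ThreadMessage key={m.id} message={m} />
      ))}
    </>
  )
}
```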
Faisal Amir
86dcfc10cf enhancement: rollback edit capabilities for local model 2025-09-10 19:43:44 +07:00
Faisal Amir
1b035fd2f1 feat: allow user import model include mmproj file 2025-09-08 00:00:46 +07:00
Dinh Long Nguyen
a30eb7f968
feat: Jan Web (reusing Jan Desktop UI) (#6298)
* add platform guards

* add service management

* fix types

* move to zustand for servicehub

* update App Updater

* update tauri missing move

* update app updater

* refactor: move PlatformFeatures to separate const file

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* change tauri fetch name

* update implementation

* update extension fetch

* make web version run properly

* disabled unused web settings

* fix all tests

* fix lint

* fix tests

* add mock for extension

* fix build

* update make and mise

* fix tsconfig for web-extensions

* fix loader type

* cleanup

* fix test

* update error handling + mcp should be working

* Update mcp init

* use separate is_web_app build property

* Remove fixed model catalog url

* fix additional tests

* fix download issue (event emitter not implemented correctly)

* Update Title html

* fix app logs

* update root tsx render timing

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-05 01:47:46 +07:00
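The `PlatformFeatures` const file with platform guards mentioned in this PR might look roughly like the sketch below; the flag names and the web-app detection are illustrative assumptions, not Jan's actual implementation:

```typescript
// Hypothetical detection; the PR mentions a separate `is_web_app`
// build property, which a real build would use instead.
const isWebApp = typeof window !== 'undefined' && !('__TAURI__' in window)

// Illustrative flags only; the real const file in Jan differs.
export const PlatformFeatures = {
  localModels: !isWebApp, // llama.cpp only runs in the desktop app
  appUpdater: !isWebApp,  // self-update is a desktop concern
  mcpServers: true,       // MCP is made to work on both targets per the log
  webSearch: isWebApp,    // web-only search extension
} as const

// Call sites guard on a flag instead of sniffing the platform directly.
export function canShowAppUpdater(): boolean {
  return PlatformFeatures.appUpdater
}
```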
Faisal Amir
12118192ef chore: fix codeblock id to avoid duplicated state 2025-09-01 13:02:53 +07:00
Faisal Amir
b93f77a9f5 fix: handle copy image from browser in linux 2025-08-26 21:37:32 +07:00
Faisal Amir
b915f1f674 fix: handle paste image on linux 2025-08-26 20:44:23 +07:00
Faisal Amir
e73a710c06 fix/update-ui-info 2025-08-25 16:45:59 +07:00
Faisal Amir
8d06c3addf chore: add vision tooltips 2025-08-25 10:47:18 +07:00
Faisal Amir
45ba949d96 fix: toggle vision for remote model 2025-08-25 10:28:18 +07:00
Akarshan Biswas
510c70bdf7
feat: Add model compatibility check and memory estimation (#6243)
* feat: Add model compatibility check and memory estimation

This commit introduces a new feature to check if a given model is supported based on available device memory.

The change includes:
- A new `estimateKVCache` method that calculates the required memory for the model's KV cache. It uses GGUF metadata such as `block_count`, `head_count`, `key_length`, and `value_length` to perform the calculation.
- An `isModelSupported` method that combines the model file size and the estimated KV cache size to determine the total memory required. It then checks if any available device has sufficient free memory to load the model.
- A more user-friendly error message for the `version_backend` check, suggesting a stable internet connection as a potential fix for backend setup failures.

This functionality helps prevent the application from attempting to load models that would exceed the device's memory capacity, leading to more stable and predictable behavior.

fixes: #5505

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Extend this to available system RAM if GGML device is not available

* fix: Improve model metadata and memory checks

This commit refactors the logic for checking if a model is supported by a system's available memory.

**Key changes:**
- **Remote model support**: The `read_gguf_metadata` function can now fetch metadata from a remote URL by reading the file in chunks.
- **Improved KV cache size calculation**: The KV cache size is now estimated more accurately by using `attention.key_length` and `attention.value_length` from the GGUF metadata, with a fallback to `embedding_length`.
- **Granular memory check statuses**: The `isModelSupported` function now returns a more specific status (`'RED'`, `'YELLOW'`, `'GREEN'`) to indicate whether the model weights or the KV cache are too large for the available memory.
- **Consolidated logic**: The logic for checking local and remote models has been consolidated into a single `isModelSupported` function, improving code clarity and maintainability.

These changes provide more robust and informative model compatibility checks, especially for models hosted on remote servers.

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Make ctx_size optional and use sum free memory across ggml devices

* feat: hub and dropdown model selection handle model compatibility

* feat: update model info badge color

* chore: enable detail page to get compatibility model

* chore: update copy

* chore: update shrink indicator UI

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-08-21 16:13:50 +05:30
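As a worked sketch of the estimate this PR describes, assuming the usual KV-cache layout (layers x context x KV heads x (K + V) x bytes per element) and f16 storage; the key names and traffic-light thresholds are simplified from the log:

```typescript
// Illustrative GGUF-derived parameters; real keys are prefixed by
// architecture (e.g. `llama.block_count`).
interface GgufKvParams {
  blockCount: number  // number of transformer layers
  headCountKv: number // KV heads (assumption; the log says `head_count`)
  keyLength: number   // attention.key_length, with embedding_length fallback
  valueLength: number // attention.value_length, same fallback
}

// Rough KV-cache size in bytes for a given context length,
// assuming f16 (2 bytes) per element.
function estimateKVCacheBytes(p: GgufKvParams, ctxSize: number): number {
  const bytesPerElement = 2 // f16
  return p.blockCount * ctxSize * p.headCountKv * (p.keyLength + p.valueLength) * bytesPerElement
}

// Traffic-light verdict like the one the commit describes.
function isModelSupported(
  modelBytes: number,
  kvBytes: number,
  freeBytes: number
): 'RED' | 'YELLOW' | 'GREEN' {
  if (modelBytes > freeBytes) return 'RED'             // weights alone don't fit
  if (modelBytes + kvBytes > freeBytes) return 'YELLOW' // weights fit, KV cache doesn't
  return 'GREEN'
}
```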
Louis
cfbc6b9150
fix: remove experimental toggle 2025-08-21 11:54:34 +07:00
Faisal Amir
5481ee9e35
Merge pull request #6134 from menloresearch/feat/attachment-ui
feat: attachment UI
2025-08-20 10:04:32 +07:00
Faisal Amir
87af59b65d chore: use image icon instead of paperclip 2025-08-20 09:46:57 +07:00
Louis
91f05b8f32
feat: add tool call cancellation 2025-08-19 23:27:12 +07:00
Faisal Amir
5155f19c9b chore: update data test id chat input 2025-08-19 22:56:16 +07:00
Faisal Amir
07b1101736 chore: enable attachment icon for remote provider 2025-08-19 22:46:45 +07:00
Faisal Amir
80dc491f9d chore: conditional attachment and drag-file support in chat input 2025-08-19 22:46:45 +07:00
Faisal Amir
e3eb8e909b chore: attachment icon conditional 2025-08-19 22:46:45 +07:00
Faisal Amir
067f8b5447 chore: update attachment icon alignment 2025-08-19 20:00:47 +07:00
Faisal Amir
9f39f0cdb8 fix: paste image chat input 2025-08-19 19:51:32 +07:00
Faisal Amir
e04bb86171 chore: prevent dragged images from replacing the window, and enable copy/paste image shortcuts 2025-08-19 19:51:02 +07:00
Faisal Amir
f70449fd98 chore: remove pdf attachment 2025-08-19 19:51:02 +07:00
Faisal Amir
ec9925ed5a chore: update copy validation format type 2025-08-19 19:51:01 +07:00
Faisal Amir
f95e17c047 chore: disable pdf for local model 2025-08-19 19:51:01 +07:00
Faisal Amir
cef3e122ff chore: send attachment files when sending a message 2025-08-19 19:51:01 +07:00
Faisal Amir
e9bd0f0bec chore: update alignment 2025-08-19 19:51:01 +07:00
Faisal Amir
5f1cb67ffc feat: enable attachment UI 2025-08-19 19:51:01 +07:00
Louis
c8d9592ab8
chore: mcp group server, action and import json 2025-08-15 11:37:21 +07:00