353 Commits

Author SHA1 Message Date
Louis
c9d165e65c
Merge branch 'dev' into fix/thread-rerender-issue 2025-09-19 10:34:08 +07:00
Louis
ebb6837437 chore: sync latest 2025-09-19 10:30:03 +07:00
Louis
508879e3ae fix: should not rerender thread message components when typing 2025-09-18 22:44:03 +07:00
Dinh Long Nguyen
359dd8f41e
Merge pull request #6514 from menloresearch/feat/web-gtag
feat: Add GA Measurement and change keyboard bindings on web
2025-09-18 20:45:41 +07:00
Dinh Long Nguyen
645548e931
Merge pull request #6516 from menloresearch/release/v0.6.10 2025-09-18 19:15:54 +07:00
Louis
f237936b0c
clean: unused import 2025-09-18 18:49:12 +07:00
Louis
5f6a68d844
fix: avoid the entire app layout re-render on route change 2025-09-18 18:44:21 +07:00
Louis
be83395f69 fix: reduce app layout rerender due to router state update 2025-09-18 18:26:03 +07:00
Louis
6342956cd6 fix: reduce unnecessary rerender due to current thread retrieval 2025-09-18 17:55:07 +07:00
Louis
f271e8fe9c chore: clean up console log 2025-09-18 16:31:19 +07:00
Louis
241a90492e fix: thread rerender issue 2025-09-18 16:24:42 +07:00
Vanalite
21d0943aa4 chore: Separate configuration for android build in release mode 2025-09-18 11:32:52 +07:00
Dinh Long Nguyen
0f85fce6ef
feat: add auth + google auth provider for web (#6505)
* handle google auth

* fix lint

* fix auto login button type

* update i18n language + UserProfileMenu position

* minor api rename for consistency
2025-09-18 11:11:14 +07:00
Vanalite
adfcb35ca6 Merge remote-tracking branch 'origin/dev' into mobile/init-mobile-app 2025-09-17 11:22:57 +07:00
Nghia Doan
aae1936620
Update web-app/src/routes/index.tsx
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-09-17 08:18:52 +07:00
Vanalite
8fa79aa394 chore: Adjust paddings to save some space for the top nav bar 2025-09-16 21:35:50 +07:00
Vanalite
224bee5c66 feat: Adjust UI for mobile res
Feature:
- Adjust home screen and chat screen for mobile devices
- Fix tests for both FE and BE
Self-test:
- Confirm runnable on both Android and iOS
- Confirm runnable on desktop app
- All test suites passed
- Working with ChatGPT API
2025-09-16 20:38:56 +07:00
Faisal Amir
0945eaedcd fix: loader when importing 2025-09-16 16:53:47 +07:00
Faisal Amir
0e972646e8
Merge pull request #6465 from menloresearch/fix/attachment-edit-message
fix: attachment edit message
2025-09-16 11:17:17 +07:00
Dinh Long Nguyen
491012fa87
remove assistant from web (#6468) 2025-09-15 23:53:59 +07:00
Faisal Amir
7b9b9666cb fix: improve edit message with attachment image 2025-09-15 21:48:19 +07:00
Faisal Amir
3b22f0b7c0 fix: improve edit message with attachment image 2025-09-15 21:48:01 +07:00
Louis
cf87313f28
Merge pull request #6384 from maxx-ukoo/mk_add_configurable_timeout_to_local_api_server
Add model response timeout for local api server as configurable value
2025-09-15 21:26:07 +07:00
Dinh Long Nguyen
311a451005
Always allow MCP for web (#6462)
* mcp and extension setting disabled + always allow mcp tools on web

* fix tests
2025-09-15 20:13:46 +07:00
Maksym Krasovakyi
71e2e24112 Add model response timeout for local api server as configurable value via UI 2025-09-15 14:25:09 +03:00
Faisal Amir
18114c0a15 fix: pathname file install BE 2025-09-15 18:05:11 +07:00
Dinh Long Nguyen
0771b998a5
Fix: Web Services Improvement
2025-09-15 09:08:30 +07:00
Dinh Long Nguyen
b5b6e1dc19
add mcp for web (#6411)
* add mcp for web

* update /jan/v1 endpoint to /v1

* update mise and makefile

* update yarn lock

* use mcp oauth properly
2025-09-12 12:14:10 +07:00
Faisal Amir
6067ffe107
chore: fix conflict 2025-09-11 09:52:09 +05:30
Faisal Amir
cbd2651a63
chore: update copy and refresh list when import from local machine 2025-09-11 09:52:09 +05:30
Faisal Amir
ba4dc6d1eb
enhancement: update ui dialog update llamacpp backend 2025-09-11 09:52:09 +05:30
Akarshan Biswas
7a174e621a
feat: Smart model management (#6390)
* feat: Smart model management

* **New UI option** – `memory_util` added to `settings.json` with a dropdown (high / medium / low) to let users control how aggressively the engine uses system memory.
* **Configuration updates** – `LlamacppConfig` now includes `memory_util`; the extension class stores it in a new `memoryMode` property and handles updates through `updateConfig`.
* **System memory handling**
  * Introduced `SystemMemory` interface and `getTotalSystemMemory()` to report combined VRAM + RAM.
  * Added helper methods `getKVCachePerToken`, `getLayerSize`, and a new `ModelPlan` type.
* **Smart model‑load planner** – `planModelLoad()` computes:
  * Number of GPU layers that can fit in usable VRAM.
  * Maximum context length based on KV‑cache size and the selected memory utilization mode (high/medium/low).
  * Whether KV‑cache must be off‑loaded to CPU and the overall loading mode (GPU, Hybrid, CPU, Unsupported).
  * Detailed logging of the planning decision.
* **Improved support check** – `isModelSupported()` now:
  * Uses the combined VRAM/RAM totals from `getTotalSystemMemory()`.
  * Applies an 80% usable‑memory heuristic.
  * Returns **GREEN** only when both weights and KV‑cache fit in VRAM, **YELLOW** when they fit only in total memory or require CPU off‑load, and **RED** when the model cannot fit at all.
* **Cleanup** – Removed unused `GgufMetadata` import; updated imports and type definitions accordingly.
* **Documentation/comments** – Added explanatory JSDoc comments for the new methods and clarified the return semantics of `isModelSupported`.

* chore: migrate no_kv_offload from llamacpp setting to model setting

* chore: add UI auto optimize model setting

* feat: improve model loading planner with mmproj support and smarter memory budgeting

* Extend `ModelPlan` with optional `noOffloadMmproj` flag to indicate when a multimodal projector can stay in VRAM.
* Add `mmprojPath` parameter to `planModelLoad` and calculate its size, attempting to keep it on GPU when possible.
* Refactor system memory detection:
  * Use `used_memory` (actual free RAM) instead of total RAM for budgeting.
  * Introduced `usableRAM` placeholder for future use.
* Rewrite KV‑cache size calculation:
  * Properly handle GQA models via `attention.head_count_kv`.
  * Compute bytes per token as `nHeadKV * headDim * 2 * 2 * nLayer`.
* Replace the old 70 % VRAM heuristic with a more flexible budget:
  * Reserve a fixed VRAM amount and apply an overhead factor.
  * Derive usable system RAM from total memory minus VRAM.
* Implement a robust allocation algorithm:
  * Prioritize placing the mmproj in VRAM.
  * Search for the best balance of GPU layers and context length.
  * Fallback strategies for hybrid and pure‑CPU modes with detailed safety checks.
* Add extensive validation of model size, KV‑cache size, layer size, and memory mode.
* Improve logging throughout the planning process for easier debugging.
* Adjust final plan return shape to include the new `noOffloadMmproj` field.

* remove unused variable

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-09-11 09:48:03 +05:30
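The per-token KV-cache estimate and memory-mode budgeting described in the entry above can be sketched roughly as follows. This is a minimal TypeScript illustration: the interface shape, field names, and utilization ratios are assumptions for clarity, not the extension's actual code.

```typescript
// Illustrative shapes only; the real planner reads these values from GGUF metadata.
interface ModelMeta {
  headCount: number       // attention.head_count
  headCountKV?: number    // attention.head_count_kv (present on GQA models)
  embeddingLength: number // embedding_length
  layerCount: number      // block_count (number of transformer layers)
}

type MemoryMode = 'high' | 'medium' | 'low'

// Bytes of KV cache per context token, following the
// nHeadKV * headDim * 2 (K and V) * 2 (fp16 bytes) * nLayer formula above.
function kvCacheBytesPerToken(meta: ModelMeta): number {
  const nHeadKV = meta.headCountKV ?? meta.headCount
  const headDim = meta.embeddingLength / meta.headCount
  return nHeadKV * headDim * 2 * 2 * meta.layerCount
}

// Rough maximum context length for a free-memory budget under a utilization mode.
// The ratios here are placeholders, not the values the extension ships with.
function maxContextForBudget(meta: ModelMeta, freeBytes: number, mode: MemoryMode): number {
  const utilization = { high: 0.9, medium: 0.7, low: 0.5 }[mode]
  return Math.floor((freeBytes * utilization) / kvCacheBytesPerToken(meta))
}
```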
Faisal Amir
86dcfc10cf enhancement: rollback edit capabilities for local model 2025-09-10 19:43:44 +07:00
Faisal Amir
5e30e10bf4
Merge pull request #6388 from menloresearch/feat/import-vision-model
feat: allow user import model include mmproj file
2025-09-09 09:41:58 +07:00
Faisal Amir
836990b7d9 chore: update fn check mmproj file 2025-09-08 11:10:00 +07:00
Faisal Amir
1b035fd2f1 feat: allow user import model include mmproj file 2025-09-08 00:00:46 +07:00
Faisal Amir
a49008e02d enhancement: responsive dialog modals 2025-09-06 21:48:09 +07:00
Dinh Long Nguyen
d490174544
feat: Web use jan model (#6374)
* call jan api

* fix lint

* ci: add jan server web

* chore: add Dockerfile

* clean up ui ux and support for reasoning fields, make app spa

* add logo

* chore: update tag for preview image

* chore: update k8s service name

* chore: update image tag and image name

* fixed test

---------

Co-authored-by: Minh141120 <minh.itptit@gmail.com>
Co-authored-by: Nguyen Ngoc Minh <91668012+Minh141120@users.noreply.github.com>
2025-09-05 16:18:30 +07:00
Dinh Long Nguyen
a30eb7f968
feat: Jan Web (reusing Jan Desktop UI) (#6298)
* add platform guards

* add service management

* fix types

* move to zustand for servicehub

* update App Updater

* update tauri missing move

* update app updater

* refactor: move PlatformFeatures to separate const file

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* change tauri fetch name

* update implementation

* update extension fetch

* make web version run properly

* disabled unused web settings

* fix all tests

* fix lint

* fix tests

* add mock for extension

* fix build

* update make and mise

* fix tsconfig for web-extensions

* fix loader type

* cleanup

* fix test

* update error handling + mcp should be working

* Update mcp init

* use separate is_web_app build property

* Remove fixed model catalog url

* fix additional tests

* fix download issue (event emitter not implemented correctly)

* Update Title html

* fix app logs

* update root tsx render timing

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-05 01:47:46 +07:00
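The "platform guards" idea from the entry above might look roughly like the sketch below, assuming a build-time web flag and a `PlatformFeatures` map; both names are hypothetical here and stand in for whatever the codebase actually uses.

```typescript
// Hypothetical sketch: one map describing what each build target supports,
// consulted before touching a desktop-only (Tauri) or web-only code path.
declare const IS_WEB_APP: boolean // assumed to be injected at build time

export const PlatformFeatures = {
  appUpdater: !IS_WEB_APP,     // auto-update only makes sense on desktop
  localApiServer: !IS_WEB_APP, // the local API server only exists in the Tauri build
  mcp: true,                   // available on both targets per the later MCP PRs
} as const

export function withFeature<T>(feature: keyof typeof PlatformFeatures, fn: () => T): T | undefined {
  // Guard: run the callback only when the current build supports the feature.
  return PlatformFeatures[feature] ? fn() : undefined
}
```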
Faisal Amir
b2c4e89402
Merge pull request #6364 from menloresearch/feat/local-api-server
feat: allow viewing API key when local server status is running
2025-09-03 20:05:57 +07:00
Faisal Amir
38629afc89 chore: avoid duplicated fn 2025-09-03 18:37:49 +07:00
Faisal Amir
5f9f766965 fix: search hgf repo and downloaded filter 2025-09-03 18:31:07 +07:00
Faisal Amir
cb4641e4ad feat: allow viewing API key when local server status is running 2025-09-03 17:55:52 +07:00
Faisal Amir
328d680f73 chore: fix status model id 2025-08-28 13:15:58 +07:00
Faisal Amir
75d189900c fix: mcp cleanup dropdown tool available and sort list 2025-08-27 18:08:23 +07:00
Faisal Amir
e376314315 chore: update filter hub while searching 2025-08-25 16:51:30 +07:00
Faisal Amir
e73a710c06 fix/update-ui-info 2025-08-25 16:45:59 +07:00
Akarshan Biswas
510c70bdf7
feat: Add model compatibility check and memory estimation (#6243)
* feat: Add model compatibility check and memory estimation

This commit introduces a new feature to check if a given model is supported based on available device memory.

The change includes:
- A new `estimateKVCache` method that calculates the required memory for the model's KV cache. It uses GGUF metadata such as `block_count`, `head_count`, `key_length`, and `value_length` to perform the calculation.
- An `isModelSupported` method that combines the model file size and the estimated KV cache size to determine the total memory required. It then checks if any available device has sufficient free memory to load the model.
- An updated error message for the `version_backend` check to be more user-friendly, suggesting a stable internet connection as a potential solution for backend setup failures.

This functionality helps prevent the application from attempting to load models that would exceed the device's memory capacity, leading to more stable and predictable behavior.

fixes: #5505

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Extend this to available system RAM if GGML device is not available

* fix: Improve model metadata and memory checks

This commit refactors the logic for checking if a model is supported by a system's available memory.

**Key changes:**
- **Remote model support**: The `read_gguf_metadata` function can now fetch metadata from a remote URL by reading the file in chunks.
- **Improved KV cache size calculation**: The KV cache size is now estimated more accurately by using `attention.key_length` and `attention.value_length` from the GGUF metadata, with a fallback to `embedding_length`.
- **Granular memory check statuses**: The `isModelSupported` function now returns a more specific status (`'RED'`, `'YELLOW'`, `'GREEN'`) to indicate whether the model weights or the KV cache are too large for the available memory.
- **Consolidated logic**: The logic for checking local and remote models has been consolidated into a single `isModelSupported` function, improving code clarity and maintainability.

These changes provide more robust and informative model compatibility checks, especially for models hosted on remote servers.

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Make ctx_size optional and use sum free memory across ggml devices

* feat: hub and dropdown model selection handle model compatibility

* feat: update badge model info color

* chore: enable detail page to get compatibility model

* chore: update copy

* chore: update shrink indicator UI

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-08-21 16:13:50 +05:30
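Putting the pieces of this PR together, the KV-cache estimate plus the RED/YELLOW/GREEN check might look roughly like this TypeScript sketch. The metadata shape is simplified and the function signatures are assumptions; the actual code in extensions/llamacpp-extension/src/index.ts also handles fallbacks such as embedding_length and chunked reads of remote GGUF files.

```typescript
// Simplified GGUF metadata shape; field names follow the PR description above.
interface GgufMeta {
  blockCount: number   // block_count
  headCountKV: number  // attention.head_count_kv (or head_count as a fallback)
  keyLength: number    // attention.key_length
  valueLength: number  // attention.value_length
}

type Compatibility = 'GREEN' | 'YELLOW' | 'RED'

// Estimated KV-cache bytes: per token, each layer stores K and V entries in fp16.
function estimateKVCache(meta: GgufMeta, ctxSize: number): number {
  const bytesPerToken = meta.headCountKV * (meta.keyLength + meta.valueLength) * 2 * meta.blockCount
  return bytesPerToken * ctxSize
}

// GREEN: weights + KV cache fit in free VRAM; YELLOW: they fit only in combined
// free memory (so CPU offload is needed); RED: the model cannot fit at all.
function isModelSupported(modelBytes: number, kvBytes: number, freeVram: number, freeTotal: number): Compatibility {
  const required = modelBytes + kvBytes
  if (required <= freeVram) return 'GREEN'
  if (required <= freeTotal) return 'YELLOW'
  return 'RED'
}
```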
Louis
5c4deff215
Merge pull request #6260 from menloresearch/fix/bring-back-manual-model-capability-edit
fix: bring back manual model capability edit modal
2025-08-21 16:31:17 +07:00
Dinh Long Nguyen
32a2ca95b6
feat: gguf file size + hash validation (#5266) (#6259)
* feat: gguf file size + hash validation

* fix tests fe

* update cargo tests

* handle async download for both models and mmproj

* move progress tracker to models

* handle file download cancelled

* add cancellation mid hash run
2025-08-21 16:17:58 +07:00