116 Commits

Louis
ce9c8fe1cf
Update web-app/src/lib/completion.ts
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-08 20:59:36 +07:00
Louis
38bdfd51ba
Update web-app/src/lib/completion.ts
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-08 10:10:42 +07:00
Louis
cf62729cd0
Merge branch 'dev' into feat/encrypt-api-key 2025-10-08 10:09:10 +07:00
Louis
583a9bb7af
Update web-app/src/lib/completion.ts
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-08 10:08:28 +07:00
Akarshan Biswas
706dad2687
feat: Add support for llamacpp MoE offloading setting (#6748)
* feat: Add support for llamacpp MoE offloading setting

Introduces the n_cpu_moe configuration setting for the llamacpp provider. This allows users to specify the number of Mixture of Experts (MoE) layers whose weights should be offloaded to the CPU via the --n-cpu-moe flag in llama.cpp.

This is useful for running large MoE models by balancing resource usage, for example, by keeping attention on the GPU and offloading expert FFNs to the CPU (see the sketch after this entry).

The changes include:

 - Updating the llamacpp-extension to accept and pass the --n-cpu-moe argument.

 - Adding the input field to the Model Settings UI (ModelSetting.tsx).

 - Including model setting migration logic and bumping the store version to 4.

* remove unused import

* feat: add cpu-moe boolean flag

* chore: remove unused migration cont_batching

* chore: fix migration delete old key and add new one

* chore: fix migration

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-10-07 19:37:58 +05:30
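
Below is a minimal TypeScript sketch of how such a setting could be translated into llama.cpp server arguments. The `MoeSettings` shape and `moeArgs` helper are illustrative; only the `--cpu-moe` and `--n-cpu-moe` flags come from the commit above.

```typescript
// Illustrative sketch (not the actual extension code): translating the
// MoE offload settings into llama.cpp server arguments.
interface MoeSettings {
  cpu_moe?: boolean  // offload every MoE layer's experts to CPU
  n_cpu_moe?: number // offload the first N MoE layers' experts to CPU
}

function moeArgs(settings: MoeSettings): string[] {
  const args: string[] = []
  if (settings.cpu_moe) {
    args.push('--cpu-moe')
  } else if (settings.n_cpu_moe && settings.n_cpu_moe > 0) {
    // Attention stays on the GPU; the expert FFN weights of the first
    // N layers are kept in system RAM.
    args.push('--n-cpu-moe', String(settings.n_cpu_moe))
  }
  return args
}

// moeArgs({ n_cpu_moe: 12 }) -> ['--n-cpu-moe', '12']
```
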
Louis
fe2c2a8687 Merge branch 'dev' into release/v0.7.0
# Conflicts:
#	web-app/src/containers/DropdownModelProvider.tsx
#	web-app/src/containers/ThreadList.tsx
#	web-app/src/containers/__tests__/DropdownModelProvider.displayName.test.tsx
#	web-app/src/hooks/__tests__/useModelProvider.test.ts
#	web-app/src/hooks/useChat.ts
#	web-app/src/lib/utils.ts
2025-10-06 20:42:05 +07:00
Vanalite
41a93690a1 Merge remote-tracking branch 'origin/dev' into mobile/persistence_store 2025-10-04 12:28:11 +07:00
Vanalite
24ff36d424 fix: Extract model capabilities correctly for various providers on various platforms 2025-10-02 21:48:07 +07:00
Vanalite
9720ad368e feat: use sql for mobile storage 2025-10-02 18:09:33 +07:00
Louis
c63604ac55
fix: remove fallback key 2025-10-02 16:02:37 +07:00
Dinh Long Nguyen
df145d63a9
fix gg tag (#6702) 2025-10-02 00:47:38 +07:00
Louis
e0ab77cb24
fix: token count error (#6680) 2025-10-01 14:07:32 +07:00
Vanalite
262a1a9544 Merge remote-tracking branch 'origin/dev' into mobile/dev
# Conflicts:
#	src-tauri/src/core/setup.rs
#	src-tauri/src/lib.rs
#	web-app/src/hooks/useChat.ts
2025-10-01 09:52:01 +07:00
Nghia Doan
c5a5968bf8
Merge pull request #6643 from menloresearch/fix/model-name-change
fix: Apply model name change correctly
2025-09-30 22:41:05 +07:00
Dinh Long Nguyen
e6bc1182a6
Merge branch 'dev' into feat/sync-release=to-dev 2025-09-30 22:04:27 +07:00
Dinh Long Nguyen
82d29e7a7d
add eof new line missing (#6673) 2025-09-30 21:48:38 +07:00
Faisal Amir
334b160012
Merge pull request #6656 from menloresearch/fix/thinking-block
fix: remove thinking tag on projects list message history
2025-09-30 13:48:34 +07:00
Nguyen Ngoc Minh
d315522c5a
Merge pull request #6618 from github-roushan/show-supported-files
Show supported files
2025-09-30 12:19:22 +07:00
Vanalite
c53d8c09c4 Merge remote-tracking branch 'origin/dev' into mobile/dev
# Conflicts:
#	web-app/src/containers/HeaderPage.tsx
#	web-app/src/lib/platform/const.ts
#	web-app/src/routes/index.tsx
2025-09-30 11:21:36 +07:00
Faisal Amir
1c9890649d fix: remove thinking tag on projects list message history 2025-09-30 11:08:31 +07:00
Dinh Long Nguyen
2101242530
Feat: web temporary chat (#6650)
* temporary chat stage 1

* temporary page in root

* temporary chat

* handle redirection properly

* temporary chat header

* Update extensions-web/src/conversational-web/extension.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* update routetree

* better error handling

* fix stretched assistant on desktop

* update yarn link to workspace for better link consistency

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-30 10:42:21 +07:00
Vanalite
5e57caee43 Merge remote-tracking branch 'origin/dev' into mobile/dev
# Conflicts:
#	extensions/yarn.lock
#	package.json
#	src-tauri/plugins/tauri-plugin-hardware/src/vendor/vulkan.rs
#	src-tauri/src/lib.rs
#	yarn.lock
2025-09-29 22:22:00 +07:00
Vanalite
987063fede feat: Add tests for the model displayName modification 2025-09-29 17:59:15 +07:00
Vanalite
03ee9c14a3 fix: Apply model name change correctly
# Conflicts:
#	web-app/src/lib/utils.ts
2025-09-29 16:39:32 +07:00
Louis
5363022634
feat: encrypt API Key
# Conflicts:
#	web-app/package.json
2025-09-29 10:04:20 +07:00
Roushan Singh
c6be66e595 refactor(utils): add helper to remove extensions from file paths 2025-09-26 15:02:23 +05:30
Vanalite
a0aa0074f4 Merge remote-tracking branch 'origin/dev' into mobile/dev
# Conflicts:
#	web-app/src/routeTree.gen.ts
#	web-app/src/routes/index.tsx
2025-09-26 11:09:50 +07:00
Akarshan Biswas
11b3a60675
fix: refactor, fix and move gguf support utilities to backend (#6584)
* feat: move estimateKVCacheSize to BE

* feat: Migrate model planning to backend

This commit migrates the model load planning logic from the frontend to the Tauri backend. This refactors the `planModelLoad` and `isModelSupported` methods into the `tauri-plugin-llamacpp` plugin, making them directly callable from the Rust core.

The model planning now incorporates a more robust and accurate memory estimation, considering both VRAM and system RAM, and introduces a `batch_size` parameter to the model plan.

**Key changes:**

- **Moved `planModelLoad` to `tauri-plugin-llamacpp`:** The core logic for determining GPU layers, context length, and memory offloading is now in Rust for better performance and accuracy.
- **Moved `isModelSupported` to `tauri-plugin-llamacpp`:** The model support check is also now handled by the backend.
- **Removed `getChatClient` from `AIEngine`:** This optional method was not implemented and has been removed from the abstract class.
- **Improved KV Cache estimation:** The `estimate_kv_cache_internal` function in Rust now accounts for `attention.key_length` and `attention.value_length` if available, and considers sliding window attention for more precise estimates (sketched below).
- **Introduced `batch_size` in ModelPlan:** The model plan now includes a `batch_size` property, which will be automatically adjusted based on the determined `ModelMode` (e.g., lower for CPU/Hybrid modes).
- **Updated `llamacpp-extension`:** The frontend extension now calls the new Tauri commands for model planning and support checks.
- **Removed `batch_size` from `llamacpp-extension/settings.json`:** The batch size is now dynamically determined by the planning logic and will be set as a model setting directly.
- **Updated `ModelSetting` and `useModelProvider` hooks:** These now handle the new `batch_size` property in model settings.
- **Added new Tauri commands and permissions:** `get_model_size`, `is_model_supported`, and `plan_model_load` are new commands with corresponding permissions.
- **Consolidated `ModelSupportStatus` and `KVCacheEstimate`:** These types are now defined in `src/tauri/plugins/tauri-plugin-llamacpp/src/gguf/types.rs`.

This refactoring centralizes critical model resource management logic, improving consistency and maintainability, and lays the groundwork for more sophisticated model loading strategies.
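
A TypeScript rendering of the estimate described in the "Improved KV Cache estimation" bullet above (the shipped code is Rust in `tauri-plugin-llamacpp`); the fp16 element size and the fallbacks for missing metadata are assumptions, and sliding-window attention is omitted for brevity.

```typescript
// Sketch only: KV-cache size from GGUF metadata, per the bullet above.
interface GgufAttention {
  head_count: number
  head_count_kv?: number // GQA: number of KV heads (assumed fallback: head_count)
  key_length?: number    // per-head K dimension, when present in metadata
  value_length?: number  // per-head V dimension, when present in metadata
}

function estimateKvCacheBytes(
  nLayer: number,
  embeddingLength: number,
  attn: GgufAttention,
  contextLength: number
): number {
  const headDim = embeddingLength / attn.head_count
  const keyLen = attn.key_length ?? headDim   // assumed fallback
  const valLen = attn.value_length ?? headDim // assumed fallback
  const nHeadKV = attn.head_count_kv ?? attn.head_count
  // K and V stored per layer per token, 2 bytes per fp16 element (assumed).
  const bytesPerToken = nHeadKV * (keyLen + valLen) * 2 * nLayer
  return bytesPerToken * contextLength
}
```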

* feat: refine model planner to handle more memory scenarios

This commit introduces several improvements to the `plan_model_load` function, enhancing its ability to determine a suitable model loading strategy based on system memory constraints. Specifically, it includes:

-   **VRAM calculation improvements:**  Corrects the calculation of total VRAM by iterating over GPUs and multiplying by 1024*1024, improving accuracy.
-   **Hybrid plan optimization:**  Implements a more robust hybrid plan strategy, iterating through GPU layer configurations to find the highest possible GPU usage while remaining within VRAM limits.
-   **Minimum context length enforcement:** Enforces a minimum context length for the model, ensuring that the model can be loaded and used effectively.
-   **Fallback to CPU mode:** If a hybrid plan isn't feasible, it now correctly falls back to a CPU-only mode.
-   **Improved logging:** Enhanced logging to provide more detailed information about the memory planning process, including VRAM, RAM, and GPU layers.
-   **Batch size adjustment:** Updated batch size based on the selected mode, ensuring efficient utilization of available resources.
-   **Error handling and edge cases:**  Improved error handling and edge case management to prevent unexpected failures.
-   **Constants:** Added constants for easier maintenance and understanding.
-   **Power-of-2 adjustment:** Added a power-of-2 adjustment for max context length to ensure correct sizing for the LLM (sketched below).

These changes improve the reliability and robustness of the model planning process, allowing it to handle a wider range of hardware configurations and model sizes.
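
The power-of-2 adjustment and minimum-context enforcement might look like the following sketch; `MIN_CONTEXT_LENGTH` and the clamping order are assumptions, and the actual constants live in the Rust plugin.

```typescript
// Sketch: round the context length that fits in memory down to a power of
// two, then clamp to the enforced minimum and the model's trained context.
const MIN_CONTEXT_LENGTH = 2048 // assumed value of the enforced minimum

function adjustContextLength(maxFitCtx: number, trainedCtx: number): number {
  let ctx = 1
  while (ctx * 2 <= maxFitCtx) ctx *= 2 // largest power of two <= maxFitCtx
  // The planner enforces a floor even if the fitted value is smaller.
  return Math.max(MIN_CONTEXT_LENGTH, Math.min(ctx, trainedCtx))
}
```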

* Add log for raw GPU info from tauri-plugin-hardware

* chore: update linux runner for tauri build

* feat: Improve GPU memory calculation for unified memory

This commit improves the logic for calculating usable VRAM, particularly for systems with **unified memory** like Apple Silicon. Previously, the application would report 0 total VRAM if no dedicated GPUs were found, leading to incorrect calculations and failed model loads.

This change modifies the VRAM calculation to fall back to the total system RAM if no discrete GPUs are detected. This is a common and correct approach for unified memory architectures, where the CPU and GPU share the same memory pool.

Additionally, this commit refactors the logic for calculating usable VRAM and RAM to prevent potential underflow by checking if the total memory is greater than the reserved bytes before subtracting. This ensures the calculation remains safe and correct.
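
In TypeScript, the fallback and the underflow-safe subtraction described above could be sketched as follows (the shipped logic is Rust; names here are illustrative):

```typescript
// Sketch: usable VRAM with a unified-memory fallback and a saturating
// subtraction so the reservation can never underflow.
function usableVramBytes(
  discreteVramBytes: number,  // 0 on unified-memory systems with no dGPU
  totalSystemRamBytes: number,
  reservedBytes: number
): number {
  // Unified memory (e.g. Apple Silicon): CPU and GPU share one pool, so
  // fall back to total system RAM when no discrete GPU is reported.
  const total = discreteVramBytes > 0 ? discreteVramBytes : totalSystemRamBytes
  return total > reservedBytes ? total - reservedBytes : 0
}
```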

* chore: fix update migration version

* fix: enable unified memory support on model support indicator

* Use total_system_memory in bytes

---------

Co-authored-by: Minh141120 <minh.itptit@gmail.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-09-25 12:17:57 +05:30
Vanalite
b2c5063e0b Merge remote-tracking branch 'origin/dev' into mobile/dev
# Conflicts:
#	src-tauri/src/core/server/proxy.rs
#	src-tauri/tauri.conf.json
#	web-app/src/containers/LeftPanel.tsx
#	web-app/src/containers/__tests__/ChatInput.test.tsx
#	web-app/src/lib/platform/const.ts
#	yarn.lock
2025-09-24 16:01:33 +07:00
Louis
8a51cc1656
feat: add azure as first class provider (#6555)
* feat: add azure as first class provider

* fix: deployment url
2025-09-23 16:09:06 +07:00
Dinh Long Nguyen
df61546942
feat: web remote conversation (#6554)
* feat: implement conversation endpoint

* use conversation aware endpoint

* fetch message correctly

* preserve first message

* fix logout

* fix broadcast issue locally + auth not refreshing profile on other tabs + clean up and sync messages

* add is dev tag
2025-09-23 15:09:45 +07:00
Louis
568ee857d5
fix: custom fetch for all providers (#6538)
* fix: custom fetch for all providers

* fix: run in development should use built-in fetch
2025-09-23 09:55:36 +07:00
Faisal Amir
5c4ce729f7 chore: update platform check to use config instead of the navigator user agent 2025-09-22 20:32:09 +07:00
Louis
0d2c99a413
fix: prevent consecutive messages with same role (#6544)
* fix: prevent consecutive messages with same role (see the sketch after this entry)

* fix: tests

* fix: first message should not be assistant

* fix: tests
2025-09-22 19:27:45 +07:00
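
One plausible reading of this fix, as a hedged TypeScript sketch: merge (or drop) offending messages so roles alternate and the conversation never opens with an assistant turn. The merge strategy and types here are assumptions, not necessarily what the PR implements.

```typescript
// Hypothetical normalization: no two consecutive messages share a role,
// and the first message is never from the assistant.
type Role = 'system' | 'user' | 'assistant'
interface ChatMessage { role: Role; content: string }

function normalizeMessages(messages: ChatMessage[]): ChatMessage[] {
  const result: ChatMessage[] = []
  for (const msg of messages) {
    // Skip a leading assistant message: some providers reject
    // conversations that start with the assistant role.
    if (result.length === 0 && msg.role === 'assistant') continue
    const prev = result[result.length - 1]
    if (prev && prev.role === msg.role && msg.role !== 'system') {
      // Merge consecutive same-role messages instead of sending both.
      prev.content += '\n' + msg.content
    } else {
      result.push(msg)
    }
  }
  return result
}
```
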
Faisal Amir
f639ec70d4 enhancement: fit mobile layout 2025-09-22 15:06:11 +07:00
Dinh Long Nguyen
359dd8f41e
Merge pull request #6514 from menloresearch/feat/web-gtag
feat: Add GA Measurement and change keyboard bindings on web
2025-09-18 20:45:41 +07:00
Dinh Long Nguyen
0f85fce6ef
feat: add auth + google auth provider for web (#6505)
* handle google auth

* fix lint

* fix auto login button type

* update i18n language + UserProfileMenu position

* minor api rename for consistency
2025-09-18 11:11:14 +07:00
Dinh Long Nguyen
491012fa87
remove assistant from web (#6468) 2025-09-15 23:53:59 +07:00
Dinh Long Nguyen
311a451005
Always allow MCP for web (#6462)
* mcp and extension setting disabled + always allow mcp tools on web

* fix tests
2025-09-15 20:13:46 +07:00
Dinh Long Nguyen
0771b998a5
Fix: Web Services Improvement
2025-09-15 09:08:30 +07:00
Dinh Long Nguyen
b5b6e1dc19
add mcp for web (#6411)
* add mcp for web

* update /jan/v1 endpoint to /v1

* update mise and makefile

* update yarn lock

* use mcp oauth properly
2025-09-12 12:14:10 +07:00
Dinh Long Nguyen
db52057030
fix ollama error (#6418) 2025-09-11 18:38:06 +07:00
Akarshan Biswas
7a174e621a
feat: Smart model management (#6390)
* feat: Smart model management

* **New UI option** – `memory_util` added to `settings.json` with a dropdown (high / medium / low) to let users control how aggressively the engine uses system memory.
* **Configuration updates** – `LlamacppConfig` now includes `memory_util`; the extension class stores it in a new `memoryMode` property and handles updates through `updateConfig`.
* **System memory handling**
  * Introduced `SystemMemory` interface and `getTotalSystemMemory()` to report combined VRAM + RAM.
  * Added helper methods `getKVCachePerToken`, `getLayerSize`, and a new `ModelPlan` type.
* **Smart model‑load planner** – `planModelLoad()` computes:
  * Number of GPU layers that can fit in usable VRAM.
  * Maximum context length based on KV‑cache size and the selected memory utilization mode (high/medium/low).
  * Whether KV‑cache must be off‑loaded to CPU and the overall loading mode (GPU, Hybrid, CPU, Unsupported).
  * Detailed logging of the planning decision.
* **Improved support check** – `isModelSupported()` now:
  * Uses the combined VRAM/RAM totals from `getTotalSystemMemory()`.
  * Applies an 80% usable‑memory heuristic.
  * Returns **GREEN** only when both weights and KV‑cache fit in VRAM, **YELLOW** when they fit only in total memory or require CPU off‑load, and **RED** when the model cannot fit at all (see the sketch after this list).
* **Cleanup** – Removed unused `GgufMetadata` import; updated imports and type definitions accordingly.
* **Documentation/comments** – Added explanatory JSDoc comments for the new methods and clarified the return semantics of `isModelSupported`.
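
The traffic-light check from the "Improved support check" bullet, as a sketch; the 80% heuristic and the three statuses come from the description above, while the input shape is assumed.

```typescript
// Sketch: model support status from memory requirements.
type SupportStatus = 'GREEN' | 'YELLOW' | 'RED'

function isModelSupported(
  weightsBytes: number,
  kvCacheBytes: number,
  vramBytes: number,
  totalMemoryBytes: number // combined VRAM + RAM
): SupportStatus {
  const usableVram = vramBytes * 0.8      // 80% usable-memory heuristic
  const usableTotal = totalMemoryBytes * 0.8
  const required = weightsBytes + kvCacheBytes
  if (required <= usableVram) return 'GREEN'   // everything fits in VRAM
  if (required <= usableTotal) return 'YELLOW' // fits only with CPU offload
  return 'RED'                                 // cannot fit at all
}
```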

* chore: migrate no_kv_offload from llamacpp setting to model setting

* chore: add UI auto optimize model setting

* feat: improve model loading planner with mmproj support and smarter memory budgeting

* Extend `ModelPlan` with optional `noOffloadMmproj` flag to indicate when a multimodal projector can stay in VRAM.
* Add `mmprojPath` parameter to `planModelLoad` and calculate its size, attempting to keep it on GPU when possible.
* Refactor system memory detection:
  * Use `used_memory` (actual free RAM) instead of total RAM for budgeting.
  * Introduced `usableRAM` placeholder for future use.
* Rewrite KV‑cache size calculation:
  * Properly handle GQA models via `attention.head_count_kv`.
  * Compute bytes per token as `nHeadKV * headDim * 2 * 2 * nLayer`.
* Replace the old 70% VRAM heuristic with a more flexible budget (sketched after this list):
  * Reserve a fixed VRAM amount and apply an overhead factor.
  * Derive usable system RAM from total memory minus VRAM.
* Implement a robust allocation algorithm:
  * Prioritize placing the mmproj in VRAM.
  * Search for the best balance of GPU layers and context length.
  * Fallback strategies for hybrid and pure‑CPU modes with detailed safety checks.
* Add extensive validation of model size, KV‑cache size, layer size, and memory mode.
* Improve logging throughout the planning process for easier debugging.
* Adjust final plan return shape to include the new `noOffloadMmproj` field.
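
The reserve-plus-overhead budget replacing the 70% heuristic might be sketched as follows; `VRAM_RESERVE_BYTES` and `OVERHEAD_FACTOR` are assumed values, not the ones shipped.

```typescript
// Sketch: memory budget with a fixed VRAM reservation and overhead factor.
const VRAM_RESERVE_BYTES = 512 * 1024 * 1024 // assumed fixed reservation
const OVERHEAD_FACTOR = 0.9                  // assumed safety factor

function memoryBudget(totalVram: number, totalMemory: number) {
  const usableVram =
    totalVram > VRAM_RESERVE_BYTES
      ? (totalVram - VRAM_RESERVE_BYTES) * OVERHEAD_FACTOR
      : 0
  // Usable system RAM is whatever total memory remains beyond VRAM.
  const usableRam = totalMemory > totalVram ? totalMemory - totalVram : 0
  return { usableVram, usableRam }
}
```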

* remove unused variable

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-09-11 09:48:03 +05:30
Dinh Long Nguyen
d490174544
feat: Web use jan model (#6374)
* call jan api

* fix lint

* ci: add jan server web

* chore: add Dockerfile

* clean up ui ux and support for reasoning fields, make app spa

* add logo

* chore: update tag for preview image

* chore: update k8s service name

* chore: update image tag and image name

* fixed test

---------

Co-authored-by: Minh141120 <minh.itptit@gmail.com>
Co-authored-by: Nguyen Ngoc Minh <91668012+Minh141120@users.noreply.github.com>
2025-09-05 16:18:30 +07:00
Dinh Long Nguyen
a30eb7f968
feat: Jan Web (reusing Jan Desktop UI) (#6298)
* add platform guards

* add service management

* fix types

* move to zustand for servicehub

* update App Updater

* update tauri missing move

* update app updater

* refactor: move PlatformFeatures to separate const file

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* change tauri fetch name

* update implementation

* update extension fetch

* make web version run properly

* disabled unused web settings

* fix all tests

* fix lint

* fix tests

* add mock for extension

* fix build

* update make and mise

* fix tsconfig for web-extensions

* fix loader type

* cleanup

* fix test

* update error handling + mcp should be working

* Update mcp init

* use separate is_web_app build property

* Remove fixed model catalog url

* fix additional tests

* fix download issue (event emitter not implemented correctly)

* Update Title html

* fix app logs

* update root tsx render timing

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-05 01:47:46 +07:00
Louis
e6587844d0
Merge branch 'dev' into current-date-instruction 2025-08-21 11:41:30 +07:00
Faisal Amir
5481ee9e35
Merge pull request #6134 from menloresearch/feat/attachment-ui
feat: attachment UI
2025-08-20 10:04:32 +07:00
Louis
91f05b8f32
feat: add tool call cancellation 2025-08-19 23:27:12 +07:00
Faisal Amir
f70449fd98 chore: remove pdf attachment 2025-08-19 19:51:02 +07:00
Faisal Amir
cef3e122ff chore: send attachment file when sending message 2025-08-19 19:51:01 +07:00