263 Commits

Author SHA1 Message Date
Akarshan
8601a49ff6
feat: enhance thinking UI, support structured steps, and record total thinking time
- **ThinkingBlock**
  - Added `ThoughtStep` type and UI handling for step kinds: `thought`, `tool_call`, `tool_output`, and `done`.
  - Integrated `Check` icon for completed steps and formatted duration (seconds) display.
  - Implemented streaming paragraph extraction, fade‑in/out animation, and improved loading state handling.
  - Updated header to show dynamic titles (thinking/thought + duration) and disabled expand/collapse while loading.
  - Utilized `cn` utility for conditional class names and added relevant imports.

- **ThreadContent**
  - Defined `ToolCall` and `ThoughtStep` types for type safety.
  - Constructed `allSteps` via `useMemo`, extracting thought paragraphs, tool calls/outputs, and a final `done` step with total thinking time.
  - Passed `steps`, `loading`, and `duration` props to `ThinkingBlock`.
  - Introduced `hasReasoning` flag to conditionally render the reasoning block and avoid duplicate tool call rendering.
  - Adjusted rendering logic to hide empty reasoning and ensure tool call blocks only appear when no reasoning is present.

- **useChat**
  - Refactored `getCurrentThread` for clearer async flow while preserving temporary‑chat behavior.
  - Captured `startTime` at message creation and computed `totalThinkingTime` on completion.
  - Included `totalThinkingTime` in message metadata when appropriate.
  - Minor cleanup: improved error handling for image ingestion and formatting adjustments.

Overall, these changes provide a richer, step‑by‑step thinking UI, better state handling during streaming, and expose total thinking duration for downstream components (a type sketch follows this entry).
2025-10-29 20:30:21 +05:30
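
Below is a minimal TypeScript sketch of the step model this commit describes. Only `ThoughtStep`, the four step kinds, and the existence of a total thinking time come from the message above; field names such as `content` and `durationMs`, and the `buildSteps` helper, are illustrative assumptions rather than the committed code.

```typescript
// Hypothetical sketch of the structured thinking steps described above.
// Only `ThoughtStep` and the four `kind` values come from the commit;
// `content` and `durationMs` are illustrative assumptions.
type ThoughtStepKind = 'thought' | 'tool_call' | 'tool_output' | 'done'

interface ThoughtStep {
  kind: ThoughtStepKind
  content: string     // paragraph text, tool name/args, or tool result
  durationMs?: number // set on the final `done` step (total thinking time)
}

// Building the step list the way the commit describes: thought paragraphs,
// interleaved tool calls/outputs, then a closing `done` step.
function buildSteps(
  paragraphs: string[],
  toolEvents: ThoughtStep[],
  totalThinkingTime?: number
): ThoughtStep[] {
  const steps: ThoughtStep[] = [
    ...paragraphs.map((p) => ({ kind: 'thought' as const, content: p })),
    ...toolEvents,
  ]
  if (totalThinkingTime !== undefined) {
    steps.push({ kind: 'done', content: '', durationMs: totalThinkingTime })
  }
  return steps
}
```

In the actual `ThreadContent` the equivalent list (`allSteps`) is memoized with `useMemo`, so it is only rebuilt when the streamed message content changes.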
Vanalite
22be93807d Merge remote-tracking branch 'origin/dev' into feat/proactive_mode 2025-10-28 17:56:47 +07:00
Minh141120
15c426aefc chore: update org name 2025-10-28 17:26:27 +07:00
Vanalite
a14872666a feat: Add tests for proactive mode 2025-10-28 12:19:00 +07:00
Vanalite
e9f469b623 feat: Proactively take screenshot and snapshot for every browser tool call 2025-10-28 11:48:55 +07:00
Dinh Long Nguyen
a2fbce698f fix thread scrolling 2025-10-09 04:41:18 +07:00
Dinh Long Nguyen
fc784620e0 fix tests 2025-10-09 04:28:08 +07:00
Dinh Long Nguyen
340042682a ui ux enhancement 2025-10-09 03:48:51 +07:00
Dinh Long Nguyen
ff93dc3c5c Merge branch 'dev' into feat/file-attachment 2025-10-08 16:34:45 +07:00
Dinh Long Nguyen
510c4a5188 working attachments 2025-10-08 16:08:40 +07:00
Akarshan Biswas
706dad2687
feat: Add support for llamacpp MoE offloading setting (#6748)
* feat: Add support for llamacpp MoE offloading setting

Introduces the n_cpu_moe configuration setting for the llamacpp provider. This allows users to specify the number of Mixture of Experts (MoE) layers whose weights should be offloaded to the CPU via the --n-cpu-moe flag in llama.cpp.

This is useful for running large MoE models by balancing resource usage, for example by keeping attention on the GPU and offloading expert FFNs to the CPU (a sketch of the argument plumbing follows this entry).

The changes include:

 - Updating the llamacpp-extension to accept and pass the --n-cpu-moe argument.

 - Adding the input field to the Model Settings UI (ModelSetting.tsx).

 - Including model setting migration logic and bumping the store version to 4.

* remove unused import

* feat: add cpu-moe boolean flag

* chore: remove unused migration cont_batching

* chore: fix migration delete old key and add new one

* chore: fix migration

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-10-07 19:37:58 +05:30
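
For illustration, here is a hedged TypeScript sketch of the argument plumbing this commit describes. The `--n-cpu-moe` and `--cpu-moe` flags come from the commit itself; the `LlamacppConfig` shape and the `buildMoeArgs` helper are hypothetical, not the extension's actual code.

```typescript
// Hypothetical helper translating the settings described above into
// llama.cpp server arguments. The flag names come from the commit;
// `LlamacppConfig` and `buildMoeArgs` are illustrative only.
interface LlamacppConfig {
  cpu_moe?: boolean   // offload all MoE expert weights to CPU
  n_cpu_moe?: number  // offload the first N layers' expert weights to CPU
}

function buildMoeArgs(cfg: LlamacppConfig): string[] {
  const args: string[] = []
  if (cfg.cpu_moe) {
    args.push('--cpu-moe')
  } else if (cfg.n_cpu_moe && cfg.n_cpu_moe > 0) {
    // Keep attention on the GPU; push expert FFN weights for the first
    // n_cpu_moe layers to system RAM.
    args.push('--n-cpu-moe', String(cfg.n_cpu_moe))
  }
  return args
}
```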
Nguyen Ngoc Minh
4828f34fec
Merge pull request #6728 from menloresearch/fix/anthropic-model-load
Fix: Anthropic request to add models
2025-10-07 18:05:33 +07:00
Dinh Long Nguyen
a72c74dbf9 initial layout 2025-10-07 10:36:45 +07:00
Louis
fe2c2a8687 Merge branch 'dev' into release/v0.7.0
# Conflicts:
#	web-app/src/containers/DropdownModelProvider.tsx
#	web-app/src/containers/ThreadList.tsx
#	web-app/src/containers/__tests__/DropdownModelProvider.displayName.test.tsx
#	web-app/src/hooks/__tests__/useModelProvider.test.ts
#	web-app/src/hooks/useChat.ts
#	web-app/src/lib/utils.ts
2025-10-06 20:42:05 +07:00
Faisal Amir
17dced03c0 chore: check support blur on FE 2025-10-06 10:55:17 +07:00
Faisal Amir
be9a6c0254 fix: test use case appearance 2025-10-06 10:55:17 +07:00
Faisal Amir
51e7a08118 fix: new window theme 2025-10-06 10:55:17 +07:00
Faisal Amir
aa0c4b0d1b fix: theme native system and check os support blur 2025-10-06 10:55:17 +07:00
Vanalite
4da0fd1ca3 fix: yarn lint 2025-10-03 10:25:41 +07:00
Dinh Long Nguyen
1b9efee52c feat: improve projects (#6698)
* decouple successfully

* only show movable projects for project items

* handle deleting conversations when a project is removed

* fix leftpanel assignment

* fix lint
2025-10-01 22:53:34 +07:00
Dinh Long Nguyen
d5110de67b
feat: improve projects (#6698)
* decouple successfully

* only show movable projects for project items

* handle deleting conversations when a project is removed

* fix leftpanel assignment

* fix lint
2025-10-01 22:47:38 +07:00
Roushan Kumar Singh
247db95bad
resolve TypeScript and Rust warnings (#6612)
* chore: fix warnings

* fix: add missing scrollContainerRef dependencies to React hooks

* fix: typo

* fix: remove unsupported fetch option and enable AsyncIterable types

- Removed `connectTimeout` from fetch init (not supported in RequestInit)
- Updated tsconfig to target ES2018

* chore: refactor rename

* fix(hooks): update dependency arrays for useThreadScrolling effects

* Add type.d.ts to extend RequestInit with connectionTimeout (see the sketch after this entry)

* remove commented-out unused import
2025-10-01 16:06:41 +07:00
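
A sketch of what the `type.d.ts` mentioned above could look like, assuming it uses standard TypeScript global interface augmentation; the property name follows the commit message, but the exact declaration shipped is not shown here.

```typescript
// type.d.ts — hypothetical global augmentation so a custom fetch
// implementation can accept a connection timeout without tripping the
// TypeScript checker. The property name follows the commit message; the
// exact shape of the real declaration is an assumption.
declare global {
  interface RequestInit {
    /** Milliseconds to wait for the connection to be established. */
    connectionTimeout?: number
  }
}

export {}
```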
Louis
157ecacb20
fix: chat completion usage - token speed (#6675) 2025-10-01 14:19:21 +07:00
Louis
e0ab77cb24
fix: token count error (#6680) 2025-10-01 14:07:32 +07:00
Vanalite
262a1a9544 Merge remote-tracking branch 'origin/dev' into mobile/dev
# Conflicts:
#	src-tauri/src/core/setup.rs
#	src-tauri/src/lib.rs
#	web-app/src/hooks/useChat.ts
2025-10-01 09:52:01 +07:00
Dinh Long Nguyen
4cb3c46f89
feat: disable all web mcp by default (new users) (#6677) 2025-10-01 09:35:09 +07:00
Nghia Doan
c5a5968bf8
Merge pull request #6643 from menloresearch/fix/model-name-change
fix: Apply model name change correctly
2025-09-30 22:41:05 +07:00
Dinh Long Nguyen
191e6f9714 fix lint issue 2025-09-30 22:24:31 +07:00
Dinh Long Nguyen
e6bc1182a6
Merge branch 'dev' into feat/sync-release=to-dev 2025-09-30 22:04:27 +07:00
Dinh Long Nguyen
82d29e7a7d
add eof new line missing (#6673) 2025-09-30 21:48:38 +07:00
Vanalite
a62852f384 fix: Restore default permission on desktop build
Restore desktop capabilities
Restore linter correctness
Restore different capabilities on each platform
2025-09-30 17:01:09 +07:00
Vanalite
4718203960 fix: Fix linter and tests 2025-09-30 15:39:45 +07:00
Vanalite
c53d8c09c4 Merge remote-tracking branch 'origin/dev' into mobile/dev
# Conflicts:
#	web-app/src/containers/HeaderPage.tsx
#	web-app/src/lib/platform/const.ts
#	web-app/src/routes/index.tsx
2025-09-30 11:21:36 +07:00
Dinh Long Nguyen
2101242530
Feat: web temporary chat (#6650)
* temporary chat stage 1

* temporary page in root

* temporary chat

* handle redirection properly

* temporary chat header

* Update extensions-web/src/conversational-web/extension.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* update routetree

* better error handling

* fix stretched assistant on desktop

* update yarn link to workspace for better link consistency

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-30 10:42:21 +07:00
Vanalite
5e57caee43 Merge remote-tracking branch 'origin/dev' into mobile/dev
# Conflicts:
#	extensions/yarn.lock
#	package.json
#	src-tauri/plugins/tauri-plugin-hardware/src/vendor/vulkan.rs
#	src-tauri/src/lib.rs
#	yarn.lock
2025-09-29 22:22:00 +07:00
Nghia Doan
70ac13e536
Merge pull request #6643 from menloresearch/fix/model-name-change
fix: Apply model name change correctly
2025-09-29 22:15:13 +07:00
Vanalite
987063fede feat: Add tests for the model displayName modification 2025-09-29 17:59:15 +07:00
Dinh Long Nguyen
0dbf3b1652 fix: scroll issue padding not re render correctly (#6639) 2025-09-29 17:11:08 +07:00
Dinh Long Nguyen
f2a3861410
fix: scroll issue padding not re render correctly (#6639) 2025-09-29 16:45:40 +07:00
Vanalite
03ee9c14a3 fix: Apply model name change correctly
# Conflicts:
#	web-app/src/lib/utils.ts
2025-09-29 16:39:32 +07:00
Faisal Amir
b4df56e0d8 chore: placement menu leftpanel, and ux create add projects 2025-09-29 13:44:56 +07:00
Dinh Long Nguyen
b422970369
feat: scrolling behaves like chatgpt with padding (#6598)
* scroll like chatgpt with padding

* minor refactor
2025-09-26 15:53:05 +07:00
Louis
55c42ba526
fix: lock all of the dependencies (#6561)
* fix: pin web app dependencies

* fix: pin extension versions

* fix: pin extensions-web dependencies

* fix: pin extensions lockfile

* fix: remove unnecessary semicolon
2025-09-26 13:07:29 +07:00
Vanalite
a0aa0074f4 Merge remote-tracking branch 'origin/dev' into mobile/dev
# Conflicts:
#	web-app/src/routeTree.gen.ts
#	web-app/src/routes/index.tsx
2025-09-26 11:09:50 +07:00
Vanalite
fdf9f40aef fix: Android releasable build 2025-09-26 09:42:00 +07:00
Faisal Amir
d806c4719e
Merge pull request #6586 from menloresearch/feat/thread-project-org
feat: thread organization folder
2025-09-25 20:21:02 +07:00
Akarshan Biswas
11b3a60675
fix: refactor, fix and move gguf support utilities to backend (#6584)
* feat: move estimateKVCacheSize to BE

* feat: Migrate model planning to backend

This commit migrates the model load planning logic from the frontend to the Tauri backend, refactoring the `planModelLoad` and `isModelSupported` methods into the `tauri-plugin-llamacpp` plugin so they are directly callable from the Rust core.

The model planning now incorporates a more robust and accurate memory estimation, considering both VRAM and system RAM, and introduces a `batch_size` parameter to the model plan.

**Key changes:**

- **Moved `planModelLoad` to `tauri-plugin-llamacpp`:** The core logic for determining GPU layers, context length, and memory offloading is now in Rust for better performance and accuracy.
- **Moved `isModelSupported` to `tauri-plugin-llamacpp`:** The model support check is also now handled by the backend.
- **Removed `getChatClient` from `AIEngine`:** This optional method was not implemented and has been removed from the abstract class.
- **Improved KV Cache estimation:** The `estimate_kv_cache_internal` function in Rust now accounts for `attention.key_length` and `attention.value_length` if available, and considers sliding window attention for more precise estimates.
- **Introduced `batch_size` in ModelPlan:** The model plan now includes a `batch_size` property, which will be automatically adjusted based on the determined `ModelMode` (e.g., lower for CPU/Hybrid modes).
- **Updated `llamacpp-extension`:** The frontend extension now calls the new Tauri commands for model planning and support checks (a call-site sketch follows this entry).
- **Removed `batch_size` from `llamacpp-extension/settings.json`:** The batch size is now dynamically determined by the planning logic and will be set as a model setting directly.
- **Updated `ModelSetting` and `useModelProvider` hooks:** These now handle the new `batch_size` property in model settings.
- **Added new Tauri commands and permissions:** `get_model_size`, `is_model_supported`, and `plan_model_load` are new commands with corresponding permissions.
- **Consolidated `ModelSupportStatus` and `KVCacheEstimate`:** These types are now defined in `src/tauri/plugins/tauri-plugin-llamacpp/src/gguf/types.rs`.

This refactoring centralizes critical model resource management logic, improving consistency and maintainability, and lays the groundwork for more sophisticated model loading strategies.

* feat: refine model planner to handle more memory scenarios

This commit introduces several improvements to the `plan_model_load` function, enhancing its ability to determine a suitable model loading strategy based on system memory constraints. Specifically, it includes:

-   **VRAM calculation improvements:**  Corrects the calculation of total VRAM by iterating over GPUs and multiplying by 1024*1024, improving accuracy.
-   **Hybrid plan optimization:**  Implements a more robust hybrid plan strategy, iterating through GPU layer configurations to find the highest possible GPU usage while remaining within VRAM limits.
-   **Minimum context length enforcement:** Enforces a minimum context length for the model, ensuring that the model can be loaded and used effectively.
-   **Fallback to CPU mode:** If a hybrid plan isn't feasible, it now correctly falls back to a CPU-only mode.
-   **Improved logging:** Enhanced logging to provide more detailed information about the memory planning process, including VRAM, RAM, and GPU layers.
-   **Batch size adjustment:** Updated batch size based on the selected mode, ensuring efficient utilization of available resources.
-   **Error handling and edge cases:**  Improved error handling and edge case management to prevent unexpected failures.
-   **Constants:** Added constants for easier maintenance and understanding.
-   **Power-of-2 adjustment:** Adjusted the max context length to a power of two to ensure correct sizing for the LLM.

These changes improve the reliability and robustness of the model planning process, allowing it to handle a wider range of hardware configurations and model sizes.

* Add log for raw GPU info from tauri-plugin-hardware

* chore: update linux runner for tauri build

* feat: Improve GPU memory calculation for unified memory

This commit improves the logic for calculating usable VRAM, particularly for systems with **unified memory** like Apple Silicon. Previously, the application would report 0 total VRAM if no dedicated GPUs were found, leading to incorrect calculations and failed model loads.

This change modifies the VRAM calculation to fall back to the total system RAM if no discrete GPUs are detected. This is a common and correct approach for unified memory architectures, where the CPU and GPU share the same memory pool.

Additionally, this commit refactors the logic for calculating usable VRAM and RAM to prevent potential underflow by checking if the total memory is greater than the reserved bytes before subtracting. This ensures the calculation remains safe and correct.

* chore: fix update migration version

* fix: enable unified memory support on model support indicator

* Use total_system_memory in bytes

---------

Co-authored-by: Minh141120 <minh.itptit@gmail.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-09-25 12:17:57 +05:30
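
Since the commit states the frontend extension now calls the new Tauri commands, here is a hedged TypeScript sketch of such a call site. The command names `plan_model_load` and `is_model_supported` come from the commit; the `plugin:llamacpp|` routing prefix, the argument keys, and the `ModelPlan` fields are assumptions modeled on the description above.

```typescript
import { invoke } from '@tauri-apps/api/core'

// Assumed result shape, modeled on the ModelPlan fields the commit mentions
// (GPU layers, context length, batch size, and the selected mode).
interface ModelPlan {
  gpuLayers: number
  maxContextLength: number
  batchSize: number
  mode: 'gpu' | 'hybrid' | 'cpu' | 'unsupported'
}

// `plugin:llamacpp|` is the usual Tauri v2 plugin command prefix and is an
// assumption here, as are the camelCase argument keys.
async function planModelLoad(modelPath: string): Promise<ModelPlan> {
  return invoke<ModelPlan>('plugin:llamacpp|plan_model_load', { modelPath })
}

async function isModelSupported(modelPath: string): Promise<boolean> {
  return invoke<boolean>('plugin:llamacpp|is_model_supported', { modelPath })
}
```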
Faisal Amir
e7a1a06395 feat: thread organization folder 2025-09-25 10:12:08 +07:00
Vanalite
b2c5063e0b Merge remote-tracking branch 'origin/dev' into mobile/dev
# Conflicts:
#	src-tauri/src/core/server/proxy.rs
#	src-tauri/tauri.conf.json
#	web-app/src/containers/LeftPanel.tsx
#	web-app/src/containers/__tests__/ChatInput.test.tsx
#	web-app/src/lib/platform/const.ts
#	yarn.lock
2025-09-24 16:01:33 +07:00
Dinh Long Nguyen
b322c7649b
Merge pull request #6563 from menloresearch/feat/web-minor-ui-tweak-login
feat: tweak login UI
2025-09-23 21:09:58 +07:00