63 Commits

Author SHA1 Message Date
4e92884d51 feat: add Claude-style artifacts for persistent workspace documents
Implement a comprehensive artifacts system that allows users to create, edit,
and manage persistent documents alongside conversations, inspired by Claude's
artifacts feature.

Key Features:
- AI model integration with system prompts teaching artifact usage
- Inline preview cards in chat messages with collapsible previews
- On-demand side panel overlay (replaces split view)
- Preview/Code toggle for rendered markdown vs raw content
- Artifacts sidebar for managing multiple artifacts per thread
- Monaco editor integration for code artifacts
- Autosave with debounced writes and conflict detection
- Diff preview system for proposed updates
- Keyboard shortcuts and quick switcher
- Export/import functionality

Backend (Rust):
- New artifacts module in src-tauri/src/core/artifacts/
- 12 Tauri commands for CRUD, proposals, and export/import
- Atomic file writes with SHA256 hashing (see the sketch after this commit)
- Sharded history storage with pruning policy
- Path traversal validation and UTF-8 enforcement

Frontend (TypeScript/React):
- ArtifactSidePanel: Claude-style overlay panel from right
- InlineArtifactCard: Preview cards embedded in chat
- ArtifactsSidebar: Floating list for artifact switching
- Enhanced ArtifactActionMessage: Parses AI metadata from content
- useArtifacts: Zustand store with autosave and conflict resolution

Types:
- Extended ThreadMessage.metadata with ArtifactAction
- ProposedContentRef supports inline and temp storage
- MIME-like content_type values for extensibility

Platform:
- Fixed platform detection to check window.__TAURI__
- Web service stubs for browser mode compatibility
- Updated assistant system prompts in both extension and web-app

This implements the complete workflow modeled on Claude's:
1. AI only creates artifacts when explicitly requested
2. Inline cards appear in chat with preview buttons
3. Side panel opens on demand, not automatic split
4. Users can toggle Preview/Code views and edit content
5. Autosave and version management prevent data loss
2025-11-02 12:19:36 -07:00
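A minimal sketch of the atomic-write-plus-hash pattern described above, assuming the `sha2` crate; the function name and temp-file scheme here are illustrative, not the actual code in src-tauri/src/core/artifacts/:

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

use sha2::{Digest, Sha256};

/// Write `content` to `path` atomically and return its SHA-256 hex digest.
fn write_artifact_atomic(path: &Path, content: &str) -> std::io::Result<String> {
    // Write to a sibling temp file first so readers never see a partial file.
    let tmp = path.with_extension("tmp");
    {
        let mut f = fs::File::create(&tmp)?;
        f.write_all(content.as_bytes())?;
        f.sync_all()?; // flush to disk before the rename
    }
    // rename() is atomic within the same filesystem.
    fs::rename(&tmp, path)?;

    // The digest gives the frontend a cheap conflict check: compare hashes
    // before letting a debounced autosave overwrite concurrent edits.
    let digest = Sha256::digest(content.as_bytes());
    Ok(format!("{digest:x}"))
}
```

A crash mid-write leaves the previous version of the artifact intact, since the target file is only replaced by the final rename.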
Minh141120
15c426aefc chore: update org name 2025-10-28 17:26:27 +07:00
Dinh Long Nguyen
340042682a UI/UX enhancement 2025-10-09 03:48:51 +07:00
Dinh Long Nguyen
ff93dc3c5c Merge branch 'dev' into feat/file-attachment 2025-10-08 16:34:45 +07:00
Dinh Long Nguyen
510c4a5188 working attachments 2025-10-08 16:08:40 +07:00
Vanalite
fa61163350 fix: Fix openssl issue on mobile after merging 2025-10-05 14:40:39 +07:00
Vanalite
9720ad368e feat: use sql for mobile storage 2025-10-02 18:09:33 +07:00
Vanalite
6bd623c020 fix: Fix cargo test 2025-09-30 17:19:58 +07:00
Vanalite
814024982e feat: Experiment removing hardware permission 2025-09-25 00:49:14 +07:00
Vanalite
b2c5063e0b Merge remote-tracking branch 'origin/dev' into mobile/dev
# Conflicts:
#	src-tauri/src/core/server/proxy.rs
#	src-tauri/tauri.conf.json
#	web-app/src/containers/LeftPanel.tsx
#	web-app/src/containers/__tests__/ChatInput.test.tsx
#	web-app/src/lib/platform/const.ts
#	yarn.lock
2025-09-24 16:01:33 +07:00
Roushan Kumar Singh
3f51c35229
feat: support .zip archives for manual backend install (#6534)
* feat(llamacpp): support .zip archives for manual backend install

* Update Lock Files
2025-09-23 18:02:06 +05:30
Vanalite
8e1654961a chore: Configure iOS to use the same build mechanism, removing an unnecessary plugin 2025-09-22 11:28:01 +07:00
Vanalite
003598204e Merge remote-tracking branch 'origin/dev' into mobile/dev
# Conflicts:
#	src-tauri/.cargo/config.toml
#	src-tauri/Cargo.toml
#	src-tauri/src/lib.rs
#	web-app/src/containers/__tests__/ChatInput.test.tsx
#	web-app/src/routeTree.gen.ts
#	web-app/src/routes/index.tsx
#	web-app/src/routes/threads/$threadId.tsx
#	yarn.lock
2025-09-22 11:24:20 +07:00
Louis
86a92ead85
Merge pull request #6469 from menloresearch/fix/deeplink-not-work-on-windows
fix: deeplink issue on Windows
2025-09-18 17:47:00 +07:00
Vanalite
15d56e8e7e chore: Shrink the Android app to minimal size (release build type) 2025-09-18 13:35:50 +07:00
Vanalite
21d0943aa4 chore: Separate configuration for android build in release mode 2025-09-18 11:32:52 +07:00
Louis
a85ffa270e
fix: missing tray feature for testing 2025-09-17 19:48:30 +07:00
Louis
15164fc0be
feat: system tray icon build flag 2025-09-17 15:54:20 +07:00
Vanalite
633a6ac032 fix: Reconfigure and add toolchain to get the Android app building again 2025-09-16 20:38:56 +07:00
Louis
6850dda108
feat: MCP server error handling 2025-08-20 23:42:12 +07:00
Louis
13a1969150
feat: MCP - State update 2025-08-15 10:02:06 +07:00
Dinh Long Nguyen
e1c8d98bf2
Backend Architecture Refactoring (#6094) (#6162)
* add llamacpp plugin

* Refactor llamacpp plugin

* add utils plugin

* remove utils folder

* add hardware implementation

* add utils folder + move utils function

* organize cargo files

* refactor utils src

* refactor util

* apply fmt

* fmt

* Update gguf + reformat

* add permission for gguf commands

* fix cargo test windows

* revert yarn lock

* remove cargo.lock for hardware plugin

* ignore cargo.lock file

* Fix hardware invoke + refactor hardware + refactor tests, constants

* use api wrapper in extension to invoke hardware call + api wrapper build integration

* add newline at EOF (per Akarshan)

* add vi mock for getSystemInfo
2025-08-15 08:59:01 +07:00
Akarshan Biswas
f4661912b0
feat: Add GGUF metadata reading functionality (#6120)
* feat: Add GGUF metadata reading functionality

This commit introduces a new Tauri command and a corresponding function to read metadata from GGUF model files.

The new read_gguf_metadata command in the Rust backend uses the byteorder crate to parse the GGUF file format and extract key metadata. This information, including the file's version, tensor count, and a key-value map of other metadata, is then made available to the TypeScript frontend.

This functionality is a foundational step toward providing users with more detailed information about their loaded models directly within the application.

This will be refactored later.

fixes: #6001

* loadMetadata() should return

* Properly throw error to FE

* Use BufReader to improve performance
2025-08-13 22:54:20 +05:30
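A minimal sketch of the header parse that read_gguf_metadata performs, assuming the GGUF v2/v3 layout and the `byteorder` crate; the key-value section that follows the header is omitted:

```rust
use std::fs::File;
use std::io::{self, BufReader, Read};

use byteorder::{LittleEndian, ReadBytesExt};

struct GgufHeader {
    version: u32,
    tensor_count: u64,
    metadata_kv_count: u64,
}

fn read_gguf_header(path: &str) -> io::Result<GgufHeader> {
    // BufReader batches reads, the performance improvement noted above.
    let mut r = BufReader::new(File::open(path)?);

    let mut magic = [0u8; 4];
    r.read_exact(&mut magic)?;
    if &magic != b"GGUF" {
        return Err(io::Error::new(io::ErrorKind::InvalidData, "not a GGUF file"));
    }

    // All scalar fields in GGUF are little-endian.
    Ok(GgufHeader {
        version: r.read_u32::<LittleEndian>()?,
        tensor_count: r.read_u64::<LittleEndian>()?,
        metadata_kv_count: r.read_u64::<LittleEndian>()?,
    })
}
```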
Louis
59afafba0e fix: test command 2025-08-10 23:36:14 +07:00
Louis
f0a9080ef7 fix: cargo test on windows 2025-08-10 22:46:44 +07:00
Akarshan
0b7477ea56
move nix to non windows 2025-08-07 21:21:47 +05:30
Akarshan Biswas
088b9d7f25
Fix: Improve Llama.cpp model path handling and error handling (#6045)
* Improve Llama.cpp model path handling and validation

This commit refactors the load_llama_model function to improve how it handles and validates the model path.

Previously, the function extracted the model path but did not perform any validation. This change adds the following improvements:

It now checks for the presence of the -m flag.

It verifies that a path is provided after the -m flag.

It validates that the specified model path actually exists on the filesystem.

It ensures that the SessionInfo struct stores the canonicalized path of the model, which is a more robust approach.

These changes make the model loading process more reliable and provide better error handling for invalid or missing model paths.

* Exp: Use short path on Windows

* Fix: Remove error channel and handling in llama.cpp server loading

The previous implementation used a channel to receive error messages from the llama.cpp server's stdout. However, this proved unreliable, as path names can contain the very error strings we check for even during normal operation. This commit removes the error channel and associated error handling logic.
The server readiness is still determined by checking for the "server is listening" message in stdout. Errors are now handled by relying on the process exit code and capturing the full stderr output if the process fails to start or exits unexpectedly. This approach provides a more robust and accurate error detection mechanism.

* Add else block in Windows path handling

* Add some path related tests

* Fix windows tests
2025-08-05 14:17:19 +05:30
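A sketch of the -m validation steps listed above; validate_model_path is a hypothetical helper, not the extension's actual function:

```rust
use std::path::PathBuf;

fn validate_model_path(args: &[String]) -> Result<PathBuf, String> {
    // 1. The -m flag must be present.
    let idx = args
        .iter()
        .position(|a| a.as_str() == "-m")
        .ok_or_else(|| "missing -m flag".to_string())?;
    // 2. A path must follow it.
    let raw = args
        .get(idx + 1)
        .ok_or_else(|| "no model path after -m".to_string())?;
    // 3. The path must exist on the filesystem.
    let path = PathBuf::from(raw);
    if !path.exists() {
        return Err(format!("model path does not exist: {raw}"));
    }
    // 4. Return the canonical form so SessionInfo stores a stable path.
    path.canonicalize().map_err(|e| e.to_string())
}
```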
Akarshan Biswas
0aaaca05a4
fix: use direct process termination instead of console events on Windows (#5972)
* fix: remove CREATE_NEW_PROCESS_GROUP flag for proper Ctrl-C handling

CREATE_NEW_PROCESS_GROUP prevented GenerateConsoleCtrlEvent from working,
causing graceful shutdown failures. Removed to enable proper signal handling.

* Revert "fix: remove CREATE_NEW_PROCESS_GROUP flag for proper Ctrl-C handling"

This reverts commit 82ace3e72e4bf7338f422d5c79bdd6a0f8a2440e.

* fix: use direct process termination instead of console events

Simplified Windows process cleanup by removing console attachment logic
and using direct child.kill() method. More reliable for headless processes.

* Fix missing imports

* switch to tokio::time

* Don't wait while forcefully terminate process using kill API on Windows

Disabled use of windows-sys crate as graceful shutdown on Windows is unreliable in this context.

Updated cleanup.rs and server.rs to directly call child.kill().await for terminating processes on Windows.

Improved logging for process termination and error handling during kill and wait.

Removed timeout-based graceful shutdown attempt on Windows since TerminateProcess is inherently forceful and immediate.

This ensures more predictable process cleanup behavior on Windows platforms.

* final cleanups
2025-07-30 10:09:20 +05:30
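The end state of this commit, sketched with tokio's process API (the `log` macros are an assumption): on Windows, kill() maps to TerminateProcess, which is forceful and immediate, so no graceful-shutdown attempt precedes it.

```rust
use tokio::process::Child;

async fn terminate(child: &mut Child) {
    // kill() sends SIGKILL on Unix and calls TerminateProcess on Windows,
    // then waits on the child, so no separate wait() pass, console
    // attachment, or Ctrl-event logic is needed.
    match child.kill().await {
        Ok(()) => log::info!("llama server process terminated"),
        Err(e) => log::error!("failed to kill llama server process: {e}"),
    }
}
```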
Akarshan Biswas
f59739d2b0
refactor: Improve Llama.cpp backend management and auto-update (#5845)
* refactor: Improve Llama.cpp backend management and auto-update

This commit refactors the Llama.cpp extension to enhance backend management and streamline the auto-update process.

Key changes include:

Refactored configureBackends: The logic for determining the best available backend and populating settings is now more modular, preventing duplicate executions.

Dedicated Auto-update Handling: Introduced a handleAutoUpdate method to encapsulate the auto-update logic, including downloading the latest available backend and updating the internal configuration and settings.

Robust Old Backend Cleanup: The removeOldBackends method is improved to ensure only the currently used backend version and type are kept, effectively managing disk space. A delay is added for Windows to prevent file conflicts during cleanup.

Final Installation Check: An ensureFinalBackendInstallation method is added to guarantee the selected backend is installed, acting as a final safeguard after auto-update or if auto-update is disabled.

Minor Fixes:

Added console.log for save_path during decompression for better debugging.

Ensured the output directory exists before decompression in the Rust backend.

Removed extraneous console log for session info.

Updated Cargo.toml and tauri.conf.json versions.

These changes lead to a more reliable and efficient Llama.cpp backend experience within the application, particularly for users with auto-update enabled.

* fix isBackendInstalled parameters

* Address bot's comments

* Address bot comments about using a try/finally block
2025-07-22 14:35:34 +05:30
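An illustrative sketch of the removeOldBackends policy; the real logic lives in the TypeScript extension, so this Rust version only shows the idea of keeping the active backend directory and pruning the rest:

```rust
use std::fs;
use std::path::Path;

/// Delete every backend directory except the one currently in use.
fn remove_old_backends(backends_dir: &Path, keep: &str) -> std::io::Result<()> {
    // `keep` identifies the active version/type, e.g. "b4321-win-avx2"
    // (hypothetical naming). The real code also adds a delay on Windows
    // to avoid file-lock conflicts during cleanup.
    for entry in fs::read_dir(backends_dir)? {
        let entry = entry?;
        if entry.path().is_dir() && entry.file_name().to_string_lossy() != keep {
            fs::remove_dir_all(entry.path())?;
        }
    }
    Ok(())
}
```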
Louis
8ca507c01c
feat: proxy support for the new downloader (#5795)
* feat: proxy support for the new downloader

* test: remove outdated test

* ci: clean up
2025-07-17 23:10:21 +07:00
Akarshan Biswas
b736d09168
fix: Prevent spamming the /health endpoint, improve startup, and resolve compiler warnings (#5784)
* fix: Prevent spamming the /health endpoint, improve startup, and resolve compiler warnings

This commit introduces a delay and improved logic to the /health endpoint checks in the llamacpp extension, preventing excessive requests during model loading.

Additionally, it addresses several Rust compiler warnings by:
- Commenting out an unused `handle_app_quit` function in `src/core/mcp.rs`.
- Explicitly declaring `target_port`, `session_api_key`, and `buffered_body` as mutable in `src/core/server.rs`.
- Commenting out unused `tokio` imports in `src/core/setup.rs`.
- Enhancing the `load_llama_model` function in `src/core/utils/extensions/inference_llamacpp_extension/server.rs` to better monitor stdout/stderr for readiness and errors, and handle timeouts.
- Commenting out an unused `std::path::Prefix` import and adjusting `normalize_path` in `src/core/utils/mod.rs`.
- Updating the application version to 0.6.904 in `tauri.conf.json`.

* fix grammar!

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* fix grammar 2

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* reimport prefix but only on Windows

* remove instead of commenting

* remove redundant check

* sync app version in cargo.toml with tauri.conf

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-07-16 18:18:11 +05:30
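A rough sketch of the throttled /health polling this commit introduces; the actual check lives in the TypeScript llamacpp extension, and the 500 ms delay, interval, and `reqwest` usage here are assumptions:

```rust
use std::time::Duration;

async fn wait_until_healthy(port: u16, timeout: Duration) -> Result<(), String> {
    let url = format!("http://127.0.0.1:{port}/health");
    let deadline = tokio::time::Instant::now() + timeout;

    // Initial delay: the server cannot be healthy while the model is still
    // loading, so probing immediately only spams the endpoint.
    tokio::time::sleep(Duration::from_millis(500)).await;

    loop {
        if let Ok(resp) = reqwest::get(url.as_str()).await {
            if resp.status().is_success() {
                return Ok(());
            }
        }
        if tokio::time::Instant::now() >= deadline {
            return Err("timed out waiting for /health".into());
        }
        // Poll at a fixed interval instead of a tight retry loop.
        tokio::time::sleep(Duration::from_millis(500)).await;
    }
}
```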
Sam Hoang Van
9a76c94e22
update rmcp to fix issues (#5290) 2025-07-14 16:49:27 +07:00
Akarshan
6ab7d37a08
fix: Update Cargo.toml dependencies and fix Ctrl+C handling on Windows
This change updates the Windows dependencies in Cargo.toml to include additional features from the `windows-sys` crate. CreateProcess flags such as `CREATE_NEW_PROCESS_GROUP` are now enabled to allow for proper process management.
The code now properly sends Ctrl+C to the llama process on Windows and includes error handling for when the Ctrl+C command fails. Additionally, it uses the Windows API to kill the process when it times out, and properly waits for the process to exit.
2025-07-03 13:51:59 +05:30
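A sketch of the Ctrl+C path this commit wires up, assuming `windows-sys` with its console feature enabled; the timeout-and-kill fallback is elided, and the whole approach was later replaced by direct child.kill() in 0aaaca05a4:

```rust
#[cfg(windows)]
fn send_ctrl_c(pid: u32) -> Result<(), String> {
    use windows_sys::Win32::System::Console::{GenerateConsoleCtrlEvent, CTRL_C_EVENT};

    // The child must be spawned with CREATE_NEW_PROCESS_GROUP (via
    // CommandExt::creation_flags) so its pid doubles as its process-group
    // id and the Ctrl+C event reaches only that group.
    let ok = unsafe { GenerateConsoleCtrlEvent(CTRL_C_EVENT, pid) };
    if ok == 0 {
        return Err("GenerateConsoleCtrlEvent failed".into());
    }
    Ok(())
}
```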
Akarshan
663c720f2a
Add windows-sys to cargo.toml 2025-07-02 12:29:03 +07:00
Akarshan
01d49a4b28
fix: Update server process handling for Windows and Unix systems 2025-07-02 12:27:42 +07:00
Thien Tran
1eb49350e9
add is_library_available command 2025-07-02 12:27:17 +07:00
Akarshan Biswas
da23673a44
feat: Add API key generation for Llama.cpp
This commit introduces API key generation for the Llama.cpp extension.  The API key is now generated on the server side using HMAC-SHA256 and a secret key to ensure security and uniqueness.  The frontend now passes the model ID and API secret to the server to generate the key. This addresses the requirement for secure model access and authorization.
2025-07-02 12:27:12 +07:00
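A minimal sketch of the server-side derivation described here, assuming the `hmac` and `sha2` crates; the exact inputs and output encoding in Jan may differ:

```rust
use hmac::{Hmac, Mac};
use sha2::Sha256;

type HmacSha256 = Hmac<Sha256>;

/// Derive a per-model API key from the model id and a server-held secret.
/// The same (model_id, secret) pair always yields the same key, so the
/// server can re-derive and verify keys without storing them.
fn generate_api_key(model_id: &str, api_secret: &[u8]) -> String {
    let mut mac = HmacSha256::new_from_slice(api_secret)
        .expect("HMAC accepts keys of any length");
    mac.update(model_id.as_bytes());
    // Hex-encode the 32-byte tag for use as a bearer token.
    mac.finalize()
        .into_bytes()
        .iter()
        .map(|b| format!("{b:02x}"))
        .collect()
}
```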
Thien Tran
ded9ae733a
feat: Model import (download + local import) for llama.cpp extension (#5087)
* add pull and abortPull

* add model import (download only)

* write model.yaml. support local model import

* remove cortex-related command

* add TODO

* remove cortex-related command
2025-07-02 12:27:09 +07:00
Akarshan Biswas
f5b5596306
add thiserror to Cargo.toml 2025-07-02 12:26:38 +07:00
Louis
834bc39242
test: init e2e test with selenium and CI work (#5591)
* test: init e2e test

* Update yarn.lock
2025-06-29 17:12:16 +07:00
Louis
92a52dfa08
fix: tauri path env (#5233) 2025-06-10 18:31:21 +07:00
Louis
a3e78dd563
refactor: clean up migrations (#5187) 2025-06-04 00:41:14 +07:00
Louis
ecef9d7df6
feat: handle open Jan on HF GGUF repo (#5173)
* feat: handle open Jan on HF GGUF repo

* chore: reset retry attempts
2025-06-03 01:09:36 +07:00
Faisal Amir
b98c31b184
enhancement: open folder log and change data folder dialog confirm (#5159)
* enhancement: ux change data folder with confirmation and reveal in finder logs

* chore: update button open logs local api server

* Update web-app/src/components/ui/button.tsx

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* chore: handle error when change location data folder failed

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-06-02 08:54:16 +07:00
Louis
573e667c34
feat: migrate legacy local storage data to new app (#5156)
* feat: migrate legacy local storage data to new app

* chore: refactor localstorage db read

* chore: clean up

* chore: migrate api key setting

* chore: apply proxy configs

* chore: fix key
2025-06-01 22:57:01 +07:00
Thien Tran
c324ed592a
feat: Hardware info replacement for cortex (#4925) 2025-05-23 12:59:19 +08:00
vansangpfiev
e43b109291
fix: tauri updater (#5051)
* fix: updater

* chore: sync latest nightly

* chore: ignore electron updater config

* chore: upload signatures

* chore: update connect-src

* chore: add log

* chore: correct path macos s3

* fix: close cortex before restarting

* chore: clean

* chore: comment

* Revert "chore: update connect-src"

This reverts commit a592845c0b5293c121fb17671c14bb1f9958bf00.

* chore: update latest.yml

* chore: cleanup

* chore: stop uploading yml for electron

* chore: linux workflow
2025-05-22 14:16:10 +07:00
Faisal Amir
4adaeed3da chore: show data folder location and prepare UI to let the user change it 2025-05-20 14:10:49 +07:00
Thien Tran
16514050b9
chore: Pin rmcp commit (#5014) 2025-05-19 14:07:09 +08:00
Thien Tran
4bde6645d0
feat: Download manager for llama.cpp extension (#4933) 2025-05-16 15:01:42 +08:00