280 Commits

Author SHA1 Message Date
theishangoswami
c02d8200ac added exa mcp 2025-09-16 04:38:03 +05:30
Maksym Krasovakyi
71e2e24112 Add model response timeout for local API server as a configurable value via UI 2025-09-15 14:25:09 +03:00
Dinh Long Nguyen
b5b6e1dc19
add mcp for web (#6411)
* add mcp for web

* update /jan/v1 endpoint to /v1

* update mise and makefile

* update yarn lock

* use mcp oauth properly
2025-09-12 12:14:10 +07:00
dinhlongviolin1
e2e572ccab
refactor: moved get_short_path to utils and use it in decompress 2025-09-11 09:52:10 +05:30
Akarshan
7ac927ff02
feat: enhance llamacpp backend management and installation
- Add `src-tauri/resources/` to `.gitignore`.
- Introduced utilities to read locally installed backends (`getLocalInstalledBackends`) and fetch remote supported backends (`fetchRemoteSupportedBackends`).
- Refactored `listSupportedBackends` to merge remote and local entries with deduplication and proper sorting.
- Exported `getBackendDir` and integrated it into the extension.
- Added helper `parseBackendVersion` and new method `checkBackendForUpdates` to detect newer backend versions (see the sketch after this entry).
- Implemented `installBackend` for manual backend archive installation, including platform‑specific binary path handling.
- Updated command‑line argument logic for `--flash-attn` to respect version‑specific defaults.
- Modified Tauri filesystem `decompress` command to remove overly strict path validation.
2025-09-11 09:52:09 +05:30
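
A minimal sketch of the version handling this entry describes, assuming llama.cpp-style build tags such as `b4567`; the tag format and function shapes are assumptions, not the extension's actual API.

```typescript
// Hypothetical sketch: parse a llama.cpp-style build tag ("b4567") into a
// comparable number, then pick the newest remote tag newer than the
// installed one. Tag format and helper names are assumptions.
function parseBackendVersion(tag: string): number | null {
  const match = /^b?(\d+)(?:-|$)/.exec(tag.trim())
  return match ? Number(match[1]) : null
}

function checkBackendForUpdates(
  installed: string,
  remoteTags: string[]
): string | null {
  const current = parseBackendVersion(installed)
  if (current === null) return null
  let best: { tag: string; version: number } | null = null
  for (const tag of remoteTags) {
    const version = parseBackendVersion(tag)
    if (version !== null && version > current && (!best || version > best.version)) {
      best = { tag, version }
    }
  }
  return best ? best.tag : null // newest tag strictly newer than installed
}
```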
Dinh Long Nguyen
5cd81bc6e8
feat: improve testing (#6395)
* add more rust tests

* fix servicehub test

* fix tauri failing on windows
2025-09-09 12:16:25 +07:00
Dinh Long Nguyen
a30eb7f968
feat: Jan Web (reusing Jan Desktop UI) (#6298)
* add platform guards

* add service management

* fix types

* move to zustand for servicehub

* update App Updater

* update tauri missing move

* update app updater

* refactor: move PlatformFeatures to separate const file

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* change tauri fetch name

* update implementation

* update extension fetch

* make web version run properly

* disabled unused web settings

* fix all tests

* fix lint

* fix tests

* add mock for extension

* fix build

* update make and mise

* fix tsconfig for web-extensions

* fix loader type

* cleanup

* fix test

* update error handling + mcp should be working

* Update mcp init

* use separate is_web_app build property

* Remove fixed model catalog url

* fix additional tests

* fix download issue (event emitter not implemented correctly)

* Update Title html

* fix app logs

* update root tsx render timing

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-05 01:47:46 +07:00
Minh141120
0efdf54819 chore: relocate LICENSE to resources folder 2025-08-26 14:18:35 +07:00
Minh141120
8cdb9e943d chore: bundle license to linux dist 2025-08-26 13:57:45 +07:00
Minh141120
e2adb3037a chore: bundle license to mac resources 2025-08-26 12:30:23 +07:00
Minh141120
87f3b168c9 chore: bundle license to app 2025-08-26 11:17:59 +07:00
Faisal Amir
4137821e53 fix: system monitor window permission 2025-08-25 17:38:30 +07:00
Akarshan Biswas
510c70bdf7
feat: Add model compatibility check and memory estimation (#6243)
* feat: Add model compatibility check and memory estimation

This commit introduces a new feature to check if a given model is supported based on available device memory.

The change includes:
- A new `estimateKVCache` method that calculates the required memory for the model's KV cache. It uses GGUF metadata such as `block_count`, `head_count`, `key_length`, and `value_length` to perform the calculation (see the sketch after this entry).
- An `isModelSupported` method that combines the model file size and the estimated KV cache size to determine the total memory required. It then checks if any available device has sufficient free memory to load the model.
- An updated error message for the `version_backend` check to be more user-friendly, suggesting a stable internet connection as a potential solution for backend setup failures.

This functionality helps prevent the application from attempting to load models that would exceed the device's memory capacity, leading to more stable and predictable behavior.

fixes: #5505

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Extend this to available system RAM if GGML device is not available

* fix: Improve model metadata and memory checks

This commit refactors the logic for checking if a model is supported by a system's available memory.

**Key changes:**
- **Remote model support**: The `read_gguf_metadata` function can now fetch metadata from a remote URL by reading the file in chunks.
- **Improved KV cache size calculation**: The KV cache size is now estimated more accurately by using `attention.key_length` and `attention.value_length` from the GGUF metadata, with a fallback to `embedding_length`.
- **Granular memory check statuses**: The `isModelSupported` function now returns a more specific status (`'RED'`, `'YELLOW'`, `'GREEN'`) to indicate whether the model weights or the KV cache are too large for the available memory.
- **Consolidated logic**: The logic for checking local and remote models has been consolidated into a single `isModelSupported` function, improving code clarity and maintainability.

These changes provide more robust and informative model compatibility checks, especially for models hosted on remote servers.

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Make ctx_size optional and use the sum of free memory across ggml devices

* feat: hub and dropdown model selection handle model compatibility

* feat: update badge model info color

* chore: enable detail page to get model compatibility

* chore: update copy

* chore: update shrink indicator UI

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-08-21 16:13:50 +05:30
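
A rough sketch of the memory math this entry describes: estimate the KV-cache footprint from GGUF metadata, then grade the total against free device memory. GGUF key names, the f16 element size, and the function shapes are assumptions; the real logic lives in `extensions/llamacpp-extension/src/index.ts`.

```typescript
// Hedged sketch of the KV-cache estimate and RED/YELLOW/GREEN grading.
type GgufMetadata = Record<string, number | string | undefined>

function estimateKVCacheBytes(meta: GgufMetadata, ctxSize: number): number {
  const arch = String(meta['general.architecture'] ?? 'llama')
  const num = (key: string) => Number(meta[`${arch}.${key}`] ?? 0)
  const heads = num('attention.head_count') || 1
  const headsKV = num('attention.head_count_kv') || heads
  // Fall back to embedding_length / head_count when key/value lengths are absent.
  const fallbackDim = num('embedding_length') / heads
  const keyLen = num('attention.key_length') || fallbackDim
  const valLen = num('attention.value_length') || fallbackDim
  const bytesPerElement = 2 // assume an f16 KV cache
  return num('block_count') * ctxSize * headsKV * (keyLen + valLen) * bytesPerElement
}

type SupportStatus = 'RED' | 'YELLOW' | 'GREEN'

function gradeModel(modelBytes: number, kvBytes: number, freeBytes: number): SupportStatus {
  if (modelBytes > freeBytes) return 'RED' // weights alone do not fit
  if (modelBytes + kvBytes > freeBytes) return 'YELLOW' // weights fit, KV cache may not
  return 'GREEN'
}
```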
Akarshan Biswas
5c3a6fec32
feat: Add support for custom environment variables to llama.cpp (#6256)
This commit adds a new setting `llamacpp_env` to the llama.cpp extension, allowing users to specify custom environment variables. These variables are passed to the backend process when it starts.

A new function `parseEnvFromString` is introduced to handle the parsing of the semicolon-separated key-value pairs from the user input. The environment variables are then used in the `load` function and when listing available devices. This enables more flexible configuration of the llama.cpp backend, such as specifying visible GPUs for Vulkan.

This change also updates the Tauri command `get_devices` to accept environment variables, ensuring that device discovery respects the user's settings.
2025-08-21 15:50:37 +05:30
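
A minimal sketch of the `parseEnvFromString` behavior described above, assuming semicolon-separated `KEY=VALUE` pairs; the example variable name is illustrative.

```typescript
// Parse semicolon-separated KEY=VALUE pairs from the user's setting, e.g.
// "GGML_VK_VISIBLE_DEVICES=0;MY_FLAG=1". Malformed entries are skipped.
function parseEnvFromString(input: string): Record<string, string> {
  const env: Record<string, string> = {}
  for (const pair of input.split(';')) {
    const idx = pair.indexOf('=')
    if (idx <= 0) continue // no key, or no '=' at all
    const key = pair.slice(0, idx).trim()
    if (key) env[key] = pair.slice(idx + 1).trim()
  }
  return env
}
```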
Dinh Long Nguyen
32a2ca95b6
feat: gguf file size + hash validation (#5266) (#6259)
* feat: gguf file size + hash validation

* fix FE tests

* update cargo tests

* handle async download for both models and mmproj

* move progress tracker to models

* handle file download cancelled

* add cancellation mid hash run
2025-08-21 16:17:58 +07:00
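
A hedged TypeScript sketch of the size-plus-hash check this PR describes; the actual validation runs in the app's backend, and the expected values here are assumed to come from the model source.

```typescript
import { createHash } from 'node:crypto'
import { createReadStream } from 'node:fs'
import { stat } from 'node:fs/promises'

// Verify a downloaded GGUF by size first (cheap), then by streaming SHA-256
// so multi-GB files are never loaded into memory at once.
async function verifyDownload(
  path: string,
  expectedSize: number,
  expectedSha256: string
): Promise<boolean> {
  const { size } = await stat(path)
  if (size !== expectedSize) return false // truncated or corrupted download
  const hash = createHash('sha256')
  for await (const chunk of createReadStream(path)) hash.update(chunk as Buffer)
  return hash.digest('hex') === expectedSha256.toLowerCase()
}
```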
Louis
6b55812739
Merge pull request #6249 from menloresearch/feat/detect-cpu-arch-run-time
feat: detect cpu arch at runtime
2025-08-21 11:51:13 +07:00
Louis
51a9021994
fix: test 2025-08-21 11:30:48 +07:00
Louis
973a8dd8cc
fix: simplify cpu arch detection 2025-08-21 10:47:39 +07:00
Louis
6850dda108
feat: MCP server error handling 2025-08-20 23:42:12 +07:00
Louis
2398c0ab33
Update src-tauri/plugins/tauri-plugin-hardware/src/tests.rs
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-08-20 22:22:25 +07:00
Louis
ebae86f3e6
feat: detect cpu arch at runtime 2025-08-20 21:37:34 +07:00
Dinh Long Nguyen
b0eec07a01
Add contributing section for jan (#6231) (#6232)
* Add contributing section for jan

* Update CONTRIBUTING.md

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-08-20 10:18:35 +07:00
Faisal Amir
5481ee9e35
Merge pull request #6134 from menloresearch/feat/attachment-ui
feat: attachment UI
2025-08-20 10:04:32 +07:00
Louis
6efdd66bbd
Merge pull request #6236 from menloresearch/feat/add-tool-call-cancellation 2025-08-20 09:04:53 +07:00
Faisal Amir
6203a93325
Merge pull request #6233 from menloresearch/fix/import-model
fix: improve import model UX
2025-08-20 09:03:33 +07:00
Louis
91f05b8f32
feat: add tool call cancellation 2025-08-19 23:27:12 +07:00
Akarshan Biswas
e761c439d7
feat: Pass API key via environment variable instead of command line argument (#6225)
This change modifies how the API key is passed to the llama-server process. Previously, it was sent as a command line argument (--api-key). This approach has been updated to pass the key via an environment variable (LLAMA_API_KEY).

This improves security by preventing the API key from being visible in the process list (ps aux on Linux, Task Manager on Windows, etc.), where it could potentially be exposed to other users or processes on the same system.

The commit also updates the Rust backend to read the API key from the environment variable instead of parsing it from the command line arguments.
2025-08-19 20:57:06 +05:30
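
The idea in one sketch: put the key in the child process's environment rather than in argv. The real change is in the Rust backend; this Node-style spawn is illustrative only.

```typescript
import { spawn } from 'node:child_process'

// Launch llama-server with the API key in the environment (LLAMA_API_KEY)
// instead of on the command line, so `ps aux` / Task Manager never shows it.
function launchServer(serverPath: string, args: string[], apiKey: string) {
  return spawn(serverPath, args, {
    env: { ...process.env, LLAMA_API_KEY: apiKey },
  })
}
```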
Faisal Amir
b828d3f84f chore: handle toaster for failed model import 2025-08-19 22:07:30 +07:00
Akarshan
9afeb5e514 feat: Add offload_mmproj option and validation
This commit introduces a new configuration option offload_mmproj to the llamacpp extension.

The offload_mmproj setting allows users to control whether the multimodal projector model is offloaded to the GPU. By default, it's offloaded for better performance. If set to false, the projector model will remain on the CPU, which can be useful in low GPU memory scenarios, though image processing might take longer.

Additionally, this commit adds validate_mmproj_path to ensure the provided --mmproj path is valid and accessible, preventing issues during model loading.

This change also refactors some invoke calls for improved readability.
2025-08-19 19:51:29 +07:00
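
A rough TypeScript equivalent of the `validate_mmproj_path` check described above; the actual check is backend-side, and the error wording here is illustrative.

```typescript
import { access, constants, stat } from 'node:fs/promises'

// Fail fast if the --mmproj path is missing, not a regular file, or
// unreadable, instead of letting model loading fail later with an opaque error.
async function validateMmprojPath(path: string): Promise<void> {
  const info = await stat(path).catch(() => null)
  if (!info || !info.isFile()) {
    throw new Error(`mmproj path is not a regular file: ${path}`)
  }
  await access(path, constants.R_OK)
}
```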
Faisal Amir
5f1cb67ffc feat: enable attachment UI 2025-08-19 19:51:01 +07:00
Louis
2492d6f9d0
fix: http mcp with headers 2025-08-18 09:29:46 +07:00
Louis
54e0f9b595
feat: add connection timeout setting 2025-08-15 12:45:02 +07:00
Louis
c8d9592ab8
chore: mcp group server, action and import json 2025-08-15 11:37:21 +07:00
Louis
25043dda7b
feat: MCP streamable http and sse transports 2025-08-15 10:12:41 +07:00
Louis
13a1969150
feat: MCP - State update 2025-08-15 10:02:06 +07:00
Dinh Long Nguyen
e1c8d98bf2
Backend Architecture Refactoring (#6094) (#6162)
* add llamacpp plugin

* Refactor llamacpp plugin

* add utils plugin

* remove utils folder

* add hardware implementation

* add utils folder + move utils function

* organize cargo files

* refactor utils src

* refactor util

* apply fmt

* fmt

* Update gguf + reformat

* add permission for gguf commands

* fix cargo test windows

* revert yarn lock

* remove cargo.lock for hardware plugin

* ignore cargo.lock file

* Fix hardware invoke + refactor hardware + refactor tests, constants

* use api wrapper in extension to invoke hardware call + api wrapper build integration

* add newline at EOF (per Akarshan)

* add vi mock for getSystemInfo
2025-08-15 08:59:01 +07:00
Akarshan Biswas
f4661912b0
feat: Add GGUF metadata reading functionality (#6120)
* feat: Add GGUF metadata reading functionality

This commit introduces a new Tauri command and a corresponding function to read metadata from GGUF model files.

The new read_gguf_metadata command in the Rust backend uses the byteorder crate to parse the GGUF file format and extract key metadata. This information, including the file's version, tensor count, and a key-value map of other metadata, is then made available to the TypeScript frontend.

This functionality is a foundational step toward providing users with more detailed information about their loaded models directly within the application.

This will be refactored later.

fixes: #6001

* loadMetadata() should return

* Properly throw error to FE

* Use BufReader to improve performance
2025-08-13 22:54:20 +05:30
Louis
9ed98614fe fix: factory reset process got blocked 2025-08-11 19:42:59 +07:00
Louis
f3dd26e499
fix: uvx and npx dirs should not be relocated 2025-08-11 14:33:58 +07:00
Louis
b924156a15
fix: bring back GPU detection 2025-08-11 13:52:20 +07:00
Louis
4f5d9b8222
Merge pull request #6089 from menloresearch/fix/clean-up-unused-apis
refactor: clean up unused hardware apis
2025-08-11 00:02:31 +07:00
Louis
59afafba0e fix: test command 2025-08-10 23:36:14 +07:00
Louis
f0a9080ef7 fix: cargo test on windows 2025-08-10 22:46:44 +07:00
Akarshan Biswas
0cfc745954
feat: Introduce structured error handling for llamacpp extension (#6087)
* feat: Introduce structured error handling for llamacpp extension

This commit introduces a structured error handling system for the `llamacpp` extension. Instead of returning simple string errors, we now use a custom `LlamacppError` struct with a specific `ErrorCode` enum. This allows the frontend to display more user-friendly and actionable error messages based on the code, rather than raw debug logs.

The changes include:
- A new `ErrorCode` enum to categorize errors (e.g., `OutOfMemory`, `ModelArchNotSupported`, `BinaryNotFound`).
- A `LlamacppError` struct to encapsulate the code, a user-facing message, and optional detailed logs (mirrored in the sketch after this entry).
- A static method `from_stderr` that intelligently parses llama.cpp's standard error output to identify and map common issues like Out of Memory errors to a specific error code.
- Refactored `ServerError` enum to wrap the new `LlamacppError` and provide a consistent serialization format for the Tauri frontend.
- Updated all relevant functions (`load_llama_model`, `get_devices`) to return the new structured error type, ensuring a more robust and predictable error flow.
- A reduced timeout for model loading from 300 to 180 seconds.

This work lays the groundwork for a more intuitive and helpful user experience, as the application can now provide clear guidance to users when a model fails to load.

* Update src-tauri/src/core/utils/extensions/inference_llamacpp_extension/server.rs

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Update src-tauri/src/core/utils/extensions/inference_llamacpp_extension/server.rs

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* chore: update FE handle error object from extension

* chore: fix property type

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-08-07 23:28:25 +05:30
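
A frontend-side sketch of the structured error shape this PR introduces; variant and field names follow the commit description, but the exact serialization is an assumption.

```typescript
// Assumed shape of the serialized LlamacppError the Tauri backend returns.
type ErrorCode = 'OutOfMemory' | 'ModelArchNotSupported' | 'BinaryNotFound' | 'Unknown'

interface LlamacppError {
  code: ErrorCode
  message: string  // user-facing summary
  details?: string // optional raw stderr from llama.cpp
}

// Map error codes to actionable guidance instead of surfacing raw logs.
function userFacingMessage(err: LlamacppError): string {
  switch (err.code) {
    case 'OutOfMemory':
      return 'Not enough memory to load this model. Try a smaller quantization or context size.'
    case 'BinaryNotFound':
      return 'The llama.cpp backend binary is missing. Try reinstalling the backend.'
    default:
      return err.message
  }
}
```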
Louis
fc7d8a7a9c
fix: test 2025-08-07 23:47:51 +07:00
Akarshan
0b7477ea56
move nix to non windows 2025-08-07 21:21:47 +05:30
Louis
9285714345
fix: tests 2025-08-07 22:38:28 +07:00
Akarshan
bdec0af791
fix windows test 2025-08-07 20:37:33 +05:30
Akarshan
9482c0a6b9
Revert "fix import on Windows"
This reverts commit b0e7030939a82baec5f12c44639d0eb6c3c1cf43.
2025-08-07 20:35:13 +05:30
Akarshan
b0e7030939
fix import on Windows 2025-08-07 20:29:05 +05:30