6213 Commits

Dinh Long Nguyen
1e214c5be2
Merge pull request #6569 from menloresearch/stag-web
Stag web to prod web
2025-09-23 21:50:02 +07:00
Dinh Long Nguyen
b3c3cc8f26
Merge pull request #6568 from menloresearch/feat/sync-staging
Dev web sync to staging
2025-09-23 21:33:25 +07:00
dinhlongviolin1
3668bfb14f Merge remote-tracking branch 'origin/stag-web' into feat/sync-staging 2025-09-23 21:31:44 +07:00
Dinh Long Nguyen
bc8ff74e98
Merge pull request #6566 from menloresearch/feat/update-release-note
Update release note on dev-web
2025-09-23 21:27:28 +07:00
dinhlongviolin1
2367c156e2 Update release note 2025-09-23 21:26:01 +07:00
Dinh Long Nguyen
494db746f7
Merge pull request #6565 from menloresearch/feat/sync-prod-web
Feat/sync prod web
2025-09-23 21:19:11 +07:00
dinhlongviolin1
94bfad8d27 Merge branch 'dev-web' into feat/sync-prod-web 2025-09-23 21:16:39 +07:00
Dinh Long Nguyen
685054c5bc
Sync dev with dev-web (#6564)
* feat: Re-arrange docs as needed

* 🔧 chore: re-arrange the folder structure

* Add server docs

Add server docs

* enhancement: migrate handbook and janv2

* Update docs/src/components/ui/dropdown-button.tsx

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Update docs/src/pages/_meta.json

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* chore: update feedback #1

* fix: layout ability model

* feat: add azure as first class provider (#6555)

* feat: add azure as first class provider

* fix: deployment url

* Update handbook: restructure content and add new sections

- Add betting-on-open-source.mdx and open-superintelligence.mdx
- Update handbook index with new structure
- Remove outdated handbook sections (growth, happy, history, money, talent, teams, users, why)
- Update handbook _meta.json to reflect new structure

* chore: fix meta data json

* chore: update missing install

* fix: Catch local API server various errors (#6548)

* fix: Catch local API server various errors

* chore: Add tests to cover error catches

* fix: LocalAPI server trusted host should accept asterisk (#6551)

* feat: support .zip archives for manual backend install (#6534)

* feat(llamacpp): support .zip archives for manual backend install

* Update Lock Files

* Merge pull request #6563 from menloresearch/feat/web-minor-ui-tweak-login

feat: tweak login UI

---------

Co-authored-by: LazyYuuki <huy2840@gmail.com>
Co-authored-by: nngostuds <locnguyen1986@gmail.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Louis <louis@jan.ai>
Co-authored-by: eckartal <emre@jan.ai>
Co-authored-by: Nghia Doan <dhnghia0604@gmail.com>
Co-authored-by: Roushan Kumar Singh <158602016+github-roushan@users.noreply.github.com>
2025-09-23 21:12:08 +07:00
Dinh Long Nguyen
b322c7649b
Merge pull request #6563 from menloresearch/feat/web-minor-ui-tweak-login
feat: tweak login UI
2025-09-23 21:09:58 +07:00
Roushan Kumar Singh
3f51c35229
feat: support .zip archives for manual backend install (#6534)
* feat(llamacpp): support .zip archives for manual backend install

* Update Lock Files
2025-09-23 18:02:06 +05:30
Louis
7f09c36a92
fix: LocalAPI server trusted host should accept asterisk (#6551) 2025-09-23 17:45:37 +07:00
Nghia Doan
6f827872fb
fix: Catch local API server various errors (#6548)
* fix: Catch local API server various errors

* chore: Add tests to cover error catches
2025-09-23 17:40:16 +07:00
Faisal Amir
9741bf15b5
Merge pull request #6535 from menloresearch/docs/new-docs
 feat: Re-arrange docs as needed
2025-09-23 17:27:20 +07:00
Faisal Amir
8153287520
Merge pull request #6552 from menloresearch/docs/v2-landing
enhancement: migrate handbook and janv2
2025-09-23 17:26:45 +07:00
Faisal Amir
a6a2f0c191 chore: update missing install 2025-09-23 17:26:19 +07:00
Faisal Amir
3ec41e080f chore: fix meta data json 2025-09-23 17:17:35 +07:00
eckartal
26ed125693 Update handbook: restructure content and add new sections
- Add betting-on-open-source.mdx and open-superintelligence.mdx
- Update handbook index with new structure
- Remove outdated handbook sections (growth, happy, history, money, talent, teams, users, why)
- Update handbook _meta.json to reflect new structure
2025-09-23 18:10:59 +08:00
Faisal Amir
3bbce97329
Merge pull request #6559 from menloresearch/fix/layout-ability-model
fix: layout ability model
2025-09-23 16:13:33 +07:00
Louis
8a51cc1656
feat: add azure as first class provider (#6555)
* feat: add azure as first class provider

* fix: deployment url
2025-09-23 16:09:06 +07:00
Faisal Amir
3133d40081 fix: layout ability model 2025-09-23 15:27:41 +07:00
Dinh Long Nguyen
7413f1354f
bring dev changes to web dev (#6557)
* fix: avoid error validate nested dom

* fix: correct context shift flag handling in LlamaCPP extension (#6404) (#6431)

* fix: correct context shift flag handling in LlamaCPP extension

The previous implementation added the `--no-context-shift` flag when `cfg.ctx_shift` was disabled, which conflicted with the llama.cpp CLI where the presence of `--context-shift` enables the feature.
The logic is updated to push `--context-shift` only when `cfg.ctx_shift` is true, ensuring the extension passes the correct argument and behaves as expected.
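A minimal TypeScript sketch of the corrected behavior described above (the config shape and helper name are illustrative, not the extension's actual code):

```typescript
interface LlamacppConfig {
  ctx_shift: boolean
}

// The flag is opt-in on the llama.cpp CLI, so it is only pushed when the
// feature is enabled; previously `--no-context-shift` was pushed when
// `cfg.ctx_shift` was false, which the CLI does not interpret that way.
function buildContextShiftArgs(cfg: LlamacppConfig): string[] {
  const args: string[] = []
  if (cfg.ctx_shift) {
    args.push('--context-shift')
  }
  return args
}
```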

* feat: detect model out of context during generation

---------

Co-authored-by: Dinh Long Nguyen <dinhlongviolin1@gmail.com>

* chore: add install-rust-targets step for macOS universal builds

* fix: make install-rust-targets a dependency

* enhancement: copy MCP permission

* chore: make action button capitalized

* Update web-app/src/locales/en/tool-approval.json

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* chore: simplify macos workflow

* fix: KVCache size calculation and refactor (#6438)

- Removed the unused `getKVCachePerToken` helper and replaced it with a unified `estimateKVCache` that returns both total size and per‑token size.
- Fixed the KV cache size calculation to account for all layers, correcting previous under‑estimation.
- Added proper clamping of user‑requested context lengths to the model’s maximum.
- Refactored VRAM budgeting: introduced explicit reserves, fixed engine overhead, and separate multipliers for VRAM and system RAM based on memory mode.
- Implemented a more robust planning flow with clear GPU, Hybrid, and CPU pathways, including fallback configurations when resources are insufficient.
- Updated default context length handling and safety buffers to prevent OOM situations.
- Adjusted usable memory percentage to 90% and refined logging for easier debugging.
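A hedged sketch of the unified estimator's shape as described above (constants and field names are assumptions):

```typescript
interface KVCacheEstimate {
  totalBytes: number
  perTokenBytes: number
}

// Per token, the KV cache stores a K and a V vector for *every* layer;
// the earlier code under-counted by not covering all layers.
function estimateKVCache(
  nLayers: number,
  nEmbd: number,
  requestedCtx: number,
  maxCtx: number,
  bytesPerElement = 2 // f16 cache assumed
): KVCacheEstimate {
  const ctx = Math.min(requestedCtx, maxCtx) // clamp to the model's maximum
  const perTokenBytes = 2 * nLayers * nEmbd * bytesPerElement // K + V, all layers
  return { totalBytes: perTokenBytes * ctx, perTokenBytes }
}
```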

* fix: detect allocation failures as out-of-memory errors (#6459)

The Llama.cpp backend can emit the phrase “failed to allocate” when it runs out of memory.
Adding this check ensures such messages are correctly classified as out‑of‑memory errors,
providing more accurate error handling for CPU backends.
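A small sketch of the classification described above, with illustrative names (the pattern list beyond "failed to allocate" is an assumption):

```typescript
const OOM_PATTERNS = ['out of memory', 'failed to allocate']

// Classify a backend log line as an out-of-memory error when it contains
// any of the known allocation-failure phrases.
function isOutOfMemoryError(backendLog: string): boolean {
  const lower = backendLog.toLowerCase()
  return OOM_PATTERNS.some((pattern) => lower.includes(pattern))
}
```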

* fix: pathname file install BE

* fix: set default memory mode and clean up unused import (#6463)

Use fallback value 'high' for memory_util config and remove unused GgufMetadata import.

* fix: auto update should not block popup

* fix: remove log

* fix: improve edit message with attachment image

* fix: improve edit message with attachment image

* fix: type imageurl

* fix: immediate dropdown value update

* fix: linter

* fix/validate-mmproj-from-general-basename

* fix/revalidate-model-gguf

* fix: loader when importing

* fix/mcp-json-validation

* chore: update locale mcp json

* fix: new extension settings aren't populated properly (#6476)

* chore: embed webview2 bootstrapper in tauri windows

* fix: validate type mcp json

* chore: prevent click outside for edit dialog

* feat: add qa checklist

* chore: remove old checklist

* chore: correct typo in checklist

* fix: correct memory suitability checks in llamacpp extension (#6504)

The previous implementation mixed model size and VRAM checks, leading to inaccurate status reporting (e.g., false RED results).
- Simplified import statement for `readGgufMetadata`.
- Fixed RAM/VRAM comparison by removing unnecessary parentheses.
- Replaced ambiguous `modelSize > usableTotalMemory` check with a clear `totalRequired > usableTotalMemory` hard‑limit condition.
- Refactored the status logic to explicitly handle the CPU‑GPU hybrid scenario, returning **YELLOW** when the total requirement fits combined memory but exceeds VRAM.
- Updated comments for better readability and maintenance.
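The decision flow described above, sketched in TypeScript (variable names follow the commit message; the function signature is an assumption):

```typescript
type Suitability = 'GREEN' | 'YELLOW' | 'RED'

function checkModelSuitability(
  totalRequired: number, // model weights + KV cache + overhead
  usableVram: number,
  usableTotalMemory: number // combined VRAM + system RAM budget
): Suitability {
  if (totalRequired > usableTotalMemory) return 'RED' // hard limit: does not fit at all
  if (totalRequired > usableVram) return 'YELLOW' // fits only in CPU-GPU hybrid mode
  return 'GREEN' // fits entirely in VRAM
}
```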

* fix: thread rerender issue

* chore: clean up console log

* chore: uncomment irrelevant fix

* fix: linter

* chore: remove duplicated block

* fix: tests

* Merge pull request #6469 from menloresearch/fix/deeplink-not-work-on-windows

fix: deeplink issue on Windows

* fix: reduce unnecessary rerender due to current thread retrieval

* fix: reduce app layout rerender due to router state update

* fix: avoid the entire app layout re-render on route change

* clean: unused import

* Merge pull request #6514 from menloresearch/feat/web-gtag

feat: Add GA Measurement and change keyboard bindings on web

* chore: update build tauri commands

* chore: remove unused task

* fix: should not rerender thread message components when typing

* fix re-render issue

* direct tokenspeed access

* chore: sync latest

* feat: Add Jan API server Swagger UI (#6502)

* feat: Add Jan API server Swagger UI

- Serve OpenAPI spec (`static/openapi.json`) directly from the proxy server.
- Implement Swagger UI assets (`swagger-ui.css`, `swagger-ui-bundle.js`, `favicon.ico`) and a simple HTML wrapper under `/docs`.
- Extend the proxy whitelist to include Swagger UI routes.
- Add routing logic for `/openapi.json`, `/docs`, and Swagger UI static files.
- Update whitelisted paths and integrate CORS handling for the new endpoints.

* feat: serve Swagger UI at root path

The Swagger UI endpoint previously lived under `/docs`. The route handling and
exclusion list have been updated so the UI is now served directly at `/`.
This simplifies access, aligns with the expected root URL in the Tauri
frontend, and removes the now‑unused `/docs` path handling.
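A sketch of the resulting route table, after the move from `/docs` to the root path (handler names are illustrative):

```typescript
type Route = 'spec' | 'swagger-ui' | 'static-asset' | 'proxy'

function resolveRoute(pathname: string): Route {
  if (pathname === '/openapi.json') return 'spec' // serves static/openapi.json
  if (pathname === '/') return 'swagger-ui' // HTML wrapper, formerly under /docs
  if (['/swagger-ui.css', '/swagger-ui-bundle.js', '/favicon.ico'].includes(pathname))
    return 'static-asset' // whitelisted Swagger UI files
  return 'proxy' // everything else is forwarded to the local API server
}
```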

* feat: add model loading state and translations for local API server

Implemented a loading indicator for model startup, updated the start/stop button to reflect model loading and server starting states, and disabled interactions while pending. Added new translation keys (`loadingModel`, `startingServer`) across all supported locales (en, de, id, pl, vn, zh-CN, zh-TW) and integrated them into the UI. Included a small delay after model start to ensure backend state consistency. This improves user feedback and prevents race conditions during server initialization.

* fix: tests

* fix: linter

* fix: build

* docs: update changelog for v0.6.10

* fix(number-input): preserve '0.0x' format when typing (#6520)

* docs: update url for gifs and videos

* chore: update url for jan-v1 docs

* fix: Typo in openapi JSON (#6528)

* enhancement: toaster delete mcp server

* Update 2025-09-18-auto-optimize-vision-imports.mdx

* Merge pull request #6475 from menloresearch/feat/bump-tokenjs

feat: fix remote provider vision capability

* fix: prevent consecutive messages with same role (#6544)

* fix: prevent consecutive messages with same role

* fix: tests

* fix: first message should not be assistant

* fix: tests

* feat: Prompt progress when streaming (#6503)

* feat: Prompt progress when streaming

- BE changes:
    - Add a `return_progress` flag to `chatCompletionRequest` and a corresponding `prompt_progress` payload in `chatCompletionChunk`. Introduce `chatCompletionPromptProgress` interface to capture cache, processed, time, and total token counts.
    - Update the Llamacpp extension to always request progress data when streaming, enabling UI components to display real‑time generation progress and leverage llama.cpp’s built‑in progress reporting.

* Make return_progress optional

* chore: update ui prompt progress before streaming content

* chore: remove log

* chore: remove progress when percentage >= 100

* chore: set timeout prompt progress

* chore: move prompt progress outside streaming content

* fix: tests

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Louis <louis@jan.ai>

* chore: add ci for web stag (#6550)

* feat: add getTokensCount method to compute token usage (#6467)

* feat: add getTokensCount method to compute token usage

Implemented a new async `getTokensCount` function in the LLaMA.cpp extension.
The method validates the model session, checks process health, applies the request template, and tokenizes the resulting prompt to return the token count. Includes detailed error handling for crashed models and API failures, enabling callers to assess token usage before sending completions.

* Fix: typos

* chore: update ui token usage

* chore: remove unused code

* feat: add image token handling for multimodal LlamaCPP models

Implemented support for counting image tokens when using vision-enabled models:
- Extended `SessionInfo` with optional `mmprojPath` to store the multimodal project file.
- Propagated `mmproj_path` from the Tauri plugin into the session info.
- Added import of `chatCompletionRequestMessage` and enhanced token calculation logic in the LlamaCPP extension:
  - Detects image content in messages.
  - Reads GGUF metadata from `mmprojPath` to compute accurate image token counts.
  - Provides a fallback estimation if metadata reading fails.
  - Returns the sum of text and image tokens.
- Introduced helper methods `calculateImageTokens` and `estimateImageTokensFallback`.
- Minor clean‑ups such as comment capitalization and debug logging.

* chore: update FE send params message include content type image_url

* fix mmproj path from session info and num tokens calculation

* fix: Correct image token estimation calculation in llamacpp extension

This commit addresses an inaccurate token count for images in the llama.cpp extension.

The previous logic incorrectly calculated the token count based on image patch size and dimensions. This has been replaced with a more precise method that uses the clip.vision.projection_dim value from the model metadata.

Additionally, unnecessary debug logging was removed, and a new log was added to show the mmproj metadata for improved visibility.

* fix per image calc

* fix: crash due to force unwrap

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Louis <louis@jan.ai>

* fix: custom fetch for all providers (#6538)

* fix: custom fetch for all providers

* fix: run in development should use built-in fetch

* add full-width model names (#6350)

* fix: prevent relocation to root directories (#6547)

* fix: prevent relocation to root directories

* Update web-app/src/locales/zh-TW/settings.json

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* feat: web remote conversation (#6554)

* feat: implement conversation endpoint

* use conversation aware endpoint

* fetch message correctly

* preserve first message

* fix logout

* fix broadcast issue locally + auth not refreshing profile on other tabs + clean up and sync messages

* add is dev tag

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Akarshan Biswas <akarshan@menlo.ai>
Co-authored-by: Minh141120 <minh.itptit@gmail.com>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Nguyen Ngoc Minh <91668012+Minh141120@users.noreply.github.com>
Co-authored-by: Louis <louis@jan.ai>
Co-authored-by: Bui Quang Huy <34532913+LazyYuuki@users.noreply.github.com>
Co-authored-by: Roushan Singh <github.rtron18@gmail.com>
Co-authored-by: hiento09 <136591877+hiento09@users.noreply.github.com>
Co-authored-by: Alexey Haidamaka <gdmkaa@gmail.com>
2025-09-23 15:13:15 +07:00
Dinh Long Nguyen
df61546942
feat: web remote conversation (#6554)
* feat: implement conversation endpoint

* use conversation aware endpoint

* fetch message correctly

* preserve first message

* fix logout

* fix broadcast issue locally + auth not refreshing profile on other tabs + clean up and sync messages

* add is dev tag
2025-09-23 15:09:45 +07:00
Faisal Amir
2f85f214ea chore: update feedback #1 2025-09-23 13:29:28 +07:00
Faisal Amir
3c004819ca
Update docs/src/pages/_meta.json
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-09-23 12:49:55 +07:00
Faisal Amir
9a936ef826
Update docs/src/components/ui/dropdown-button.tsx
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-09-23 12:49:39 +07:00
Faisal Amir
d2c86801b4 enhancement: migrate handbook and janv2 2025-09-23 12:45:57 +07:00
Louis
292941e1d0
fix: prevent relocation to root directories (#6547)
* fix: prevent relocation to root directories

* Update web-app/src/locales/zh-TW/settings.json

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-09-23 10:16:11 +07:00
Alexey Haidamaka
5adc0d9d46
add full-width model names (#6350) 2025-09-23 10:14:21 +07:00
Louis
568ee857d5
fix: custom fetch for all providers (#6538)
* fix: custom fetch for all providers

* fix: run in development should use built-in fetch
2025-09-23 09:55:36 +07:00
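A hedged sketch of the environment switch implied by the two commits above; `tauriFetch` stands in for whatever native HTTP client the app ships, and the dev check is an assumption:

```typescript
// In production every remote provider goes through a custom fetch (e.g. a
// Tauri-side HTTP client not subject to browser CORS rules); in development
// the built-in fetch is used instead.
declare const tauriFetch: typeof fetch // assumption: native client with a fetch-compatible signature

function providerFetch(): typeof fetch {
  const isDev = process.env.NODE_ENV === 'development' // assumption: Node-style env flag
  return isDev ? fetch : tauriFetch
}
```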
Akarshan Biswas
885da29f28
feat: add getTokensCount method to compute token usage (#6467)
* feat: add getTokensCount method to compute token usage

Implemented a new async `getTokensCount` function in the LLaMA.cpp extension.
The method validates the model session, checks process health, applies the request template, and tokenizes the resulting prompt to return the token count. Includes detailed error handling for crashed models and API failures, enabling callers to assess token usage before sending completions.

* Fix: typos

* chore: update ui token usage

* chore: remove unused code

* feat: add image token handling for multimodal LlamaCPP models

Implemented support for counting image tokens when using vision-enabled models:
- Extended `SessionInfo` with optional `mmprojPath` to store the multimodal project file.
- Propagated `mmproj_path` from the Tauri plugin into the session info.
- Added import of `chatCompletionRequestMessage` and enhanced token calculation logic in the LlamaCPP extension:
  - Detects image content in messages.
  - Reads GGUF metadata from `mmprojPath` to compute accurate image token counts.
  - Provides a fallback estimation if metadata reading fails.
  - Returns the sum of text and image tokens.
- Introduced helper methods `calculateImageTokens` and `estimateImageTokensFallback`.
- Minor clean‑ups such as comment capitalization and debug logging.

* chore: update FE send params message include content type image_url

* fix mmproj path from session info and num tokens calculation

* fix: Correct image token estimation calculation in llamacpp extension

This commit addresses an inaccurate token count for images in the llama.cpp extension.

The previous logic incorrectly calculated the token count based on image patch size and dimensions. This has been replaced with a more precise method that uses the clip.vision.projection_dim value from the model metadata.

Additionally, unnecessary debug logging was removed, and a new log was added to show the mmproj metadata for improved visibility.

* fix per image calc

* fix: crash due to force unwrap

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Louis <louis@jan.ai>
2025-09-23 07:52:19 +05:30
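A hedged sketch of the flow these commits describe; `calculateImageTokens` and `estimateImageTokensFallback` are named in the messages, while the remaining signatures are assumptions:

```typescript
// Ambient stubs standing in for the extension's real helpers (assumptions):
declare function readGgufMetadata(path: string): Promise<Record<string, unknown>>
declare function calculateImageTokens(mmprojMeta: Record<string, unknown>): number
declare function estimateImageTokensFallback(): number

async function getTokensCount(
  textTokenCount: number,
  imageCount: number,
  mmprojPath?: string
): Promise<number> {
  if (imageCount === 0 || !mmprojPath) return textTokenCount
  try {
    // clip.vision.projection_dim in the mmproj metadata drives the per-image count
    const meta = await readGgufMetadata(mmprojPath)
    return textTokenCount + imageCount * calculateImageTokens(meta)
  } catch {
    // fall back to a rough per-image estimate when metadata cannot be read
    return textTokenCount + imageCount * estimateImageTokensFallback()
  }
}
```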
hiento09
14768a6ed6 Merge branch 'dev-web' into stag-web 2025-09-23 02:01:43 +07:00
hiento09
05e58cffe8
chore: add ci for web stag (#6550) 2025-09-23 01:58:48 +07:00
Akarshan Biswas
bf7f176741
feat: Prompt progress when streaming (#6503)
* feat: Prompt progress when streaming

- BE changes:
    - Add a `return_progress` flag to `chatCompletionRequest` and a corresponding `prompt_progress` payload in `chatCompletionChunk`. Introduce `chatCompletionPromptProgress` interface to capture cache, processed, time, and total token counts.
    - Update the Llamacpp extension to always request progress data when streaming, enabling UI components to display real‑time generation progress and leverage llama.cpp’s built‑in progress reporting.

* Make return_progress optional

* chore: update ui prompt progress before streaming content

* chore: remove log

* chore: remove progress when percentage >= 100

* chore: set timeout prompt progress

* chore: move prompt progress outside streaming content

* fix: tests

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Louis <louis@jan.ai>
2025-09-22 20:37:27 +05:30
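A sketch of the payload shape this commit describes; the interface name comes from the message, while the field names and percentage math are assumptions:

```typescript
interface chatCompletionPromptProgress {
  cache: number // prompt tokens reused from the KV cache
  processed: number // prompt tokens evaluated so far
  time: number // elapsed processing time
  total: number // total prompt tokens
}

interface chatCompletionChunkDelta {
  content?: string
  prompt_progress?: chatCompletionPromptProgress
}

// The UI can derive a percentage and drop the indicator once it reaches 100:
function promptProgressPct(p: chatCompletionPromptProgress): number {
  return p.total > 0 ? (100 * (p.processed + p.cache)) / p.total : 100
}
```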
Faisal Amir
e1294cdc30
Merge pull request #6529 from menloresearch/enhancement/toaster-delete-mcp
enhancement: toaster delete mcp server
2025-09-22 21:12:02 +07:00
Louis
0d2c99a413
fix: prevent consecutive messages with same role (#6544)
* fix: prevent consecutive messages with same role

* fix: tests

* fix: first message should not be assistant

* fix: tests
2025-09-22 19:27:45 +07:00
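A minimal sketch of the two invariants this commit names: no two consecutive messages with the same role, and the conversation must not open with an assistant turn. The merge-vs-drop strategy is an assumption:

```typescript
interface ChatMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}

function normalizeMessages(messages: ChatMessage[]): ChatMessage[] {
  const result: ChatMessage[] = []
  for (const msg of messages) {
    if (result.length === 0 && msg.role === 'assistant') continue // first message must not be assistant
    const prev = result[result.length - 1]
    if (prev && prev.role === msg.role) {
      prev.content += '\n' + msg.content // merge back-to-back same-role turns
      continue
    }
    result.push({ ...msg }) // copy so merging never mutates the caller's array
  }
  return result
}
```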
nngostuds
c327622a6c Add server docs
Add server docs
2025-09-22 17:29:02 +07:00
LazyYuuki
ccce3b24e1 🔧 chore: re-arrange the folder structure 2025-09-22 17:23:29 +08:00
Louis
b0b84b7eda
Merge pull request #6475 from menloresearch/feat/bump-tokenjs
feat: fix remote provider vision capability
2025-09-22 14:37:26 +07:00
Nguyen Ngoc Minh
8cdb021b3d
Merge pull request #6518 from menloresearch/chore/update-build-tauri
chore: update build tauri commands
2025-09-22 11:32:01 +07:00
LazyYuuki
48ddc20026 feat: Re-arrange docs as needed 2025-09-21 19:24:48 +08:00
Bui Quang Huy
361c9eeff4
Merge pull request #6524 from menloresearch/docs/update-changelog
docs: update changelog for v0.6.10
2025-09-19 18:08:47 -07:00
Bui Quang Huy
b6169a48e6
Update 2025-09-18-auto-optimize-vision-imports.mdx 2025-09-20 09:04:11 +08:00
Faisal Amir
29862435ab
Merge pull request #6526 from github-roushan/fix-number-input
fix(number-input): preserve '0.0x' format when typing (#6520)
2025-09-19 23:25:22 +07:00
Faisal Amir
ec425163d3 enhancement: toaster delete mcp server 2025-09-19 23:25:01 +07:00
Akarshan Biswas
991bbec53a
fix: Typo in openapi JSON (#6528) 2025-09-19 12:53:39 +05:30
Nguyen Ngoc Minh
e1fa60be99
Merge pull request #6527 from menloresearch/docs/update-url-for-gif-and-videos
docs: update url for gifs and videos
2025-09-19 14:22:12 +07:00
Minh141120
465544cc2c chore: update url for jan-v1 docs 2025-09-19 14:17:23 +07:00
Minh141120
4694ab8350 docs: update url for gifs and videos 2025-09-19 14:10:59 +07:00
Roushan Singh
ae2532d40d fix(number-input): preserve '0.0x' format when typing (#6520) 2025-09-19 11:36:06 +05:30
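One plausible reading of this fix, sketched below: keep the raw string as the input's draft state and only commit a parsed number when parsing is lossless, so a partial value like `0.0` survives while the user is still typing. This is an assumption about the approach, not the component's actual code:

```typescript
function onNumberInputChange(
  raw: string,
  setDraft: (text: string) => void,
  commit: (value: number) => void
): void {
  setDraft(raw) // always show exactly what was typed, e.g. "0.0"
  const parsed = Number(raw)
  // "0.0" parses to 0, but committing 0 would re-render as "0" and eat the
  // trailing ".0"; commit only when the round-trip preserves the text.
  if (raw !== '' && !Number.isNaN(parsed) && String(parsed) === raw) {
    commit(parsed)
  }
}
```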
Bui Quang Huy
2c251d0cef
Merge branch 'dev' into docs/update-changelog 2025-09-18 22:00:27 -07:00