Compare commits


33 Commits

Author SHA1 Message Date
hiento09
083553e1e2 chore: update api domain to jan.ai (#6832) 2025-10-28 15:47:04 +07:00
Dinh Long Nguyen
39917920bd Merge branch 'dev-web' into stag-web 2025-10-24 15:25:42 +07:00
Dinh Long Nguyen
cde8e54fdd Merge branch 'dev' into dev-web 2025-10-24 14:52:39 +07:00
Dinh Long Nguyen
5b6feb7973 Merge branch 'dev' into dev-web 2025-10-24 09:02:12 +07:00
Dinh Long Nguyen
370527bb50 update tracker 2025-10-24 01:40:34 +07:00
Dinh Long Nguyen
22645549ce Merge branch 'dev' into dev-web 2025-10-24 01:33:31 +07:00
hiento09
289dc2b6d3 chore: api change domain to menlo.ai (#6764) 2025-10-08 13:25:27 +07:00
hiento09
475eede903 chore: api change domain to menlo.ai (#6764) 2025-10-08 13:24:44 +07:00
Dinh Long Nguyen
9a8aa07094 Merge branch 'dev-web' into stag-web 2025-10-02 00:51:44 +07:00
Dinh Long Nguyen
efccec0bd7 update tracker 2025-10-02 00:51:23 +07:00
Dinh Long Nguyen
47fcdfd90f update web version tracker 2025-10-02 00:51:01 +07:00
Dinh Long Nguyen
cdfcbd0a2b Merge branch 'dev' into dev-web 2025-10-02 00:48:25 +07:00
Dinh Long Nguyen
b238fbcd41 Merge branch 'dev-web' into stag-web 2025-09-26 15:57:30 +07:00
Dinh Long Nguyen
efdd1b3971 Merge branch 'dev' into dev-web 2025-09-26 15:55:14 +07:00
Dinh Long Nguyen
b3c3cc8f26
Merge pull request #6568 from menloresearch/feat/sync-staging
Dev web sync to staging
2025-09-23 21:33:25 +07:00
dinhlongviolin1
3668bfb14f Merge remote-tracking branch 'origin/stag-web' into feat/sync-staging 2025-09-23 21:31:44 +07:00
Dinh Long Nguyen
bc8ff74e98
Merge pull request #6566 from menloresearch/feat/update-release-note
Update release note on dev-web
2025-09-23 21:27:28 +07:00
dinhlongviolin1
2367c156e2 Update release note 2025-09-23 21:26:01 +07:00
Dinh Long Nguyen
494db746f7
Merge pull request #6565 from menloresearch/feat/sync-prod-web
Feat/sync prod web
2025-09-23 21:19:11 +07:00
dinhlongviolin1
94bfad8d27 Merge branch 'dev-web' into feat/sync-prod-web 2025-09-23 21:16:39 +07:00
Dinh Long Nguyen
685054c5bc
Sync dev with dev-web (#6564)
* feat: Re-arrange docs as needed

* 🔧 chore: re-arrange the folder structure

* Add server docs

Add server docs

* enhancement: migrate handbook and janv2

* Update docs/src/components/ui/dropdown-button.tsx

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Update docs/src/pages/_meta.json

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* chore: update feedback #1

* fix: layout ability model

* feat: add azure as first class provider (#6555)

* feat: add azure as first class provider

* fix: deployment url

* Update handbook: restructure content and add new sections

- Add betting-on-open-source.mdx and open-superintelligence.mdx
- Update handbook index with new structure
- Remove outdated handbook sections (growth, happy, history, money, talent, teams, users, why)
- Update handbook _meta.json to reflect new structure

* chore: fix meta data json

* chore: update missing install

* fix: Catch local API server various errors (#6548)

* fix: Catch local API server various errors

* chore: Add tests to cover error catches

* fix: LocalAPI server trusted host should accept asterisk (#6551)

* feat: support .zip archives for manual backend install (#6534)

* feat(llamacpp): support .zip archives for manual backend install

* Update Lock Files

* Merge pull request #6563 from menloresearch/feat/web-minor-ui-tweak-login

feat: tweak login UI

---------

Co-authored-by: LazyYuuki <huy2840@gmail.com>
Co-authored-by: nngostuds <locnguyen1986@gmail.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Louis <louis@jan.ai>
Co-authored-by: eckartal <emre@jan.ai>
Co-authored-by: Nghia Doan <dhnghia0604@gmail.com>
Co-authored-by: Roushan Kumar Singh <158602016+github-roushan@users.noreply.github.com>
2025-09-23 21:12:08 +07:00
Dinh Long Nguyen
7413f1354f
bring dev changes to web dev (#6557)
* fix: avoid error validate nested dom

* fix: correct context shift flag handling in LlamaCPP extension (#6404) (#6431)

* fix: correct context shift flag handling in LlamaCPP extension

The previous implementation added the `--no-context-shift` flag when `cfg.ctx_shift` was disabled, which conflicted with the llama.cpp CLI where the presence of `--context-shift` enables the feature.
The logic is updated to push `--context-shift` only when `cfg.ctx_shift` is true, ensuring the extension passes the correct argument and behaves as expected.
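
A minimal sketch of the corrected flag handling described above, in TypeScript (the `ctx_shift` field name comes from the commit text; the config shape and `args` array are assumptions, not the extension's actual code):

```typescript
// Sketch only: push the flag solely when context shift is enabled, matching
// llama.cpp CLI semantics where the presence of --context-shift turns the feature on.
interface LlamacppConfig {
  ctx_shift: boolean
}

function buildContextShiftArgs(cfg: LlamacppConfig): string[] {
  const args: string[] = []
  // Before the fix, --no-context-shift was pushed when cfg.ctx_shift was false,
  // which conflicted with the CLI's flag semantics.
  if (cfg.ctx_shift) {
    args.push('--context-shift')
  }
  return args
}
```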

* feat: detect model out of context during generation

---------

Co-authored-by: Dinh Long Nguyen <dinhlongviolin1@gmail.com>

* chore: add install-rust-targets step for macOS universal builds

* fix: make install-rust-targets a dependency

* enhancement: copy MCP permission

* chore: make action button capitalized

* Update web-app/src/locales/en/tool-approval.json

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* chore: simplify macos workflow

* fix: KVCache size calculation and refactor (#6438)

- Removed the unused `getKVCachePerToken` helper and replaced it with a unified `estimateKVCache` that returns both total size and per‑token size.
- Fixed the KV cache size calculation to account for all layers, correcting previous under‑estimation.
- Added proper clamping of user‑requested context lengths to the model’s maximum.
- Refactored VRAM budgeting: introduced explicit reserves, fixed engine overhead, and separate multipliers for VRAM and system RAM based on memory mode.
- Implemented a more robust planning flow with clear GPU, Hybrid, and CPU pathways, including fallback configurations when resources are insufficient.
- Updated default context length handling and safety buffers to prevent OOM situations.
- Adjusted usable memory percentage to 90 % and refined logging for easier debugging.
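
A rough, illustrative sketch of the unified estimator described above (the formula, parameter names, and fp16 element size are assumptions drawn from the commit text, not the extension's actual code):

```typescript
// Illustrative only: KV cache estimate covering all layers, plus a per-token figure.
interface KVCacheEstimate {
  totalBytes: number
  perTokenBytes: number
}

function estimateKVCache(
  nLayers: number,      // all transformer layers are counted, per the fix
  nKvHeads: number,     // KV attention heads
  headDim: number,      // dimension per head
  requestedCtx: number, // user-requested context length
  maxCtx: number,       // model's maximum context length
  bytesPerElement = 2   // e.g. an fp16 cache
): KVCacheEstimate {
  // Clamp the user-requested context length to the model's maximum.
  const ctx = Math.min(requestedCtx, maxCtx)
  // K and V caches per token, across every layer.
  const perTokenBytes = 2 * nLayers * nKvHeads * headDim * bytesPerElement
  return { totalBytes: perTokenBytes * ctx, perTokenBytes }
}
```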

* fix: detect allocation failures as out-of-memory errors (#6459)

The Llama.cpp backend can emit the phrase “failed to allocate” when it runs out of memory.
Adding this check ensures such messages are correctly classified as out‑of‑memory errors,
providing more accurate error handling for CPU backends.
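
The classification described above boils down to a substring check along these lines (sketch only; the extension's exact matcher may differ):

```typescript
// Sketch: classify backend log output as an out-of-memory error.
function isOutOfMemoryError(message: string): boolean {
  const lowered = message.toLowerCase()
  return (
    lowered.includes('out of memory') ||
    // llama.cpp can report allocation failures with this phrase, notably on CPU backends.
    lowered.includes('failed to allocate')
  )
}
```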

* fix: pathname file install BE

* fix: set default memory mode and clean up unused import (#6463)

Use fallback value 'high' for memory_util config and remove unused GgufMetadata import.

* fix: auto update should not block popup

* fix: remove log

* fix: improve edit message with attachment image

* fix: improve edit message with attachment image

* fix: type imageurl

* fix: immediate dropdown value update

* fix: linter

* fix/validate-mmproj-from-general-basename

* fix/revalidate-model-gguf

* fix: loader when importing

* fix/mcp-json-validation

* chore: update locale mcp json

* fix: new extension settings aren't populated properly (#6476)

* chore: embed webview2 bootstrapper in tauri windows

* fix: validate type mcp json

* chore: prevent click outside for edit dialog

* feat: add qa checklist

* chore: remove old checklist

* chore: correct typo in checklist

* fix: correct memory suitability checks in llamacpp extension (#6504)

The previous implementation mixed model size and VRAM checks, leading to inaccurate status reporting (e.g., false RED results).
- Simplified import statement for `readGgufMetadata`.
- Fixed RAM/VRAM comparison by removing unnecessary parentheses.
- Replaced ambiguous `modelSize > usableTotalMemory` check with a clear `totalRequired > usableTotalMemory` hard‑limit condition.
- Refactored the status logic to explicitly handle the CPU‑GPU hybrid scenario, returning **YELLOW** when the total requirement fits combined memory but exceeds VRAM.
- Updated comments for better readability and maintenance.
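
A hedged sketch of the status logic that list describes (status names and inputs are inferred from the commit text; the real implementation is more involved):

```typescript
// Illustrative only: GREEN fits entirely in VRAM, YELLOW fits combined RAM+VRAM (hybrid),
// RED exceeds the usable total memory hard limit.
type MemoryStatus = 'GREEN' | 'YELLOW' | 'RED'

function checkMemorySuitability(
  totalRequired: number,    // model size + KV cache + overheads
  usableVram: number,
  usableTotalMemory: number // usable system RAM + VRAM combined
): MemoryStatus {
  if (totalRequired > usableTotalMemory) return 'RED' // hard limit
  if (totalRequired <= usableVram) return 'GREEN'     // fully GPU-resident
  return 'YELLOW'                                     // CPU-GPU hybrid scenario
}
```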

* fix: thread rerender issue

* chore: clean up console log

* chore: uncomment irrelevant fix

* fix: linter

* chore: remove duplicated block

* fix: tests

* Merge pull request #6469 from menloresearch/fix/deeplink-not-work-on-windows

fix: deeplink issue on Windows

* fix: reduce unnecessary rerender due to current thread retrieval

* fix: reduce app layout rerender due to router state update

* fix: avoid the entire app layout re-render on route change

* clean: unused import

* Merge pull request #6514 from menloresearch/feat/web-gtag

feat: Add GA Measurement and change keyboard bindings on web

* chore: update build tauri commands

* chore: remove unused task

* fix: should not rerender thread message components when typing

* fix: re-render issue

* direct tokenspeed access

* chore: sync latest

* feat: Add Jan API server Swagger UI (#6502)

* feat: Add Jan API server Swagger UI

- Serve OpenAPI spec (`static/openapi.json`) directly from the proxy server.
- Implement Swagger UI assets (`swagger-ui.css`, `swagger-ui-bundle.js`, `favicon.ico`) and a simple HTML wrapper under `/docs`.
- Extend the proxy whitelist to include Swagger UI routes.
- Add routing logic for `/openapi.json`, `/docs`, and Swagger UI static files.
- Update whitelisted paths and integrate CORS handling for the new endpoints.
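
Sketched here in TypeScript purely for illustration (the proxy's real handlers and signatures are not shown in this PR excerpt), the whitelisting idea routes the spec and Swagger UI assets to static handlers and forwards everything else; note that a later commit in this PR moves the UI from `/docs` to the root path:

```typescript
// Sketch: serve whitelisted Swagger UI routes statically instead of proxying them upstream.
type Handler = (path: string) => Promise<string>

async function route(
  path: string,
  serveStatic: Handler, // hypothetical static-asset handler
  proxy: Handler        // hypothetical upstream proxy handler
): Promise<string> {
  const swaggerPaths = new Set([
    '/openapi.json',
    '/docs',
    '/docs/swagger-ui.css',
    '/docs/swagger-ui-bundle.js',
    '/docs/favicon.ico',
  ])
  return swaggerPaths.has(path) ? serveStatic(path) : proxy(path)
}
```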

* feat: serve Swagger UI at root path

The Swagger UI endpoint previously lived under `/docs`. The route handling and
exclusion list have been updated so the UI is now served directly at `/`.
This simplifies access, aligns with the expected root URL in the Tauri
frontend, and removes the now‑unused `/docs` path handling.

* feat: add model loading state and translations for local API server

Implemented a loading indicator for model startup, updated the start/stop button to reflect model loading and server starting states, and disabled interactions while pending. Added new translation keys (`loadingModel`, `startingServer`) across all supported locales (en, de, id, pl, vn, zh-CN, zh-TW) and integrated them into the UI. Included a small delay after model start to ensure backend state consistency. This improves user feedback and prevents race conditions during server initialization.

* fix: tests

* fix: linter

* fix: build

* docs: update changelog for v0.6.10

* fix(number-input): preserve '0.0x' format when typing (#6520)

* docs: update url for gifs and videos

* chore: update url for jan-v1 docs

* fix: Typo in openapi JSON (#6528)

* enhancement: toaster delete mcp server

* Update 2025-09-18-auto-optimize-vision-imports.mdx

* Merge pull request #6475 from menloresearch/feat/bump-tokenjs

feat: fix remote provider vision capability

* fix: prevent consecutive messages with same role (#6544)

* fix: prevent consecutive messages with same role

* fix: tests

* fix: first message should not be assistant

* fix: tests

* feat: Prompt progress when streaming (#6503)

* feat: Prompt progress when streaming

- BE changes:
    - Add a `return_progress` flag to `chatCompletionRequest` and a corresponding `prompt_progress` payload in `chatCompletionChunk`. Introduce `chatCompletionPromptProgress` interface to capture cache, processed, time, and total token counts.
    - Update the Llamacpp extension to always request progress data when streaming, enabling UI components to display real‑time generation progress and leverage llama.cpp’s built‑in progress reporting.
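
Going only by the names in that list, the new types could be sketched as follows (field meanings are inferred from "cache, processed, time, and total token counts"; everything beyond the named identifiers is an assumption):

```typescript
// Sketch of the streaming progress additions; other request/chunk fields omitted.
interface chatCompletionPromptProgress {
  cache: number     // prompt tokens reused from the cache
  processed: number // prompt tokens processed so far
  time: number      // elapsed processing time
  total: number     // total prompt tokens to process
}

interface chatCompletionRequest {
  stream?: boolean
  // When true, the backend includes prompt_progress payloads in streamed chunks.
  return_progress?: boolean
}

interface chatCompletionChunk {
  prompt_progress?: chatCompletionPromptProgress
}
```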

* Make return_progress optional

* chore: update ui prompt progress before streaming content

* chore: remove log

* chore: remove progress when percentage >= 100

* chore: set timeout prompt progress

* chore: move prompt progress outside streaming content

* fix: tests

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Louis <louis@jan.ai>

* chore: add ci for web stag (#6550)

* feat: add getTokensCount method to compute token usage (#6467)

* feat: add getTokensCount method to compute token usage

Implemented a new async `getTokensCount` function in the LLaMA.cpp extension.
The method validates the model session, checks process health, applies the request template, and tokenizes the resulting prompt to return the token count. Includes detailed error handling for crashed models and API failures, enabling callers to assess token usage before sending completions.
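
A stripped-down sketch of the flow that paragraph describes (the session object and its helpers are hypothetical stand-ins, not the extension's real API):

```typescript
// Illustrative only: validate the session, apply the chat template, tokenize, count.
interface ChatMessage { role: string; content: string }

interface ModelSession {
  isAlive(): Promise<boolean>
  applyTemplate(msgs: ChatMessage[]): Promise<string>
  tokenize(prompt: string): Promise<number[]>
}

async function getTokensCount(session: ModelSession, messages: ChatMessage[]): Promise<number> {
  // Fail fast if the model process has crashed.
  if (!(await session.isAlive())) {
    throw new Error('Model session is not running')
  }
  // Apply the request template, then tokenize the resulting prompt.
  const prompt = await session.applyTemplate(messages)
  const tokens = await session.tokenize(prompt)
  return tokens.length
}
```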

* Fix: typos

* chore: update ui token usage

* chore: remove unused code

* feat: add image token handling for multimodal LlamaCPP models

Implemented support for counting image tokens when using vision-enabled models:
- Extended `SessionInfo` with optional `mmprojPath` to store the multimodal project file.
- Propagated `mmproj_path` from the Tauri plugin into the session info.
- Added import of `chatCompletionRequestMessage` and enhanced token calculation logic in the LlamaCPP extension:
- Detects image content in messages.
- Reads GGUF metadata from `mmprojPath` to compute accurate image token counts.
- Provides a fallback estimation if metadata reading fails.
- Returns the sum of text and image tokens.
- Introduced helper methods `calculateImageTokens` and `estimateImageTokensFallback`.
- Minor clean‑ups such as comment capitalization and debug logging.
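
The image-token accounting in that list reduces to something like the sketch below (only the mmproj-derived per-image count and the fallback helper's name come from the commit text; the constant and the exact arithmetic are illustrative assumptions, and a follow-up commit refines the per-image figure using `clip.vision.projection_dim`):

```typescript
// Sketch only: total tokens = text tokens + image tokens for vision-enabled models.
function countTokensWithImages(
  textTokens: number,
  imageCount: number,
  tokensPerImage?: number // derived from the mmproj GGUF metadata when available
): number {
  if (imageCount === 0) return textTokens
  // Fall back to a rough estimate when the mmproj metadata cannot be read.
  const perImage = tokensPerImage ?? estimateImageTokensFallback()
  return textTokens + imageCount * perImage
}

// Hypothetical fallback; the constant is illustrative, not the extension's actual value.
function estimateImageTokensFallback(): number {
  return 576
}
```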

* chore: update FE send params message include content type image_url

* fix mmproj path from session info and num tokens calculation

* fix: Correct image token estimation calculation in llamacpp extension

This commit addresses an inaccurate token count for images in the llama.cpp extension.

The previous logic incorrectly calculated the token count based on image patch size and dimensions. This has been replaced with a more precise method that uses the clip.vision.projection_dim value from the model metadata.

Additionally, unnecessary debug logging was removed, and a new log was added to show the mmproj metadata for improved visibility.

* fix per image calc

* fix: crash due to force unwrap

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Louis <louis@jan.ai>

* fix: custom fetch for all providers (#6538)

* fix: custom fetch for all providers

* fix: run in development should use built-in fetch

* add full-width model names (#6350)

* fix: prevent relocation to root directories (#6547)

* fix: prevent relocation to root directories

* Update web-app/src/locales/zh-TW/settings.json

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* feat: web remote conversation (#6554)

* feat: implement conversation endpoint

* use conversation aware endpoint

* fetch message correctly

* preserve first message

* fix logout

* fix broadcast issue locally + auth not refreshing profile on other tabs + clean up and sync messages

* add is dev tag

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Akarshan Biswas <akarshan@menlo.ai>
Co-authored-by: Minh141120 <minh.itptit@gmail.com>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Nguyen Ngoc Minh <91668012+Minh141120@users.noreply.github.com>
Co-authored-by: Louis <louis@jan.ai>
Co-authored-by: Bui Quang Huy <34532913+LazyYuuki@users.noreply.github.com>
Co-authored-by: Roushan Singh <github.rtron18@gmail.com>
Co-authored-by: hiento09 <136591877+hiento09@users.noreply.github.com>
Co-authored-by: Alexey Haidamaka <gdmkaa@gmail.com>
2025-09-23 15:13:15 +07:00
hiento09
14768a6ed6 Merge branch 'dev-web' into stag-web 2025-09-23 02:01:43 +07:00
Dinh Long Nguyen
664f304631
Merge pull request #6507 from menloresearch/dev
Sync dev with dev web (google auth)
2025-09-18 11:47:56 +07:00
Dinh Long Nguyen
a8df33c0dc
Merge pull request #6485 from menloresearch/dev-web
Deploy web to production
2025-09-16 19:30:00 +07:00
Dinh Long Nguyen
5a481b5022
Merge pull request #6483 from menloresearch/fix/dockerfile-error
update docker file for web build
2025-09-16 19:21:52 +07:00
Dinh Long Nguyen
2b6f581f9a update docker file for web build 2025-09-16 19:19:23 +07:00
Dinh Long Nguyen
e88b8baf19
Merge pull request #6470 from menloresearch/dev 2025-09-16 00:01:55 +07:00
Dinh Long Nguyen
8894d72e6b
Merge pull request #6435 from menloresearch/fix/api-base
update base url
2025-09-12 15:02:52 +07:00
dinhlongviolin1
b36fb2dd73 update base url 2025-09-12 00:59:20 -07:00
Dinh Long Nguyen
9a0c16a126
Merge pull request #6434 from menloresearch/dev-web
Merge dev-web branch into prod-web
2025-09-12 14:49:35 +07:00
Dinh Long Nguyen
4ef21545a4
Sync dev web with dev (#6432)
* fix: Polish translation (#6421)

* ci: remove paths triggered for jan server

* ci: fix typo in branch name for jan web

---------

Co-authored-by: Piotr Orzechowski <piotr@orzechowski.tech>
Co-authored-by: Minh141120 <minh.itptit@gmail.com>
Co-authored-by: Nguyen Ngoc Minh <91668012+Minh141120@users.noreply.github.com>
2025-09-12 14:25:11 +07:00
Dinh Long Nguyen
4368eb2893
add internal web version tracker (#6429) 2025-09-12 13:07:12 +07:00
131 changed files with 235 additions and 3056 deletions

View File

@ -1,5 +1,5 @@
blank_issues_enabled: true
contact_links:
- name: Jan Discussions
url: https://github.com/orgs/janhq/discussions/categories/q-a
url: https://github.com/orgs/menloresearch/discussions/categories/q-a
about: Get help, discuss features & roadmap, and share your projects

View File

@ -168,62 +168,62 @@ jobs:
AWS_DEFAULT_REGION: ${{ secrets.DELTA_AWS_REGION }}
AWS_EC2_METADATA_DISABLED: 'true'
# noti-discord-nightly-and-update-url-readme:
# needs:
# [
# build-macos,
# build-windows-x64,
# build-linux-x64,
# get-update-version,
# set-public-provider,
# sync-temp-to-latest,
# ]
# secrets: inherit
# if: github.event_name == 'schedule'
# uses: ./.github/workflows/template-noti-discord-and-update-url-readme.yml
# with:
# ref: refs/heads/dev
# build_reason: Nightly
# push_to_branch: dev
# new_version: ${{ needs.get-update-version.outputs.new_version }}
noti-discord-nightly-and-update-url-readme:
needs:
[
build-macos,
build-windows-x64,
build-linux-x64,
get-update-version,
set-public-provider,
sync-temp-to-latest,
]
secrets: inherit
if: github.event_name == 'schedule'
uses: ./.github/workflows/template-noti-discord-and-update-url-readme.yml
with:
ref: refs/heads/dev
build_reason: Nightly
push_to_branch: dev
new_version: ${{ needs.get-update-version.outputs.new_version }}
# noti-discord-pre-release-and-update-url-readme:
# needs:
# [
# build-macos,
# build-windows-x64,
# build-linux-x64,
# get-update-version,
# set-public-provider,
# sync-temp-to-latest,
# ]
# secrets: inherit
# if: github.event_name == 'push'
# uses: ./.github/workflows/template-noti-discord-and-update-url-readme.yml
# with:
# ref: refs/heads/dev
# build_reason: Pre-release
# push_to_branch: dev
# new_version: ${{ needs.get-update-version.outputs.new_version }}
noti-discord-pre-release-and-update-url-readme:
needs:
[
build-macos,
build-windows-x64,
build-linux-x64,
get-update-version,
set-public-provider,
sync-temp-to-latest,
]
secrets: inherit
if: github.event_name == 'push'
uses: ./.github/workflows/template-noti-discord-and-update-url-readme.yml
with:
ref: refs/heads/dev
build_reason: Pre-release
push_to_branch: dev
new_version: ${{ needs.get-update-version.outputs.new_version }}
# noti-discord-manual-and-update-url-readme:
# needs:
# [
# build-macos,
# build-windows-x64,
# build-linux-x64,
# get-update-version,
# set-public-provider,
# sync-temp-to-latest,
# ]
# secrets: inherit
# if: github.event_name == 'workflow_dispatch' && github.event.inputs.public_provider == 'aws-s3'
# uses: ./.github/workflows/template-noti-discord-and-update-url-readme.yml
# with:
# ref: refs/heads/dev
# build_reason: Manual
# push_to_branch: dev
# new_version: ${{ needs.get-update-version.outputs.new_version }}
noti-discord-manual-and-update-url-readme:
needs:
[
build-macos,
build-windows-x64,
build-linux-x64,
get-update-version,
set-public-provider,
sync-temp-to-latest,
]
secrets: inherit
if: github.event_name == 'workflow_dispatch' && github.event.inputs.public_provider == 'aws-s3'
uses: ./.github/workflows/template-noti-discord-and-update-url-readme.yml
with:
ref: refs/heads/dev
build_reason: Manual
push_to_branch: dev
new_version: ${{ needs.get-update-version.outputs.new_version }}
comment-pr-build-url:
needs:

View File

@ -82,11 +82,11 @@ jobs:
VERSION=${{ needs.get-update-version.outputs.new_version }}
PUB_DATE=$(date -u +"%Y-%m-%dT%H:%M:%S.%3NZ")
LINUX_SIGNATURE="${{ needs.build-linux-x64.outputs.APPIMAGE_SIG }}"
LINUX_URL="https://github.com/janhq/jan/releases/download/v${{ needs.get-update-version.outputs.new_version }}/${{ needs.build-linux-x64.outputs.APPIMAGE_FILE_NAME }}"
LINUX_URL="https://github.com/menloresearch/jan/releases/download/v${{ needs.get-update-version.outputs.new_version }}/${{ needs.build-linux-x64.outputs.APPIMAGE_FILE_NAME }}"
WINDOWS_SIGNATURE="${{ needs.build-windows-x64.outputs.WIN_SIG }}"
WINDOWS_URL="https://github.com/janhq/jan/releases/download/v${{ needs.get-update-version.outputs.new_version }}/${{ needs.build-windows-x64.outputs.FILE_NAME }}"
WINDOWS_URL="https://github.com/menloresearch/jan/releases/download/v${{ needs.get-update-version.outputs.new_version }}/${{ needs.build-windows-x64.outputs.FILE_NAME }}"
DARWIN_SIGNATURE="${{ needs.build-macos.outputs.MAC_UNIVERSAL_SIG }}"
DARWIN_URL="https://github.com/janhq/jan/releases/download/v${{ needs.get-update-version.outputs.new_version }}/${{ needs.build-macos.outputs.TAR_NAME }}"
DARWIN_URL="https://github.com/menloresearch/jan/releases/download/v${{ needs.get-update-version.outputs.new_version }}/${{ needs.build-macos.outputs.TAR_NAME }}"
jq --arg version "$VERSION" \
--arg pub_date "$PUB_DATE" \

View File

@ -29,7 +29,7 @@ jobs:
local max_retries=3
local tag
while [ $retries -lt $max_retries ]; do
tag=$(curl -s https://api.github.com/repos/janhq/jan/releases/latest | jq -r .tag_name)
tag=$(curl -s https://api.github.com/repos/menloresearch/jan/releases/latest | jq -r .tag_name)
if [ -n "$tag" ] && [ "$tag" != "null" ]; then
echo $tag
return

View File

@ -50,6 +50,6 @@ jobs:
- macOS Universal: https://delta.jan.ai/nightly/Jan-nightly_{{ VERSION }}_universal.dmg
- Linux Deb: https://delta.jan.ai/nightly/Jan-nightly_{{ VERSION }}_amd64.deb
- Linux AppImage: https://delta.jan.ai/nightly/Jan-nightly_{{ VERSION }}_amd64.AppImage
- Github action run: https://github.com/janhq/jan/actions/runs/{{ GITHUB_RUN_ID }}
- Github action run: https://github.com/menloresearch/jan/actions/runs/{{ GITHUB_RUN_ID }}
env:
DISCORD_WEBHOOK: ${{ secrets.DISCORD_WEBHOOK }}

View File

@ -143,7 +143,7 @@ jan/
**Option 1: The Easy Way (Make)**
```bash
git clone https://github.com/janhq/jan
git clone https://github.com/menloresearch/jan
cd jan
make dev
```
@ -152,8 +152,8 @@ make dev
### Reporting Bugs
- **Ensure the bug was not already reported** by searching on GitHub under [Issues](https://github.com/janhq/jan/issues)
- If you're unable to find an open issue addressing the problem, [open a new one](https://github.com/janhq/jan/issues/new)
- **Ensure the bug was not already reported** by searching on GitHub under [Issues](https://github.com/menloresearch/jan/issues)
- If you're unable to find an open issue addressing the problem, [open a new one](https://github.com/menloresearch/jan/issues/new)
- Include your system specs and error logs - it helps a ton
### Suggesting Enhancements

View File

@ -28,7 +28,6 @@ COPY ./Makefile ./Makefile
COPY ./.* /
COPY ./package.json ./package.json
COPY ./yarn.lock ./yarn.lock
COPY ./pre-install ./pre-install
COPY ./core ./core
# Build web application

View File

@ -4,10 +4,10 @@
<p align="center">
<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
<img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/janhq/jan"/>
<img alt="Github Last Commit" src="https://img.shields.io/github/last-commit/janhq/jan"/>
<img alt="Github Contributors" src="https://img.shields.io/github/contributors/janhq/jan"/>
<img alt="GitHub closed issues" src="https://img.shields.io/github/issues-closed/janhq/jan"/>
<img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/menloresearch/jan"/>
<img alt="Github Last Commit" src="https://img.shields.io/github/last-commit/menloresearch/jan"/>
<img alt="Github Contributors" src="https://img.shields.io/github/contributors/menloresearch/jan"/>
<img alt="GitHub closed issues" src="https://img.shields.io/github/issues-closed/menloresearch/jan"/>
<img alt="Discord" src="https://img.shields.io/discord/1107178041848909847?label=discord"/>
</p>
@ -15,7 +15,7 @@
<a href="https://www.jan.ai/docs/desktop">Getting Started</a>
- <a href="https://discord.gg/Exe46xPMbK">Community</a>
- <a href="https://jan.ai/changelog">Changelog</a>
- <a href="https://github.com/janhq/jan/issues">Bug reports</a>
- <a href="https://github.com/menloresearch/jan/issues">Bug reports</a>
</p>
Jan is bringing the best of open-source AI in an easy-to-use product. Download and run LLMs with **full control** and **privacy**.
@ -48,7 +48,7 @@ The easiest way to get started is by downloading one of the following versions f
</table>
Download from [jan.ai](https://jan.ai/) or [GitHub Releases](https://github.com/janhq/jan/releases).
Download from [jan.ai](https://jan.ai/) or [GitHub Releases](https://github.com/menloresearch/jan/releases).
## Features
@ -73,7 +73,7 @@ For those who enjoy the scenic route:
### Run with Make
```bash
git clone https://github.com/janhq/jan
git clone https://github.com/menloresearch/jan
cd jan
make dev
```
@ -128,7 +128,7 @@ Contributions welcome. See [CONTRIBUTING.md](CONTRIBUTING.md) for the full spiel
## Contact
- **Bugs**: [GitHub Issues](https://github.com/janhq/jan/issues)
- **Bugs**: [GitHub Issues](https://github.com/menloresearch/jan/issues)
- **Business**: hello@jan.ai
- **Jobs**: hr@jan.ai
- **General Discussion**: [Discord](https://discord.gg/FTk2MvZwJH)

50
WEB_VERSION_TRACKER.md Normal file
View File

@ -0,0 +1,50 @@
# Jan Web Version Tracker
Internal tracker for web component changes and features.
## v0.0.13 (current)
**Release Date**: 2025-10-24
**Commit SHA**: 22645549cea48b1ae24b5b9dc70411fd3bfc9935
**Main Features**:
- Migrate auth to platform menlo
- Remove conv prefix
- Disable Project for web
- Model capabilities are fetched correctly from model catalog
## v0.0.12
**Release Date**: 2025-10-02
**Commit SHA**: df145d63a93bd27336b5b539ce0719fe9c7719e3
**Main Features**:
- Search button instead of tools
- Projects supported properly for local use
- Temporary chat mode
- Performance enhancement: prevent thread items over fetching on app start
- Fix Google Tag
## v0.0.11
**Release Date**: 2025-09-23
**Commit SHA**: 494db746f7dd1f51241cec80bbf550901a0115e5
**Main Features**:
- Google login support
- Remote conversation and message persistent
- UI improvements
- Multiple tab synchronization on browser
## v0.0.10
**Release Date**: 2025-09-11
**Commit SHA**: b5b6e1dc197378d06ccbf127f60e44779f1e44e5
**Main Features**:
- Chat interface with completion route support
- MCP (Model Context Protocol) integration
- Core web functionality for Jan AI
**Changes**:
- Initial web version release
- Basic chat completion API integration
- MCP server support for tool calling
- Web-optimized UI components

View File

@ -1,7 +1,7 @@
# Core dependencies
cua-computer[all]~=0.3.5
cua-agent[all]~=0.3.0
cua-agent @ git+https://github.com/janhq/cua.git@compute-agent-0.3.0-patch#subdirectory=libs/python/agent
cua-agent @ git+https://github.com/menloresearch/cua.git@compute-agent-0.3.0-patch#subdirectory=libs/python/agent
# ReportPortal integration
reportportal-client~=5.6.5

View File

@ -13,7 +13,7 @@ import * as core from '@janhq/core'
## Build an Extension
1. Download an extension template, for example, [https://github.com/janhq/extension-template](https://github.com/janhq/extension-template).
1. Download an extension template, for example, [https://github.com/menloresearch/extension-template](https://github.com/menloresearch/extension-template).
2. Update the source code:

View File

@ -18,7 +18,7 @@ We try to **keep routes consistent** to maintain SEO.
## How to Contribute
Refer to the [Contributing Guide](https://github.com/janhq/jan/blob/main/CONTRIBUTING.md) for more comprehensive information on how to contribute to the Jan project.
Refer to the [Contributing Guide](https://github.com/menloresearch/jan/blob/main/CONTRIBUTING.md) for more comprehensive information on how to contribute to the Jan project.
### Pre-requisites and Installation

View File

@ -1581,7 +1581,7 @@
},
"cover": {
"type": "string",
"example": "https://raw.githubusercontent.com/janhq/jan/main/models/trinity-v1.2-7b/cover.png"
"example": "https://raw.githubusercontent.com/menloresearch/jan/main/models/trinity-v1.2-7b/cover.png"
},
"engine": {
"type": "string",

View File

@ -27,7 +27,7 @@ export const APIReference = () => {
<ApiReferenceReact
configuration={{
spec: {
url: 'https://raw.githubusercontent.com/janhq/docs/main/public/openapi/jan.json',
url: 'https://raw.githubusercontent.com/menloresearch/docs/main/public/openapi/jan.json',
},
theme: 'alternate',
hideModels: true,

View File

@ -57,7 +57,7 @@ const Changelog = () => {
<p className="text-base mt-2 leading-relaxed">
Latest release updates from the Jan team. Check out our&nbsp;
<a
href="https://github.com/orgs/janhq/projects/30"
href="https://github.com/orgs/menloresearch/projects/30"
className="text-blue-600 dark:text-blue-400 cursor-pointer"
>
Roadmap
@ -150,7 +150,7 @@ const Changelog = () => {
<div className="text-center">
<Link
href="https://github.com/janhq/jan/releases"
href="https://github.com/menloresearch/jan/releases"
target="_blank"
className="dark:nx-bg-neutral-900 dark:text-white bg-black text-white hover:text-white justify-center dark:border dark:border-neutral-800 flex-shrink-0 px-4 py-3 rounded-xl inline-flex items-center"
>

View File

@ -72,7 +72,7 @@ export default function CardDownload({ lastRelease }: Props) {
return {
...system,
href: `https://github.com/janhq/jan/releases/download/${lastRelease.tag_name}/${downloadUrl}`,
href: `https://github.com/menloresearch/jan/releases/download/${lastRelease.tag_name}/${downloadUrl}`,
size: asset ? formatFileSize(asset.size) : undefined,
}
})

View File

@ -139,7 +139,7 @@ const DropdownDownload = ({ lastRelease }: Props) => {
return {
...system,
href: `https://github.com/janhq/jan/releases/download/${lastRelease.tag_name}/${downloadUrl}`,
href: `https://github.com/menloresearch/jan/releases/download/${lastRelease.tag_name}/${downloadUrl}`,
size: asset ? formatFileSize(asset.size) : undefined,
}
})

View File

@ -23,7 +23,7 @@ const BuiltWithLove = () => {
</div>
<div className="flex flex-col lg:flex-row gap-8 mt-8 items-center justify-center">
<a
href="https://github.com/janhq/jan"
href="https://github.com/menloresearch/jan"
target="_blank"
className="dark:bg-white bg-black inline-flex w-56 px-4 py-3 rounded-xl cursor-pointer justify-center items-start space-x-4 "
>

View File

@ -44,7 +44,7 @@ const Hero = () => {
<div className="mt-10 text-center">
<div>
<Link
href="https://github.com/janhq/jan/releases"
href="https://github.com/menloresearch/jan/releases"
target="_blank"
className="hidden lg:inline-block"
>

View File

@ -95,7 +95,7 @@ const Home = () => {
<div className="container mx-auto relative z-10">
<div className="flex justify-center items-center mt-14 lg:mt-20 px-4">
<a
href={`https://github.com/janhq/jan/releases/tag/${lastVersion}`}
href={`https://github.com/menloresearch/jan/releases/tag/${lastVersion}`}
target="_blank"
rel="noopener noreferrer"
className="bg-black/40 px-3 lg:px-4 rounded-full h-10 inline-flex items-center max-w-full animate-fade-in delay-100"
@ -270,7 +270,7 @@ const Home = () => {
data-delay="600"
>
<a
href="https://github.com/janhq/jan"
href="https://github.com/menloresearch/jan"
target="_blank"
rel="noopener noreferrer"
>
@ -387,7 +387,7 @@ const Home = () => {
</div>
<a
className="hidden md:block"
href="https://github.com/janhq/jan"
href="https://github.com/menloresearch/jan"
target="_blank"
rel="noopener noreferrer"
>
@ -413,7 +413,7 @@ const Home = () => {
</p>
<a
className="md:hidden mt-4 block w-full"
href="https://github.com/janhq/jan"
href="https://github.com/menloresearch/jan"
target="_blank"
rel="noopener noreferrer"
>

View File

@ -95,7 +95,7 @@ const Navbar = ({ noScroll }: { noScroll?: boolean }) => {
})}
<li>
<a
href="https://github.com/janhq/jan/releases/latest"
href="https://github.com/menloresearch/jan/releases/latest"
target="_blank"
rel="noopener noreferrer"
>
@ -141,7 +141,7 @@ const Navbar = ({ noScroll }: { noScroll?: boolean }) => {
<FaLinkedinIn className="size-5" />
</a>
<a
href="https://github.com/janhq/jan"
href="https://github.com/menloresearch/jan"
target="_blank"
rel="noopener noreferrer"
className="rounded-lg flex items-center justify-center"
@ -156,7 +156,7 @@ const Navbar = ({ noScroll }: { noScroll?: boolean }) => {
{/* Mobile Download Button and Hamburger */}
<div className="lg:hidden flex items-center gap-3">
<a
href="https://github.com/janhq/jan/releases/latest"
href="https://github.com/menloresearch/jan/releases/latest"
target="_blank"
rel="noopener noreferrer"
>
@ -278,7 +278,7 @@ const Navbar = ({ noScroll }: { noScroll?: boolean }) => {
<FaLinkedinIn className="size-5" />
</a>
<a
href="https://github.com/janhq/jan"
href="https://github.com/menloresearch/jan"
target="_blank"
rel="noopener noreferrer"
className="text-black rounded-lg flex items-center justify-center"
@ -296,7 +296,7 @@ const Navbar = ({ noScroll }: { noScroll?: boolean }) => {
asChild
>
<a
href="https://github.com/janhq/jan/releases/latest"
href="https://github.com/menloresearch/jan/releases/latest"
target="_blank"
rel="noopener noreferrer"
>

View File

@ -120,7 +120,7 @@ export function DropdownButton({
return {
...option,
href: `https://github.com/janhq/jan/releases/download/${lastRelease.tag_name}/${fileName}`,
href: `https://github.com/menloresearch/jan/releases/download/${lastRelease.tag_name}/${fileName}`,
size: asset ? formatFileSize(asset.size) : 'N/A',
}
})

View File

@ -18,7 +18,7 @@ description: Development setup, workflow, and contribution guidelines for Jan Se
1. **Clone Repository**
```bash
git clone https://github.com/janhq/jan-server
git clone https://github.com/menloresearch/jan-server
cd jan-server
```

View File

@ -19,7 +19,7 @@ Jan Server currently supports minikube for local development. Production Kuberne
1. **Clone the repository**
```bash
git clone https://github.com/janhq/jan-server
git clone https://github.com/menloresearch/jan-server
cd jan-server
```

View File

@ -24,4 +24,4 @@ Fixes 💫
Update your product or download the latest: https://jan.ai
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.5).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.5).

View File

@ -24,4 +24,4 @@ Jan now supports Mistral's new model Codestral. Thanks [Bartowski](https://huggi
More GGUF models can run in Jan - we rebased to llama.cpp b3012. Big thanks to [ggerganov](https://github.com/ggerganov)
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.0).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.0).

View File

@ -28,4 +28,4 @@ Jan now understands LaTeX, allowing users to process and understand complex math
![Latex](https://catalog.jan.ai/docs/jan_update_latex.gif)
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.4.12).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.4.12).

View File

@ -28,4 +28,4 @@ Users can now connect to OpenAI's new model GPT-4o.
![GPT4o](https://catalog.jan.ai/docs/jan_v0_4_13_openai_gpt4o.gif)
For more details, see the [GitHub release notes.](https://github.com/janhq/jan/releases/tag/v0.4.13)
For more details, see the [GitHub release notes.](https://github.com/menloresearch/jan/releases/tag/v0.4.13)

View File

@ -16,4 +16,4 @@ More GGUF models can run in Jan - we rebased to llama.cpp b2961.
Huge shoutouts to [ggerganov](https://github.com/ggerganov) and contributors for llama.cpp, and [Bartowski](https://huggingface.co/bartowski) for GGUF models.
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.4.14).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.4.14).

View File

@ -26,4 +26,4 @@ We've updated to llama.cpp b3088 for better performance - thanks to [GG](https:/
- Reduced chat font weight (back to normal!)
- Restored the maximize button
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.1).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.1).

View File

@ -32,4 +32,4 @@ We've restored the tooltip hover functionality, which makes it easier to access
The right-click options for thread settings are now fully operational again. You can now manage your threads with this fix.
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.2).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.2).

View File

@ -23,4 +23,4 @@ We've been working on stability issues over the last few weeks. Jan is now more
- Fixed the GPU memory utilization bar
- Some UX and copy improvements
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.3).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.3).

View File

@ -32,4 +32,4 @@ Switching between threads used to reset your instruction settings. That's fixed
### Minor UI Tweaks & Bug Fixes
We've also resolved issues with the input slider on the right panel and tackled several smaller bugs to keep everything running smoothly.
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.4).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.4).

View File

@ -23,4 +23,4 @@ Fixes 💫
Update your product or download the latest: https://jan.ai
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.7).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.7).

View File

@ -22,4 +22,4 @@ Jan v0.5.9 is here: fixing what needed fixing
Update your product or download the latest: https://jan.ai
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.9).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.9).

View File

@ -22,4 +22,4 @@ and various UI/UX enhancements 💫
Update your product or download the latest: https://jan.ai
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.8).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.8).

View File

@ -19,4 +19,4 @@ Jan v0.5.10 is live: Jan is faster, smoother, and more reliable.
Update your product or download the latest: https://jan.ai
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.10).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.10).

View File

@ -23,4 +23,4 @@ Jan v0.5.11 is here - critical issues fixed, Mac installation updated.
Update your product or download the latest: https://jan.ai
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.11).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.11).

View File

@ -25,4 +25,4 @@ Jan v0.5.11 is here - critical issues fixed, Mac installation updated.
Update your product or download the latest: https://jan.ai
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.12).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.12).

View File

@ -20,4 +20,4 @@ import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
Update your product or download the latest: https://jan.ai
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.13).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.13).

View File

@ -33,4 +33,4 @@ Llama
Update your Jan or [download the latest](https://jan.ai/).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.14).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.14).

View File

@ -25,4 +25,4 @@ import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
Update your Jan or [download the latest](https://jan.ai/).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.15).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.15).

View File

@ -26,4 +26,4 @@ import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
Update your Jan or [download the latest](https://jan.ai/).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.16).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.16).

View File

@ -20,4 +20,4 @@ import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
Update your Jan or [download the latest](https://jan.ai/).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.17).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.17).

View File

@ -18,4 +18,4 @@ import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
Update your Jan or [download the latest](https://jan.ai/).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.6.1).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.6.1).

View File

@ -18,4 +18,4 @@ import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
Update your Jan or [download the latest](https://jan.ai/).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.6.3).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.6.3).

View File

@ -23,4 +23,4 @@ new MCP examples.
Update your Jan or [download the latest](https://jan.ai/).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.6.5).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.6.5).

View File

@ -116,4 +116,4 @@ integrations. Stay tuned!
Update your Jan or [download the latest](https://jan.ai/).
For the complete list of changes, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.6.6).
For the complete list of changes, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.6.6).

View File

@ -89,4 +89,4 @@ We're continuing to optimize performance for large models, expand MCP integratio
Update your Jan or [download the latest](https://jan.ai/).
For the complete list of changes, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.6.7).
For the complete list of changes, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.6.7).

View File

@ -74,4 +74,4 @@ v0.6.8 focuses on stability and real workflows: major llama.cpp hardening, two n
Update your Jan or [download the latest](https://jan.ai/).
For the complete list of changes, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.6.8).
For the complete list of changes, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.6.8).

View File

@ -135,5 +135,5 @@ Min-p: 0.0
## 🤝 Community & Support
- **Discussions**: [HuggingFace Community](https://huggingface.co/Menlo/Jan-nano-128k/discussions)
- **Issues**: [GitHub Repository](https://github.com/janhq/deep-research/issues)
- **Issues**: [GitHub Repository](https://github.com/menloresearch/deep-research/issues)
- **Discord**: Join our research community for tips and best practices

View File

@ -9,7 +9,7 @@ Jan Server is a comprehensive self-hosted AI server platform that provides OpenA
Jan Server is a Kubernetes-native platform consisting of multiple microservices that work together to provide a complete AI infrastructure solution. It offers:
![System Architecture Diagram](https://raw.githubusercontent.com/janhq/jan-server/main/docs/Architect.png)
![System Architecture Diagram](https://raw.githubusercontent.com/menloresearch/jan-server/main/docs/Architect.png)
### Key Features
- **OpenAI-Compatible API**: Full compatibility with OpenAI's chat completion API

View File

@ -3,7 +3,7 @@ title: Development
description: Development setup, workflow, and contribution guidelines for Jan Server.
---
## Core Domain Models
![Domain Models Diagram](https://github.com/janhq/jan-server/raw/main/apps/jan-api-gateway/docs/System_Design.png)
![Domain Models Diagram](https://github.com/menloresearch/jan-server/raw/main/apps/jan-api-gateway/docs/System_Design.png)
## Development Setup
### Prerequisites
@ -42,7 +42,7 @@ description: Development setup, workflow, and contribution guidelines for Jan Se
1. **Clone Repository**
```bash
git clone https://github.com/janhq/jan-server
git clone https://github.com/menloresearch/jan-server
cd jan-server
```

View File

@ -40,7 +40,7 @@ Jan Server is a Kubernetes-native platform consisting of multiple microservices
- **Monitoring & Profiling**: Built-in performance monitoring and health checks
## System Architecture
![System Architecture Diagram](https://raw.githubusercontent.com/janhq/jan-server/main/docs/Architect.png)
![System Architecture Diagram](https://raw.githubusercontent.com/menloresearch/jan-server/main/docs/Architect.png)
## Services
### Jan API Gateway

View File

@ -19,7 +19,7 @@ keywords:
import Download from "@/components/Download"
export const getStaticProps = async() => {
const resRelease = await fetch('https://api.github.com/repos/janhq/jan/releases/latest')
const resRelease = await fetch('https://api.github.com/repos/menloresearch/jan/releases/latest')
const release = await resRelease.json()
return {

View File

@ -19,9 +19,9 @@ keywords:
import Home from "@/components/Home"
export const getStaticProps = async() => {
const resReleaseLatest = await fetch('https://api.github.com/repos/janhq/jan/releases/latest')
const resRelease = await fetch('https://api.github.com/repos/janhq/jan/releases?per_page=500')
const resRepo = await fetch('https://api.github.com/repos/janhq/jan')
const resReleaseLatest = await fetch('https://api.github.com/repos/menloresearch/jan/releases/latest')
const resRelease = await fetch('https://api.github.com/repos/menloresearch/jan/releases?per_page=500')
const resRepo = await fetch('https://api.github.com/repos/menloresearch/jan')
const repo = await resRepo.json()
const latestRelease = await resReleaseLatest.json()
const release = await resRelease.json()

View File

@ -14,12 +14,12 @@ import CTABlog from '@/components/Blog/CTA'
Jan now supports [NVIDIA TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) in addition to [llama.cpp](https://github.com/ggerganov/llama.cpp), making Jan multi-engine and ultra-fast for users with Nvidia GPUs.
We've been excited for TensorRT-LLM for a while, and [had a lot of fun implementing it](https://github.com/janhq/nitro-tensorrt-llm). As part of the process, we've run some benchmarks, to see how TensorRT-LLM fares on consumer hardware (e.g. [4090s](https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/), [3090s](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/)) we commonly see in the [Jan's hardware community](https://discord.com/channels/1107178041848909847/1201834752206974996).
We've been excited for TensorRT-LLM for a while, and [had a lot of fun implementing it](https://github.com/menloresearch/nitro-tensorrt-llm). As part of the process, we've run some benchmarks, to see how TensorRT-LLM fares on consumer hardware (e.g. [4090s](https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/), [3090s](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/)) we commonly see in the [Jan's hardware community](https://discord.com/channels/1107178041848909847/1201834752206974996).
<Callout type="info" >
**Give it a try!** Jan's TensorRT-LLM extension is available in Jan v0.4.9. We precompiled some TensorRT-LLM models for you to try: `Mistral 7b`, `TinyLlama-1.1b`, `TinyJensen-1.1b` 😂
Bugs or feedback? Let us know on [GitHub](https://github.com/janhq/jan) or via [Discord](https://discord.com/channels/1107178041848909847/1201832734704795688).
Bugs or feedback? Let us know on [GitHub](https://github.com/menloresearch/jan) or via [Discord](https://discord.com/channels/1107178041848909847/1201832734704795688).
</Callout>
<Callout type="info" >

View File

@ -70,34 +70,34 @@ brief survey of how other players approach deep research:
| Kimi | Interactive synthesis | 50-100 | 30-60+ | PDF, Interactive website | Free |
In our testing, we used the following prompt to assess the quality of the generated report by
the providers above. You can refer to the reports generated [here](https://github.com/janhq/prompt-experiments).
the providers above. You can refer to the reports generated [here](https://github.com/menloresearch/prompt-experiments).
```
Generate a comprehensive report about the state of AI in the past week. Include all
new model releases and notable architectural improvements from a variety of sources.
```
[Google's generated report](https://github.com/janhq/prompt-experiments/blob/main/Gemini%202.5%20Flash%20Report.pdf) was the most verbose, with a whopping 23 pages that reads
[Google's generated report](https://github.com/menloresearch/prompt-experiments/blob/main/Gemini%202.5%20Flash%20Report.pdf) was the most verbose, with a whopping 23 pages that reads
like a professional intelligence briefing. It opens with an executive summary,
systematically categorizes developments, and provides forward-looking strategic
insights—connecting OpenAI's open-weight release to broader democratization trends
and linking infrastructure investments to competitive positioning.
[OpenAI](https://github.com/janhq/prompt-experiments/blob/main/OpenAI%20Deep%20Research.pdf) produced the most citation-heavy output with 134 references throughout 10 pages
[OpenAI](https://github.com/menloresearch/prompt-experiments/blob/main/OpenAI%20Deep%20Research.pdf) produced the most citation-heavy output with 134 references throughout 10 pages
(albeit most of them being from the same source).
[Perplexity](https://github.com/janhq/prompt-experiments/blob/main/Perplexity%20Deep%20Research.pdf) delivered the most actionable 6-page report that maximizes information
[Perplexity](https://github.com/menloresearch/prompt-experiments/blob/main/Perplexity%20Deep%20Research.pdf) delivered the most actionable 6-page report that maximizes information
density while maintaining scannability. Despite being the shortest, it captures all
major developments with sufficient context for decision-making.
[Claude](https://github.com/janhq/prompt-experiments/blob/main/Claude%20Deep%20Research.pdf) produced a comprehensive analysis that interestingly ignored the time constraint,
[Claude](https://github.com/menloresearch/prompt-experiments/blob/main/Claude%20Deep%20Research.pdf) produced a comprehensive analysis that interestingly ignored the time constraint,
covering an 8-month period from January-August 2025 instead of the requested week (Jul 31-Aug
7th 2025). Rather than cataloging recent events, Claude traced the evolution of trends over months.
[Grok](https://github.com/janhq/prompt-experiments/blob/main/Grok%203%20Deep%20Research.pdf) produced a well-structured but relatively shallow 5-page academic-style report that
[Grok](https://github.com/menloresearch/prompt-experiments/blob/main/Grok%203%20Deep%20Research.pdf) produced a well-structured but relatively shallow 5-page academic-style report that
read more like an event catalog than strategic analysis.
[Kimi](https://github.com/janhq/prompt-experiments/blob/main/Kimi%20AI%20Deep%20Research.pdf) produced a comprehensive 13-page report with systematic organization covering industry developments, research breakthroughs, and policy changes, but notably lacks proper citations throughout most of the content despite claiming to use 50-100 sources.
[Kimi](https://github.com/menloresearch/prompt-experiments/blob/main/Kimi%20AI%20Deep%20Research.pdf) produced a comprehensive 13-page report with systematic organization covering industry developments, research breakthroughs, and policy changes, but notably lacks proper citations throughout most of the content despite claiming to use 50-100 sources.
### Understanding Search Strategies

View File

@ -13,7 +13,7 @@ import CTABlog from '@/components/Blog/CTA'
## Abstract
We present a straightforward approach to customizing small, open-source models using fine-tuning and RAG that outperforms GPT-3.5 for specialized use cases. With it, we achieved superior Q&A results of [technical documentation](https://nitro.jan.ai/docs) for a small codebase [codebase](https://github.com/janhq/nitro).
We present a straightforward approach to customizing small, open-source models using fine-tuning and RAG that outperforms GPT-3.5 for specialized use cases. With it, we achieved superior Q&A results of [technical documentation](https://nitro.jan.ai/docs) for a small codebase [codebase](https://github.com/menloresearch/nitro).
In short, (1) extending a general foundation model like [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1) with strong math and coding, and (2) training it over a high-quality, synthetic dataset generated from the intended corpus, and (3) adding RAG capabilities, can lead to significant accuracy improvements.
@ -93,11 +93,11 @@ This final model can be found [here on Huggingface](https://huggingface.co/jan-h
As an additional step, we also added [Retrieval Augmented Generation (RAG)](https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/) as an experiment parameter.
A simple RAG setup was done using **[Llamaindex](https://www.llamaindex.ai/)** and the **[bge-en-base-v1.5 embedding](https://huggingface.co/BAAI/bge-base-en-v1.5)** model for efficient documentation retrieval and question-answering. You can find the RAG implementation [here](https://github.com/janhq/open-foundry/blob/main/rag-is-not-enough/rag/nitro_rag.ipynb).
A simple RAG setup was done using **[Llamaindex](https://www.llamaindex.ai/)** and the **[bge-en-base-v1.5 embedding](https://huggingface.co/BAAI/bge-base-en-v1.5)** model for efficient documentation retrieval and question-answering. You can find the RAG implementation [here](https://github.com/menloresearch/open-foundry/blob/main/rag-is-not-enough/rag/nitro_rag.ipynb).
## Benchmarking the Results
We curated a new set of [50 multiple-choice questions](https://github.com/janhq/open-foundry/blob/main/rag-is-not-enough/rag/mcq_nitro.csv) (MCQ) based on the Nitro docs. The questions had varying levels of difficulty and had trick components that challenged the model's ability to discern misleading information.
We curated a new set of [50 multiple-choice questions](https://github.com/menloresearch/open-foundry/blob/main/rag-is-not-enough/rag/mcq_nitro.csv) (MCQ) based on the Nitro docs. The questions had varying levels of difficulty and had trick components that challenged the model's ability to discern misleading information.
![image](https://hackmd.io/_uploads/By9vaE1Ta.png)
@ -121,7 +121,7 @@ We conclude that this combination of model merging + finetuning + RAG yields pro
Anecdotally, we've had some success using this model in practice to onboard new team members to the Nitro codebase.
A full research report with more statistics can be found [here](https://github.com/janhq/open-foundry/blob/main/rag-is-not-enough/README.md).
A full research report with more statistics can be found [here](https://github.com/menloresearch/open-foundry/blob/main/rag-is-not-enough/README.md).
# References

View File

@ -203,7 +203,7 @@ When to choose ChatGPT Plus instead:
Ready to try gpt-oss?
- Download Jan: [https://jan.ai/](https://jan.ai/)
- View source code: [https://github.com/janhq/jan](https://github.com/janhq/jan)
- View source code: [https://github.com/menloresearch/jan](https://github.com/menloresearch/jan)
- Need help? Check our [local AI guide](/post/run-ai-models-locally) for beginners
<CTABlog />

View File

@ -4,7 +4,7 @@ title: Support - Jan
# Support
- Bugs & requests: file a GitHub ticket [here](https://github.com/janhq/jan/issues)
- Bugs & requests: file a GitHub ticket [here](https://github.com/menloresearch/jan/issues)
- For discussion: join our Discord [here](https://discord.gg/FTk2MvZwJH)
- For business inquiries: email hello@jan.ai
- For jobs: please email hr@jan.ai

View File

@ -31,7 +31,7 @@ const config: DocsThemeConfig = {
</div>
</span>
),
docsRepositoryBase: 'https://github.com/janhq/jan/tree/dev/docs',
docsRepositoryBase: 'https://github.com/menloresearch/jan/tree/dev/docs',
feedback: {
content: 'Question? Give us feedback →',
labels: 'feedback',

View File

@ -70,6 +70,6 @@ There are a few things to keep in mind when writing your extension code:
```
For more information about the Jan Extension Core module, see the
[documentation](https://github.com/janhq/jan/blob/main/core/README.md).
[documentation](https://github.com/menloresearch/jan/blob/main/core/README.md).
So, what are you waiting for? Go ahead and start customizing your extension!

View File

@ -56,7 +56,7 @@ async function fetchRemoteSupportedBackends(
supportedBackends: string[]
): Promise<{ version: string; backend: string }[]> {
// Pull the latest releases from the repo
const { releases } = await _fetchGithubReleases('janhq', 'llama.cpp')
const { releases } = await _fetchGithubReleases('menloresearch', 'llama.cpp')
releases.sort((a, b) => b.tag_name.localeCompare(a.tag_name))
releases.splice(10) // keep only the latest 10 releases
@ -98,7 +98,7 @@ export async function listSupportedBackends(): Promise<
const sysType = `${os_type}-${arch}`
let supportedBackends = []
// NOTE: janhq's tags for llama.cpp builds are a bit different
// NOTE: menloresearch's tags for llama.cpp builds are a bit different
// TODO: fetch versions from the server?
// TODO: select CUDA version based on driver version
if (sysType == 'windows-x86_64') {
@ -247,7 +247,7 @@ export async function downloadBackend(
// Build URLs per source
const backendUrl =
source === 'github'
? `https://github.com/janhq/llama.cpp/releases/download/${version}/llama-${version}-bin-${backend}.tar.gz`
? `https://github.com/menloresearch/llama.cpp/releases/download/${version}/llama-${version}-bin-${backend}.tar.gz`
: `https://catalog.jan.ai/llama.cpp/releases/${version}/llama-${version}-bin-${backend}.tar.gz`
const downloadItems = [
@ -263,7 +263,7 @@ export async function downloadBackend(
downloadItems.push({
url:
source === 'github'
? `https://github.com/janhq/llama.cpp/releases/download/${version}/cudart-llama-bin-${platformName}-cu11.7-x64.tar.gz`
? `https://github.com/menloresearch/llama.cpp/releases/download/${version}/cudart-llama-bin-${platformName}-cu11.7-x64.tar.gz`
: `https://catalog.jan.ai/llama.cpp/releases/${version}/cudart-llama-bin-${platformName}-cu11.7-x64.tar.gz`,
save_path: await joinPath([libDir, 'cuda11.tar.gz']),
proxy: proxyConfig,
@ -272,7 +272,7 @@ export async function downloadBackend(
downloadItems.push({
url:
source === 'github'
? `https://github.com/janhq/llama.cpp/releases/download/${version}/cudart-llama-bin-${platformName}-cu12.0-x64.tar.gz`
? `https://github.com/menloresearch/llama.cpp/releases/download/${version}/cudart-llama-bin-${platformName}-cu12.0-x64.tar.gz`
: `https://catalog.jan.ai/llama.cpp/releases/${version}/cudart-llama-bin-${platformName}-cu12.0-x64.tar.gz`,
save_path: await joinPath([libDir, 'cuda12.tar.gz']),
proxy: proxyConfig,

View File

@ -35,7 +35,7 @@
</screenshots>
<url type="homepage">https://jan.ai/</url>
<url type="bugtracker">https://github.com/janhq/jan/issues</url>
<url type="bugtracker">https://github.com/menloresearch/jan/issues</url>
<content_rating type="oars-1.1" />

View File

@ -4,7 +4,7 @@ version = "0.6.599"
description = "Use offline LLMs with your own data. Run open source models like Llama2 or Falcon on your internal computers/servers."
authors = ["Jan <service@jan.ai>"]
license = "MIT"
repository = "https://github.com/janhq/jan"
repository = "https://github.com/menloresearch/jan"
edition = "2021"
rust-version = "1.77.2"
resolver = "2"

View File

@ -4,7 +4,7 @@ version = "0.6.599"
authors = ["Jan <service@jan.ai>"]
description = "Tauri plugin for hardware information and GPU monitoring"
license = "MIT"
repository = "https://github.com/janhq/jan"
repository = "https://github.com/menloresearch/jan"
edition = "2021"
rust-version = "1.77.2"
exclude = ["/examples", "/dist-js", "/guest-js", "/node_modules"]

View File

@ -4,7 +4,7 @@ version = "0.6.599"
authors = ["Jan <service@jan.ai>"]
description = "Tauri plugin for managing Jan LlamaCpp server processes and model loading"
license = "MIT"
repository = "https://github.com/janhq/jan"
repository = "https://github.com/menloresearch/jan"
edition = "2021"
rust-version = "1.77.2"
exclude = ["/examples", "/dist-js", "/guest-js", "/node_modules"]

View File

@ -4,7 +4,7 @@ version = "0.1.0"
authors = ["Jan <service@jan.ai>"]
description = "Tauri plugin for RAG utilities (document parsing, types)"
license = "MIT"
repository = "https://github.com/janhq/jan"
repository = "https://github.com/menloresearch/jan"
edition = "2021"
rust-version = "1.77.2"
exclude = ["/examples", "/dist-js", "/guest-js", "/node_modules"]

View File

@ -4,7 +4,7 @@ version = "0.1.0"
authors = ["Jan <service@jan.ai>"]
description = "Tauri plugin for vector storage and similarity search"
license = "MIT"
repository = "https://github.com/janhq/jan"
repository = "https://github.com/menloresearch/jan"
edition = "2021"
rust-version = "1.77.2"
exclude = ["/examples", "/dist-js", "/guest-js", "/node_modules"]

View File

@ -72,7 +72,7 @@
"updater": {
"pubkey": "dW50cnVzdGVkIGNvbW1lbnQ6IG1pbmlzaWduIHB1YmxpYyBrZXk6IDJFNDEzMEVCMUEzNUFENDQKUldSRXJUVWE2ekJCTGc1Mm1BVXgrWmtES3huUlBFR0lCdG5qbWFvMzgyNDhGN3VTTko5Q1NtTW0K",
"endpoints": [
"https://github.com/janhq/jan/releases/latest/download/latest.json"
"https://github.com/menloresearch/jan/releases/latest/download/latest.json"
],
"windows": {
"installMode": "passive"

View File

@ -10,7 +10,6 @@ import {
IconAtom,
IconWorld,
IconCodeCircle2,
IconSparkles,
} from '@tabler/icons-react'
import { Fragment } from 'react/jsx-runtime'
@ -30,8 +29,6 @@ const Capabilities = ({ capabilities }: CapabilitiesProps) => {
icon = <IconEye className="size-4" />
} else if (capability === 'tools') {
icon = <IconTool className="size-3.5" />
} else if (capability === 'proactive') {
icon = <IconSparkles className="size-3.5" />
} else if (capability === 'reasoning') {
icon = <IconAtom className="size-3.5" />
} else if (capability === 'embeddings') {
@ -57,11 +54,7 @@ const Capabilities = ({ capabilities }: CapabilitiesProps) => {
</TooltipTrigger>
<TooltipContent>
<p>
{capability === 'web_search'
? 'Web Search'
: capability === 'proactive'
? 'Proactive'
: capability}
{capability === 'web_search' ? 'Web Search' : capability}
</p>
</TooltipContent>
</Tooltip>

View File

@ -16,8 +16,6 @@ const LANGUAGES = [
{ value: 'zh-CN', label: '简体中文' },
{ value: 'zh-TW', label: '繁體中文' },
{ value: 'de-DE', label: 'Deutsch' },
{ value: 'pt-BR', label: 'Português (Brasil)' },
{ value: 'ja', label: '日本語' },
]
export default function LanguageSwitcher() {

View File

@ -152,19 +152,12 @@ export const ModelInfoHoverCard = ({
</div>
{/* Features Section */}
{(model.num_mmproj > 0 || model.tools || (model.num_mmproj > 0 && model.tools)) && (
{(model.num_mmproj > 0 || model.tools) && (
<div className="border-t border-main-view-fg/10 pt-3">
<h5 className="text-xs font-medium text-main-view-fg/70 mb-2">
Features
</h5>
<div className="flex flex-wrap gap-2">
{model.tools && (
<div className="flex items-center gap-1.5 px-2 py-1 bg-main-view-fg/10 rounded-md">
<span className="text-xs text-main-view-fg font-medium">
Tools
</span>
</div>
)}
{model.num_mmproj > 0 && (
<div className="flex items-center gap-1.5 px-2 py-1 bg-main-view-fg/10 rounded-md">
<span className="text-xs text-main-view-fg font-medium">
@ -172,10 +165,10 @@ export const ModelInfoHoverCard = ({
</span>
</div>
)}
{model.num_mmproj > 0 && model.tools && (
{model.tools && (
<div className="flex items-center gap-1.5 px-2 py-1 bg-main-view-fg/10 rounded-md">
<span className="text-xs text-main-view-fg font-medium">
Proactive
Tools
</span>
</div>
)}

View File

@ -1,124 +0,0 @@
import { describe, it, expect, vi } from 'vitest'
import { render, screen } from '@testing-library/react'
import Capabilities from '../Capabilities'
// Mock Tooltip components
vi.mock('@/components/ui/tooltip', () => ({
Tooltip: ({ children }: { children: React.ReactNode }) => <div>{children}</div>,
TooltipContent: ({ children }: { children: React.ReactNode }) => <div>{children}</div>,
TooltipProvider: ({ children }: { children: React.ReactNode }) => <div>{children}</div>,
TooltipTrigger: ({ children }: { children: React.ReactNode }) => <div>{children}</div>,
}))
// Mock Tabler icons
vi.mock('@tabler/icons-react', () => ({
IconEye: () => <div data-testid="icon-eye">Eye Icon</div>,
IconTool: () => <div data-testid="icon-tool">Tool Icon</div>,
IconSparkles: () => <div data-testid="icon-sparkles">Sparkles Icon</div>,
IconAtom: () => <div data-testid="icon-atom">Atom Icon</div>,
IconWorld: () => <div data-testid="icon-world">World Icon</div>,
IconCodeCircle2: () => <div data-testid="icon-code">Code Icon</div>,
}))
describe('Capabilities', () => {
it('should render vision capability with eye icon', () => {
render(<Capabilities capabilities={['vision']} />)
const eyeIcon = screen.getByTestId('icon-eye')
expect(eyeIcon).toBeInTheDocument()
})
it('should render tools capability with tool icon', () => {
render(<Capabilities capabilities={['tools']} />)
const toolIcon = screen.getByTestId('icon-tool')
expect(toolIcon).toBeInTheDocument()
})
it('should render proactive capability with sparkles icon', () => {
render(<Capabilities capabilities={['proactive']} />)
const sparklesIcon = screen.getByTestId('icon-sparkles')
expect(sparklesIcon).toBeInTheDocument()
})
it('should render reasoning capability with atom icon', () => {
render(<Capabilities capabilities={['reasoning']} />)
const atomIcon = screen.getByTestId('icon-atom')
expect(atomIcon).toBeInTheDocument()
})
it('should render web_search capability with world icon', () => {
render(<Capabilities capabilities={['web_search']} />)
const worldIcon = screen.getByTestId('icon-world')
expect(worldIcon).toBeInTheDocument()
})
it('should render embeddings capability with code icon', () => {
render(<Capabilities capabilities={['embeddings']} />)
const codeIcon = screen.getByTestId('icon-code')
expect(codeIcon).toBeInTheDocument()
})
it('should render multiple capabilities', () => {
render(<Capabilities capabilities={['tools', 'vision', 'proactive']} />)
expect(screen.getByTestId('icon-tool')).toBeInTheDocument()
expect(screen.getByTestId('icon-eye')).toBeInTheDocument()
expect(screen.getByTestId('icon-sparkles')).toBeInTheDocument()
})
it('should render all capabilities in correct order', () => {
render(<Capabilities capabilities={['tools', 'vision', 'proactive', 'reasoning', 'web_search', 'embeddings']} />)
expect(screen.getByTestId('icon-tool')).toBeInTheDocument()
expect(screen.getByTestId('icon-eye')).toBeInTheDocument()
expect(screen.getByTestId('icon-sparkles')).toBeInTheDocument()
expect(screen.getByTestId('icon-atom')).toBeInTheDocument()
expect(screen.getByTestId('icon-world')).toBeInTheDocument()
expect(screen.getByTestId('icon-code')).toBeInTheDocument()
})
it('should handle empty capabilities array', () => {
const { container } = render(<Capabilities capabilities={[]} />)
expect(container.querySelector('[data-testid^="icon-"]')).not.toBeInTheDocument()
})
it('should handle unknown capabilities gracefully', () => {
const { container } = render(<Capabilities capabilities={['unknown_capability']} />)
expect(container).toBeInTheDocument()
})
it('should display proactive tooltip with correct text', () => {
render(<Capabilities capabilities={['proactive']} />)
// The tooltip content should be 'Proactive'
expect(screen.getByTestId('icon-sparkles')).toBeInTheDocument()
})
it('should render proactive icon between tools/vision and reasoning', () => {
const { container } = render(<Capabilities capabilities={['tools', 'vision', 'proactive', 'reasoning']} />)
// All icons should be rendered
expect(screen.getByTestId('icon-tool')).toBeInTheDocument()
expect(screen.getByTestId('icon-eye')).toBeInTheDocument()
expect(screen.getByTestId('icon-sparkles')).toBeInTheDocument()
expect(screen.getByTestId('icon-atom')).toBeInTheDocument()
expect(container.querySelector('[data-testid="icon-sparkles"]')).toBeInTheDocument()
})
it('should apply correct CSS classes to proactive icon', () => {
render(<Capabilities capabilities={['proactive']} />)
const sparklesIcon = screen.getByTestId('icon-sparkles')
expect(sparklesIcon).toBeInTheDocument()
// Icon should have size-3.5 class (same as tools, reasoning, etc.)
expect(sparklesIcon.parentElement).toBeInTheDocument()
})
})

View File

@ -437,31 +437,4 @@ describe('ChatInput', () => {
expect(() => renderWithRouter()).not.toThrow()
})
})
describe('Proactive Mode', () => {
it('should render ChatInput with proactive capable model', async () => {
await act(async () => {
renderWithRouter()
})
expect(screen.getByTestId('chat-input')).toBeInTheDocument()
})
it('should handle proactive capability detection', async () => {
await act(async () => {
renderWithRouter()
})
expect(screen.getByTestId('chat-input')).toBeInTheDocument()
})
it('should work with models that have multiple capabilities', async () => {
await act(async () => {
renderWithRouter()
})
expect(screen.getByTestId('chat-input')).toBeInTheDocument()
})
})
})

View File

@ -82,7 +82,6 @@ vi.mock('@tabler/icons-react', () => ({
IconEye: () => <div data-testid="eye-icon" />,
IconTool: () => <div data-testid="tool-icon" />,
IconLoader2: () => <div data-testid="loader-icon" />,
IconSparkles: () => <div data-testid="sparkles-icon" />,
}))
describe('DialogEditModel - Basic Component Tests', () => {
@ -190,7 +189,7 @@ describe('DialogEditModel - Basic Component Tests', () => {
{
id: 'test-model.gguf',
displayName: 'Test Model',
capabilities: ['vision', 'tools', 'proactive'],
capabilities: ['vision', 'tools'],
},
],
settings: [],
@ -227,7 +226,7 @@ describe('DialogEditModel - Basic Component Tests', () => {
{
id: 'test-model.gguf',
displayName: 'Test Model',
capabilities: ['vision', 'tools', 'proactive', 'completion', 'embeddings', 'web_search', 'reasoning'],
capabilities: ['vision', 'tools', 'completion', 'embeddings', 'web_search', 'reasoning'],
},
],
settings: [],
@ -241,7 +240,7 @@ describe('DialogEditModel - Basic Component Tests', () => {
)
// Component should render without errors even with extra capabilities
// The capabilities helper should only extract vision, tools, and proactive
// The capabilities helper should only extract vision and tools
expect(container).toBeInTheDocument()
})
})

View File

@ -17,7 +17,6 @@ import {
IconTool,
IconAlertTriangle,
IconLoader2,
IconSparkles,
} from '@tabler/icons-react'
import { useState, useEffect } from 'react'
import { useTranslation } from '@/i18n/react-i18next-compat'
@ -46,7 +45,6 @@ export const DialogEditModel = ({
const [capabilities, setCapabilities] = useState<Record<string, boolean>>({
vision: false,
tools: false,
proactive: false,
})
// Initialize with the provided model ID or the first model if available
@ -69,7 +67,6 @@ export const DialogEditModel = ({
const capabilitiesToObject = (capabilitiesList: string[]) => ({
vision: capabilitiesList.includes('vision'),
tools: capabilitiesList.includes('tools'),
proactive: capabilitiesList.includes('proactive'),
})
// Initialize capabilities and display name from selected model
@ -271,23 +268,6 @@ export const DialogEditModel = ({
disabled={isLoading}
/>
</div>
<div className="flex items-center justify-between">
<div className="flex items-center space-x-2">
<IconSparkles className="size-4 text-main-view-fg/70" />
<span className="text-sm">
{t('providers:editModel.proactive')}
</span>
</div>
<Switch
id="proactive-capability"
checked={capabilities.proactive}
onCheckedChange={(checked) =>
handleCapabilityChange('proactive', checked)
}
disabled={isLoading || !(capabilities.tools && capabilities.vision)}
/>
</div>
</div>
</div>

View File

@ -170,7 +170,6 @@ vi.mock('@/lib/completion', () => ({
sendCompletion: vi.fn(),
postMessageProcessing: vi.fn(),
isCompletionResponse: vi.fn(),
captureProactiveScreenshots: vi.fn(() => Promise.resolve([])),
}))
vi.mock('@/lib/messages', () => ({
@ -226,26 +225,4 @@ describe('useChat', () => {
expect(result.current).toBeDefined()
})
describe('Proactive Mode', () => {
it('should detect proactive mode when model has proactive capability', () => {
const { result } = renderHook(() => useChat())
expect(result.current).toBeDefined()
expect(typeof result.current).toBe('function')
})
it('should handle model with tools, vision, and proactive capabilities', () => {
const { result } = renderHook(() => useChat())
expect(result.current).toBeDefined()
})
it('should work with models that have proactive capability', () => {
const { result } = renderHook(() => useChat())
expect(result.current).toBeDefined()
expect(typeof result.current).toBe('function')
})
})
})

View File

@ -65,7 +65,7 @@ describe('useReleaseNotes', () => {
})
expect(mockFetch).toHaveBeenCalledWith(
'https://api.github.com/repos/janhq/jan/releases'
'https://api.github.com/repos/menloresearch/jan/releases'
)
expect(result.current.loading).toBe(false)
expect(result.current.error).toBe(null)
@ -292,7 +292,7 @@ describe('useReleaseNotes', () => {
draft: false,
body: 'Release notes',
published_at: '2024-01-01T00:00:00Z',
html_url: 'https://github.com/janhq/jan/releases/tag/v1.5.0',
html_url: 'https://github.com/menloresearch/jan/releases/tag/v1.5.0',
assets: [],
},
]

View File

@ -16,7 +16,6 @@ import {
newUserThreadContent,
postMessageProcessing,
sendCompletion,
captureProactiveScreenshots,
} from '@/lib/completion'
import { CompletionMessagesBuilder } from '@/lib/messages'
import { renderInstructions } from '@/lib/instructionTemplate'
@ -420,27 +419,6 @@ export const useChat = () => {
})
: []
// Check if proactive mode is enabled
const isProactiveMode = selectedModel?.capabilities?.includes('proactive') ?? false
// Proactive mode: Capture initial screenshot/snapshot before first LLM call
if (isProactiveMode && availableTools.length > 0 && !abortController.signal.aborted) {
console.log('Proactive mode: Capturing initial screenshots before LLM call')
try {
const initialScreenshots = await captureProactiveScreenshots(abortController)
// Add initial screenshots to builder
for (const screenshot of initialScreenshots) {
// Generate unique tool call ID for initial screenshot
const proactiveToolCallId = `proactive_initial_${Date.now()}_${Math.random()}`
builder.addToolMessage(screenshot, proactiveToolCallId)
console.log('Initial proactive screenshot added to context')
}
} catch (e) {
console.warn('Failed to capture initial proactive screenshots:', e)
}
}
let assistantLoopSteps = 0
while (
@ -716,10 +694,6 @@ export const useChat = () => {
)
builder.addAssistantMessage(accumulatedText, undefined, toolCalls)
// Check if proactive mode is enabled for this model
const isProactiveMode = selectedModel?.capabilities?.includes('proactive') ?? false
const updatedMessage = await postMessageProcessing(
toolCalls,
builder,
@ -727,8 +701,7 @@ export const useChat = () => {
abortController,
useToolApproval.getState().approvedTools,
allowAllMCPPermissions ? undefined : showApprovalModal,
allowAllMCPPermissions,
isProactiveMode
allowAllMCPPermissions
)
addMessage(updatedMessage ?? finalContent)
updateStreamingContent(emptyThreadContent)

View File

@ -25,7 +25,7 @@ export const useReleaseNotes = create<ReleaseState>((set) => ({
set({ loading: true, error: null })
try {
const res = await fetch(
'https://api.github.com/repos/janhq/jan/releases'
'https://api.github.com/repos/menloresearch/jan/releases'
)
if (!res.ok) throw new Error('Failed to fetch releases')
const releases = await res.json()

View File

@ -1,5 +1,5 @@
import { describe, it, expect, vi, beforeEach } from 'vitest'
import {
import {
newUserThreadContent,
newAssistantThreadContent,
emptyThreadContent,
@ -8,8 +8,7 @@ import {
stopModel,
normalizeTools,
extractToolCall,
postMessageProcessing,
captureProactiveScreenshots
postMessageProcessing
} from '../completion'
// Mock dependencies
@ -73,54 +72,6 @@ vi.mock('../extension', () => ({
ExtensionManager: {},
}))
vi.mock('@/hooks/useServiceHub', () => ({
getServiceHub: vi.fn(() => ({
mcp: vi.fn(() => ({
getTools: vi.fn(() => Promise.resolve([])),
callToolWithCancellation: vi.fn(() => ({
promise: Promise.resolve({
content: [{ type: 'text', text: 'mock result' }],
error: '',
}),
cancel: vi.fn(),
})),
})),
rag: vi.fn(() => ({
getToolNames: vi.fn(() => Promise.resolve([])),
callTool: vi.fn(() => Promise.resolve({
content: [{ type: 'text', text: 'mock rag result' }],
error: '',
})),
})),
})),
}))
vi.mock('@/hooks/useAttachments', () => ({
useAttachments: {
getState: vi.fn(() => ({ enabled: true })),
},
}))
vi.mock('@/hooks/useAppState', () => ({
useAppState: {
getState: vi.fn(() => ({
setCancelToolCall: vi.fn(),
})),
},
}))
vi.mock('@/lib/platform/const', () => ({
PlatformFeatures: {
ATTACHMENTS: true,
},
}))
vi.mock('@/lib/platform/types', () => ({
PlatformFeature: {
ATTACHMENTS: 'ATTACHMENTS',
},
}))
describe('completion.ts', () => {
beforeEach(() => {
vi.clearAllMocks()
@ -236,448 +187,4 @@ describe('completion.ts', () => {
expect(result.length).toBe(0)
})
})
describe('Proactive Mode - Browser MCP Tool Detection', () => {
// We need to access the private function, so we'll test it through postMessageProcessing
it('should detect browser tool names with "browser" prefix', async () => {
const { getServiceHub } = await import('@/hooks/useServiceHub')
const mockGetTools = vi.fn(() => Promise.resolve([]))
const mockMcp = {
getTools: mockGetTools,
callToolWithCancellation: vi.fn(() => ({
promise: Promise.resolve({ content: [{ type: 'text', text: 'result' }], error: '' }),
cancel: vi.fn(),
}))
}
vi.mocked(getServiceHub).mockReturnValue({
mcp: () => mockMcp,
rag: () => ({ getToolNames: () => Promise.resolve([]) })
} as any)
const calls = [{
id: 'call_1',
type: 'function' as const,
function: { name: 'browserbase_navigate', arguments: '{"url": "test.com"}' }
}]
const builder = {
addToolMessage: vi.fn(),
getMessages: vi.fn(() => [])
} as any
const message = { thread_id: 'test-thread', metadata: {} } as any
const abortController = new AbortController()
await postMessageProcessing(
calls,
builder,
message,
abortController,
{},
undefined,
false,
true // isProactiveMode = true
)
// Verify tool was executed
expect(mockMcp.callToolWithCancellation).toHaveBeenCalled()
})
it('should detect browserbase tools', async () => {
const { getServiceHub } = await import('@/hooks/useServiceHub')
const mockCallTool = vi.fn(() => ({
promise: Promise.resolve({ content: [{ type: 'text', text: 'result' }], error: '' }),
cancel: vi.fn(),
}))
vi.mocked(getServiceHub).mockReturnValue({
mcp: () => ({
getTools: () => Promise.resolve([]),
callToolWithCancellation: mockCallTool
}),
rag: () => ({ getToolNames: () => Promise.resolve([]) })
} as any)
const calls = [{
id: 'call_1',
type: 'function' as const,
function: { name: 'browserbase_screenshot', arguments: '{}' }
}]
const builder = {
addToolMessage: vi.fn(),
getMessages: vi.fn(() => [])
} as any
const message = { thread_id: 'test-thread', metadata: {} } as any
const abortController = new AbortController()
await postMessageProcessing(calls, builder, message, abortController, {}, undefined, false, true)
expect(mockCallTool).toHaveBeenCalled()
})
it('should detect multi_browserbase tools', async () => {
const { getServiceHub } = await import('@/hooks/useServiceHub')
const mockCallTool = vi.fn(() => ({
promise: Promise.resolve({ content: [{ type: 'text', text: 'result' }], error: '' }),
cancel: vi.fn(),
}))
vi.mocked(getServiceHub).mockReturnValue({
mcp: () => ({
getTools: () => Promise.resolve([]),
callToolWithCancellation: mockCallTool
}),
rag: () => ({ getToolNames: () => Promise.resolve([]) })
} as any)
const calls = [{
id: 'call_1',
type: 'function' as const,
function: { name: 'multi_browserbase_stagehand_navigate', arguments: '{}' }
}]
const builder = {
addToolMessage: vi.fn(),
getMessages: vi.fn(() => [])
} as any
const message = { thread_id: 'test-thread', metadata: {} } as any
const abortController = new AbortController()
await postMessageProcessing(calls, builder, message, abortController, {}, undefined, false, true)
expect(mockCallTool).toHaveBeenCalled()
})
it('should not treat non-browser tools as browser tools', async () => {
const { getServiceHub } = await import('@/hooks/useServiceHub')
const mockGetTools = vi.fn(() => Promise.resolve([]))
vi.mocked(getServiceHub).mockReturnValue({
mcp: () => ({
getTools: mockGetTools,
callToolWithCancellation: vi.fn(() => ({
promise: Promise.resolve({ content: [{ type: 'text', text: 'result' }], error: '' }),
cancel: vi.fn(),
}))
}),
rag: () => ({ getToolNames: () => Promise.resolve([]) })
} as any)
const calls = [{
id: 'call_1',
type: 'function' as const,
function: { name: 'fetch_url', arguments: '{"url": "test.com"}' }
}]
const builder = {
addToolMessage: vi.fn(),
getMessages: vi.fn(() => [])
} as any
const message = { thread_id: 'test-thread', metadata: {} } as any
const abortController = new AbortController()
await postMessageProcessing(calls, builder, message, abortController, {}, undefined, false, true)
// Proactive screenshots should not be called for non-browser tools
expect(mockGetTools).not.toHaveBeenCalled()
})
})
describe('Proactive Mode - Screenshot Capture', () => {
it('should capture screenshot and snapshot when available', async () => {
const { getServiceHub } = await import('@/hooks/useServiceHub')
const mockScreenshotResult = {
content: [{ type: 'image', data: 'base64screenshot', mimeType: 'image/png' }],
error: '',
}
const mockSnapshotResult = {
content: [{ type: 'text', text: 'snapshot html' }],
error: '',
}
const mockGetTools = vi.fn(() => Promise.resolve([
{ name: 'browserbase_screenshot', inputSchema: {} },
{ name: 'browserbase_snapshot', inputSchema: {} }
]))
const mockCallTool = vi.fn()
.mockReturnValueOnce({
promise: Promise.resolve(mockScreenshotResult),
cancel: vi.fn(),
})
.mockReturnValueOnce({
promise: Promise.resolve(mockSnapshotResult),
cancel: vi.fn(),
})
vi.mocked(getServiceHub).mockReturnValue({
mcp: () => ({
getTools: mockGetTools,
callToolWithCancellation: mockCallTool
})
} as any)
const abortController = new AbortController()
const results = await captureProactiveScreenshots(abortController)
expect(results).toHaveLength(2)
expect(results[0]).toEqual(mockScreenshotResult)
expect(results[1]).toEqual(mockSnapshotResult)
expect(mockCallTool).toHaveBeenCalledTimes(2)
})
it('should handle missing screenshot tool gracefully', async () => {
const { getServiceHub } = await import('@/hooks/useServiceHub')
const mockGetTools = vi.fn(() => Promise.resolve([
{ name: 'some_other_tool', inputSchema: {} }
]))
vi.mocked(getServiceHub).mockReturnValue({
mcp: () => ({
getTools: mockGetTools,
callToolWithCancellation: vi.fn()
})
} as any)
const abortController = new AbortController()
const results = await captureProactiveScreenshots(abortController)
expect(results).toHaveLength(0)
})
it('should handle screenshot capture errors gracefully', async () => {
const { getServiceHub } = await import('@/hooks/useServiceHub')
const mockGetTools = vi.fn(() => Promise.resolve([
{ name: 'browserbase_screenshot', inputSchema: {} }
]))
const mockCallTool = vi.fn(() => ({
promise: Promise.reject(new Error('Screenshot failed')),
cancel: vi.fn(),
}))
vi.mocked(getServiceHub).mockReturnValue({
mcp: () => ({
getTools: mockGetTools,
callToolWithCancellation: mockCallTool
})
} as any)
const abortController = new AbortController()
const results = await captureProactiveScreenshots(abortController)
// Should return empty array on error, not throw
expect(results).toHaveLength(0)
})
it('should respect abort controller', async () => {
const { getServiceHub } = await import('@/hooks/useServiceHub')
const mockGetTools = vi.fn(() => Promise.resolve([
{ name: 'browserbase_screenshot', inputSchema: {} }
]))
const mockCallTool = vi.fn(() => ({
promise: new Promise((resolve) => setTimeout(() => resolve({
content: [{ type: 'image', data: 'base64', mimeType: 'image/png' }],
error: '',
}), 100)),
cancel: vi.fn(),
}))
vi.mocked(getServiceHub).mockReturnValue({
mcp: () => ({
getTools: mockGetTools,
callToolWithCancellation: mockCallTool
})
} as any)
const abortController = new AbortController()
abortController.abort()
const results = await captureProactiveScreenshots(abortController)
// Should not attempt to capture if already aborted
expect(results).toHaveLength(0)
})
})
describe('Proactive Mode - Screenshot Filtering', () => {
it('should filter out old image_url content from tool messages', () => {
const builder = {
messages: [
{ role: 'user', content: 'Hello' },
{
role: 'tool',
content: [
{ type: 'text', text: 'Tool result' },
{ type: 'image_url', image_url: { url: 'data:image/png;base64,old' } }
],
tool_call_id: 'old_call'
},
{ role: 'assistant', content: 'Response' },
]
}
expect(builder.messages).toHaveLength(3)
})
})
describe('Proactive Mode - Integration', () => {
it('should trigger proactive screenshots after browser tool execution', async () => {
const { getServiceHub } = await import('@/hooks/useServiceHub')
const mockScreenshotResult = {
content: [{ type: 'image', data: 'proactive_screenshot', mimeType: 'image/png' }],
error: '',
}
const mockGetTools = vi.fn(() => Promise.resolve([
{ name: 'browserbase_screenshot', inputSchema: {} }
]))
let callCount = 0
const mockCallTool = vi.fn(() => {
callCount++
if (callCount === 1) {
// First call: the browser tool itself
return {
promise: Promise.resolve({
content: [{ type: 'text', text: 'navigated to page' }],
error: '',
}),
cancel: vi.fn(),
}
} else {
// Second call: proactive screenshot
return {
promise: Promise.resolve(mockScreenshotResult),
cancel: vi.fn(),
}
}
})
vi.mocked(getServiceHub).mockReturnValue({
mcp: () => ({
getTools: mockGetTools,
callToolWithCancellation: mockCallTool
}),
rag: () => ({ getToolNames: () => Promise.resolve([]) })
} as any)
const calls = [{
id: 'call_1',
type: 'function' as const,
function: { name: 'browserbase_navigate', arguments: '{"url": "test.com"}' }
}]
const builder = {
addToolMessage: vi.fn(),
getMessages: vi.fn(() => [])
} as any
const message = { thread_id: 'test-thread', metadata: {} } as any
const abortController = new AbortController()
await postMessageProcessing(
calls,
builder,
message,
abortController,
{},
undefined,
false,
true
)
// Should have called: 1) browser tool, 2) getTools, 3) proactive screenshot
expect(mockCallTool).toHaveBeenCalledTimes(2)
expect(mockGetTools).toHaveBeenCalled()
expect(builder.addToolMessage).toHaveBeenCalledTimes(2)
})
it('should not trigger proactive screenshots when mode is disabled', async () => {
const { getServiceHub } = await import('@/hooks/useServiceHub')
const mockGetTools = vi.fn(() => Promise.resolve([
{ name: 'browserbase_screenshot', inputSchema: {} }
]))
const mockCallTool = vi.fn(() => ({
promise: Promise.resolve({
content: [{ type: 'text', text: 'navigated' }],
error: '',
}),
cancel: vi.fn(),
}))
vi.mocked(getServiceHub).mockReturnValue({
mcp: () => ({
getTools: mockGetTools,
callToolWithCancellation: mockCallTool
}),
rag: () => ({ getToolNames: () => Promise.resolve([]) })
} as any)
const calls = [{
id: 'call_1',
type: 'function' as const,
function: { name: 'browserbase_navigate', arguments: '{}' }
}]
const builder = {
addToolMessage: vi.fn(),
getMessages: vi.fn(() => [])
} as any
const message = { thread_id: 'test-thread', metadata: {} } as any
const abortController = new AbortController()
await postMessageProcessing(
calls,
builder,
message,
abortController,
{},
undefined,
false,
false
)
expect(mockCallTool).toHaveBeenCalledTimes(1)
expect(mockGetTools).not.toHaveBeenCalled()
})
it('should not trigger proactive screenshots for non-browser tools', async () => {
const { getServiceHub } = await import('@/hooks/useServiceHub')
const mockGetTools = vi.fn(() => Promise.resolve([]))
const mockCallTool = vi.fn(() => ({
promise: Promise.resolve({
content: [{ type: 'text', text: 'fetched data' }],
error: '',
}),
cancel: vi.fn(),
}))
vi.mocked(getServiceHub).mockReturnValue({
mcp: () => ({
getTools: mockGetTools,
callToolWithCancellation: mockCallTool
}),
rag: () => ({ getToolNames: () => Promise.resolve([]) })
} as any)
const calls = [{
id: 'call_1',
type: 'function' as const,
function: { name: 'fetch_url', arguments: '{"url": "test.com"}' }
}]
const builder = {
addToolMessage: vi.fn(),
getMessages: vi.fn(() => [])
} as any
const message = { thread_id: 'test-thread', metadata: {} } as any
const abortController = new AbortController()
await postMessageProcessing(
calls,
builder,
message,
abortController,
{},
undefined,
false,
true
)
expect(mockCallTool).toHaveBeenCalledTimes(1)
expect(mockGetTools).not.toHaveBeenCalled()
})
})
})

View File

@ -26,21 +26,18 @@ import {
ConfigOptions,
} from 'token.js'
import { getModelCapabilities } from '@/lib/models'
// Extended config options to include custom fetch function
type ExtendedConfigOptions = ConfigOptions & {
fetch?: typeof fetch
}
import { ulid } from 'ulidx'
import { MCPTool } from '@/types/completion'
import { CompletionMessagesBuilder, ToolResult } from './messages'
import { CompletionMessagesBuilder } from './messages'
import { ChatCompletionMessageToolCall } from 'openai/resources'
import { ExtensionManager } from './extension'
import { useAppState } from '@/hooks/useAppState'
import { injectFilesIntoPrompt } from './fileMetadata'
import { Attachment } from '@/types/attachment'
import { ModelCapabilities } from '@/types/models'
export type ChatCompletionResponse =
| chatCompletion
@ -235,25 +232,10 @@ export const sendCompletion = async (
}
// Inject RAG tools on-demand (not in global tools list)
const providerModelConfig = provider.models?.find(
(model) => model.id === thread.model?.id || model.model === thread.model?.id
)
const effectiveCapabilities = Array.isArray(
providerModelConfig?.capabilities
)
? providerModelConfig?.capabilities ?? []
: getModelCapabilities(provider.provider, thread.model.id)
const modelSupportsTools = effectiveCapabilities.includes(
ModelCapabilities.TOOLS
)
let usableTools = tools
try {
const attachmentsEnabled = useAttachments.getState().enabled
if (
attachmentsEnabled &&
PlatformFeatures[PlatformFeature.ATTACHMENTS] &&
modelSupportsTools
) {
if (attachmentsEnabled && PlatformFeatures[PlatformFeature.ATTACHMENTS]) {
const ragTools = await getServiceHub().rag().getTools().catch(() => [])
if (Array.isArray(ragTools) && ragTools.length) {
usableTools = [...tools, ...ragTools]
@ -396,120 +378,6 @@ export const extractToolCall = (
return calls
}
/**
* Helper function to check if a tool call is a browser MCP tool
* @param toolName - The name of the tool
* @returns true if the tool is a browser-related MCP tool
*/
const isBrowserMCPTool = (toolName: string): boolean => {
const browserToolPrefixes = [
'browser',
'browserbase',
'browsermcp',
'multi_browserbase',
]
return browserToolPrefixes.some((prefix) =>
toolName.toLowerCase().startsWith(prefix)
)
}
/**
* Helper function to capture screenshot and snapshot proactively
* @param abortController - The abort controller for cancellation
* @returns Promise with screenshot and snapshot results
*/
export const captureProactiveScreenshots = async (
abortController: AbortController
): Promise<ToolResult[]> => {
const results: ToolResult[] = []
try {
// Get available tools
const allTools = await getServiceHub().mcp().getTools()
// Find screenshot and snapshot tools
const screenshotTool = allTools.find((t) =>
t.name.toLowerCase().includes('screenshot')
)
const snapshotTool = allTools.find((t) =>
t.name.toLowerCase().includes('snapshot')
)
// Capture screenshot if available
if (screenshotTool && !abortController.signal.aborted) {
try {
const { promise } = getServiceHub().mcp().callToolWithCancellation({
toolName: screenshotTool.name,
arguments: {},
})
const screenshotResult = await promise
if (screenshotResult && typeof screenshotResult !== 'string') {
results.push(screenshotResult as ToolResult)
}
} catch (e) {
console.warn('Failed to capture proactive screenshot:', e)
}
}
// Capture snapshot if available
if (snapshotTool && !abortController.signal.aborted) {
try {
const { promise } = getServiceHub().mcp().callToolWithCancellation({
toolName: snapshotTool.name,
arguments: {},
})
const snapshotResult = await promise
if (snapshotResult && typeof snapshotResult !== 'string') {
results.push(snapshotResult as ToolResult)
}
} catch (e) {
console.warn('Failed to capture proactive snapshot:', e)
}
}
} catch (e) {
console.error('Failed to get MCP tools for proactive capture:', e)
}
return results
}
/**
* Helper function to filter out old screenshot/snapshot images from builder messages
* Keeps only the latest proactive screenshots
* @param builder - The completion messages builder
*/
const filterOldProactiveScreenshots = (builder: CompletionMessagesBuilder) => {
const messages = builder.getMessages()
const filteredMessages: any[] = []
for (const msg of messages) {
if (msg.role === 'tool') {
// If it's a tool message with array content (multimodal)
if (Array.isArray(msg.content)) {
// Filter out images, keep text only for old tool messages
const textOnly = msg.content.filter(
(part: any) => part.type !== 'image_url'
)
if (textOnly.length > 0) {
filteredMessages.push({ ...msg, content: textOnly })
}
} else {
// Keep string content as-is
filteredMessages.push(msg)
}
} else {
// Keep all non-tool messages
filteredMessages.push(msg)
}
}
// Reconstruct builder with filtered messages
// Note: This is a workaround since CompletionMessagesBuilder doesn't have a setter
// We'll need to access the private messages array
// eslint-disable-next-line no-extra-semi
;(builder as any).messages = filteredMessages
}
/**
* @fileoverview Helper function to process the completion response.
* @param calls
@ -519,7 +387,6 @@ const filterOldProactiveScreenshots = (builder: CompletionMessagesBuilder) => {
* @param approvedTools
* @param showModal
* @param allowAllMCPPermissions
* @param isProactiveMode
*/
export const postMessageProcessing = async (
calls: ChatCompletionMessageToolCall[],
@ -532,8 +399,7 @@ export const postMessageProcessing = async (
threadId: string,
toolParameters?: object
) => Promise<boolean>,
allowAllMCPPermissions: boolean = false,
isProactiveMode: boolean = false
allowAllMCPPermissions: boolean = false
) => {
// Handle completed tool calls
if (calls.length) {
@ -589,7 +455,6 @@ export const postMessageProcessing = async (
const toolName = toolCall.function.name
const toolArgs = toolCall.function.arguments.length ? toolParameters : {}
const isRagTool = ragToolNames.has(toolName)
const isBrowserTool = isBrowserMCPTool(toolName)
// Auto-approve RAG tools (local/safe operations), require permission for MCP tools
const approved = isRagTool
@ -678,28 +543,7 @@ export const postMessageProcessing = async (
},
],
}
builder.addToolMessage(result as ToolResult, toolCall.id)
// Proactive mode: Capture screenshot/snapshot after browser tool execution
if (isProactiveMode && isBrowserTool && !abortController.signal.aborted) {
console.log('Proactive mode: Capturing screenshots after browser tool call')
// Filter out old screenshots before adding new ones
filterOldProactiveScreenshots(builder)
// Capture new screenshots
const proactiveScreenshots = await captureProactiveScreenshots(abortController)
// Add proactive screenshots to builder
for (const screenshot of proactiveScreenshots) {
// Generate a unique tool call ID for the proactive screenshot
const proactiveToolCallId = ulid()
builder.addToolMessage(screenshot, proactiveToolCallId)
console.log('Proactive screenshot captured and added to context')
}
}
builder.addToolMessage(result.content[0]?.text ?? '', toolCall.id)
// update message metadata
}
return message

View File

@ -1,4 +1,3 @@
/* eslint-disable @typescript-eslint/no-explicit-any */
import { ChatCompletionMessageParam } from 'token.js'
import { ChatCompletionMessageToolCall } from 'openai/resources'
import { ThreadMessage, ContentType } from '@janhq/core'
@ -7,48 +6,6 @@ import { removeReasoningContent } from '@/utils/reasoning'
type ThreadContent = NonNullable<ThreadMessage['content']>[number]
// Define a temporary type for the expected tool result shape (ToolResult as before)
export type ToolResult = {
content: Array<{
type?: string
text?: string
data?: string
image_url?: { url: string; detail?: string }
}>
error?: string
}
// Helper function to convert the tool's output part into an API content part
const convertToolPartToApiContentPart = (part: ToolResult['content'][0]) => {
if (part.text) {
return { type: 'text', text: part.text }
}
// Handle base64 image data
if (part.data) {
// Assume default image type, though a proper tool should return the mime type
const mimeType =
part.type === 'image' ? 'image/png' : part.type || 'image/png'
const dataUrl = `data:${mimeType};base64,${part.data}`
return {
type: 'image_url',
image_url: {
url: dataUrl,
detail: 'auto',
},
}
}
// Handle pre-formatted image URL
if (part.image_url) {
return { type: 'image_url', image_url: part.image_url }
}
// Fallback to text stringification for structured but unhandled data
return { type: 'text', text: JSON.stringify(part) }
}
/**
* @fileoverview Helper functions for creating chat completion request.
* These functions are used to create chat completion request objects
@ -69,11 +26,7 @@ export class CompletionMessagesBuilder {
.map<ChatCompletionMessageParam>((msg) => {
const param = this.toCompletionParamFromThread(msg)
// In constructor context, normalize empty user text to a placeholder
if (
param.role === 'user' &&
typeof param.content === 'string' &&
param.content === ''
) {
if (param.role === 'user' && typeof param.content === 'string' && param.content === '') {
return { ...param, content: '.' }
}
return param
@ -82,9 +35,7 @@ export class CompletionMessagesBuilder {
}
// Normalize a ThreadMessage into a ChatCompletionMessageParam for Token.js
private toCompletionParamFromThread(
msg: ThreadMessage
): ChatCompletionMessageParam {
private toCompletionParamFromThread(msg: ThreadMessage): ChatCompletionMessageParam {
if (msg.role === 'assistant') {
return {
role: 'assistant',
@ -109,10 +60,7 @@ export class CompletionMessagesBuilder {
if (part.type === ContentType.Image) {
return {
type: 'image_url' as const,
image_url: {
url: part.image_url?.url || '',
detail: part.image_url?.detail || 'auto',
},
image_url: { url: part.image_url?.url || '', detail: part.image_url?.detail || 'auto' },
}
}
// Fallback for unknown content types
@ -162,43 +110,13 @@ export class CompletionMessagesBuilder {
/**
* Add a tool message to the messages array.
* @param content - The content of the tool message (string or ToolResult object).
* @param content - The content of the tool message.
* @param toolCallId - The ID of the tool call associated with the message.
*/
addToolMessage(result: string | ToolResult, toolCallId: string) {
let content: string | any[] = ''
// Handle simple string case
if (typeof result === 'string') {
content = result
} else {
// Check for multimodal content (more than just a simple text string)
const hasMultimodalContent = result.content?.some(
(p) => p.data || p.image_url
)
if (hasMultimodalContent) {
// Build the structured content array
content = result.content.map(convertToolPartToApiContentPart)
} else if (result.content?.[0]?.text) {
// Standard text case
content = result.content[0].text
} else if (result.error) {
// Error case
content = `Tool execution failed: ${result.error}`
} else {
// Fallback: serialize the whole result structure if content is unexpected
try {
content = JSON.stringify(result)
} catch {
content = 'Tool call completed, unexpected output format.'
}
}
}
addToolMessage(content: string, toolCallId: string) {
this.messages.push({
role: 'tool',
// for role 'tool', need to use 'as ChatCompletionMessageParam'
content: content as any,
content: content,
tool_call_id: toolCallId,
})
}

View File

@ -80,7 +80,6 @@
"tools": "Werkzeuge",
"webSearch": "Web Suche",
"reasoning": "Argumentation",
"proactive": "Proaktiv",
"selectAModel": "Wähle ein Modell",
"noToolsAvailable": "Keine Werkzeuge verfügbar",
"noModelsFoundFor": "Keine Modelle gefunden zu \"{{searchValue}}\"",

View File

@ -61,7 +61,6 @@
"capabilities": "Fähigkeiten",
"tools": "Werkzeuge",
"vision": "Vision",
"proactive": "Proaktiv (Experimentell)",
"embeddings": "Einbettungen",
"notAvailable": "Noch nicht verfügbar",
"warning": {

View File

@ -81,7 +81,6 @@
"tools": "Tools",
"webSearch": "Web Search",
"reasoning": "Reasoning",
"proactive": "Proactive",
"selectAModel": "Select a model",
"noToolsAvailable": "No tools available",
"noModelsFoundFor": "No models found for \"{{searchValue}}\"",

View File

@ -61,7 +61,6 @@
"capabilities": "Capabilities",
"tools": "Tools",
"vision": "Vision",
"proactive": "Proactive (Experimental)",
"embeddings": "Embeddings",
"notAvailable": "Not available yet",
"warning": {

View File

@ -80,7 +80,6 @@
"tools": "Alat",
"webSearch": "Pencarian Web",
"reasoning": "Penalaran",
"proactive": "Proaktif",
"selectAModel": "Pilih model",
"noToolsAvailable": "Tidak ada alat yang tersedia",
"noModelsFoundFor": "Tidak ada model yang ditemukan untuk \"{{searchValue}}\"",

View File

@ -61,7 +61,6 @@
"capabilities": "Kemampuan",
"tools": "Alat",
"vision": "Visi",
"proactive": "Proaktif (Eksperimental)",
"embeddings": "Embedding",
"notAvailable": "Belum tersedia",
"warning": {

View File

@ -1,35 +0,0 @@
{
"title": "アシスタント",
"editAssistant": "アシスタントを編集",
"deleteAssistant": "アシスタントを削除",
"deleteConfirmation": "アシスタントを削除",
"deleteConfirmationDesc": "本当にこのアシスタントを削除しますか?この操作は元に戻せません。",
"cancel": "キャンセル",
"delete": "削除",
"addAssistant": "アシスタントを追加",
"emoji": "絵文字",
"name": "名前",
"enterName": "名前を入力",
"nameRequired": "名前は必須です",
"description": "説明(任意)",
"enterDescription": "説明を入力",
"instructions": "指示",
"enterInstructions": "指示を入力",
"predefinedParameters": "事前定義されたパラメータ",
"parameters": "パラメータ",
"key": "キー",
"value": "値",
"stringValue": "文字列",
"numberValue": "数値",
"booleanValue": "ブール値",
"jsonValue": "JSON",
"trueValue": "真",
"falseValue": "偽",
"jsonValuePlaceholder": "JSON値",
"save": "保存",
"createNew": "新しいアシスタントを作成",
"personality": "個性",
"capabilities": "機能",
"instructionsDateHint": "ヒント: {{current_date}} を使用して今日の日付を挿入します。",
"maxToolSteps": "最大ツールステップ数"
}

View File

@ -1,12 +0,0 @@
{
"welcome": "こんにちは、何かお手伝いできることはありますか?",
"description": "今日はどのようなご用件でしょうか?",
"temporaryChat": "一時的なチャット",
"temporaryChatDescription": "チャット履歴に保存されない一時的な会話を開始します。",
"status": {
"empty": "チャットが見つかりません"
},
"sendMessage": "メッセージを送信",
"newConversation": "新しい会話",
"clearHistory": "履歴を消去"
}

View File

@ -1,367 +0,0 @@
{
"assistants": "アシスタント",
"hardware": "ハードウェア",
"mcp-servers": "MCPサーバー",
"local_api_server": "ローカルAPIサーバー",
"https_proxy": "HTTPSプロキシ",
"extensions": "拡張機能",
"general": "全般",
"settings": "設定",
"modelProviders": "モデルプロバイダー",
"appearance": "外観",
"privacy": "プライバシー",
"keyboardShortcuts": "ショートカット",
"newChat": "新しいチャット",
"favorites": "お気に入り",
"recents": "最近の項目",
"hub": "ハブ",
"helpSupport": "ヘルプとサポート",
"helpUsImproveJan": "Janの改善にご協力ください",
"unstarAll": "すべてのスターを解除",
"unstar": "スターを解除",
"deleteAll": "すべて削除",
"star": "スターを付ける",
"rename": "名前を変更",
"delete": "削除",
"copied": "コピーしました!",
"dataFolder": "データフォルダ",
"others": "その他",
"language": "言語",
"login": "ログイン",
"loginWith": "{{provider}}でログイン",
"loginFailed": "ログインに失敗しました",
"logout": "ログアウト",
"loggingOut": "ログアウト中...",
"loggedOut": "正常にログアウトしました",
"logoutFailed": "ログアウトに失敗しました",
"profile": "プロフィール",
"reset": "リセット",
"search": "検索",
"name": "名前",
"cancel": "キャンセル",
"create": "作成",
"save": "保存",
"edit": "編集",
"copy": "コピー",
"back": "戻る",
"close": "閉じる",
"next": "次へ",
"finish": "完了",
"skip": "スキップ",
"allow": "許可",
"deny": "拒否",
"start": "開始",
"stop": "停止",
"preview": "プレビュー",
"compactWidth": "コンパクト幅",
"fullWidth": "全幅",
"dark": "ダーク",
"light": "ライト",
"system": "システム",
"auto": "自動",
"english": "英語",
"medium": "中",
"newThread": "新しいスレッド",
"noResultsFound": "結果が見つかりません",
"noThreadsYet": "スレッドはまだありません",
"noThreadsYetDesc": "新しい会話を始めると、ここにスレッドの履歴が表示されます。",
"downloads": "ダウンロード",
"downloading": "ダウンロード中",
"cancelDownload": "ダウンロードをキャンセル",
"downloadCancelled": "ダウンロードがキャンセルされました",
"downloadComplete": "ダウンロード完了",
"thinking": "考え中...",
"thought": "思考",
"callingTool": "ツールを呼び出し中",
"completed": "完了",
"image": "画像",
"vision": "画像認識",
"embeddings": "埋め込み",
"tools": "ツール",
"webSearch": "ウェブ検索",
"reasoning": "推論",
"selectAModel": "モデルを選択",
"noToolsAvailable": "利用可能なツールはありません",
"noModelsFoundFor": "\"{{searchValue}}\"に一致するモデルが見つかりません",
"failedToLoadModels": "モデルの読み込みに失敗しました",
"noModels": "モデルが見つかりません",
"customAvatar": "カスタムアバター",
"editAssistant": "アシスタントを編集",
"jan": "Jan",
"metadata": "メタデータ",
"regenerate": "再生成",
"threadImage": "スレッド画像",
"editMessage": "メッセージを編集",
"deleteMessage": "メッセージを削除",
"deleteThread": "スレッドを削除",
"renameThread": "スレッド名を変更",
"threadTitle": "スレッドのタイトル",
"deleteAllThreads": "すべてのスレッドを削除",
"allThreadsUnfavorited": "すべてのスレッドのお気に入りを解除しました",
"deleteAllThreadsConfirm": "本当にすべてのスレッドを削除しますか?この操作は元に戻せません。",
"addProvider": "プロバイダーを追加",
"addOpenAIProvider": "OpenAIプロバイダーを追加",
"enterNameForProvider": "プロバイダーの名前を入力してください",
"providerAlreadyExists": "プロバイダー名「{{name}}」はすでに存在します。別の名前を選択してください。",
"adjustFontSize": "フォントサイズを調整",
"changeLanguage": "言語を変更",
"editTheme": "テーマを編集",
"editCodeBlockStyle": "コードブロックのスタイルを編集",
"editServerHost": "サーバーホストを編集",
"pickColorWindowBackground": "ウィンドウの背景色を選択",
"pickColorAppMainView": "アプリのメインビューの色を選択",
"pickColorAppPrimary": "アプリのプライマリカラーを選択",
"pickColorAppAccent": "アプリのアクセントカラーを選択",
"pickColorAppDestructive": "アプリの破壊的アクションの色を選択",
"apiKeyRequired": "APIキーが必要です",
"enterTrustedHosts": "信頼できるホストを入力してください",
"placeholder": {
"chatInput": "何でも聞いてください..."
},
"confirm": "確認",
"continue": "続ける",
"loading": "読み込み中...",
"error": "エラー",
"success": "成功",
"warning": "警告",
"conversationNotAvailable": "会話を利用できません",
"conversationNotAvailableDescription": "アクセスしようとしている会話は利用できないか、削除されています。",
"temporaryChat": "一時的なチャット",
"temporaryChatTooltip": "一時的なチャットは履歴に表示されません",
"noResultsFoundDesc": "検索に一致するチャットが見つかりませんでした。別のキーワードをお試しください。",
"searchModels": "モデルを検索...",
"searchStyles": "スタイルを検索...",
"createAssistant": "アシスタントを作成",
"enterApiKey": "APIキーを入力",
"scrollToBottom": "一番下までスクロール",
"generateAiResponse": "AIの応答を生成",
"addModel": {
"title": "モデルを追加",
"modelId": "モデルID",
"enterModelId": "モデルIDを入力",
"addModel": "モデルを追加",
"description": "プロバイダーに新しいモデルを追加します",
"exploreModels": "プロバイダーのモデルリストを見る"
},
"mcpServers": {
"editServer": "サーバーを編集",
"addServer": "サーバーを追加",
"serverName": "サーバー名",
"enterServerName": "サーバー名を入力",
"command": "コマンド",
"enterCommand": "コマンドを入力",
"arguments": "引数",
"argument": "引数 {{index}}",
"envVars": "環境変数",
"key": "キー",
"value": "値",
"save": "保存"
},
"deleteServer": {
"title": "サーバーを削除",
"delete": "削除"
},
"editJson": {
"errorParse": "JSONの解析に失敗しました",
"errorPaste": "JSONの貼り付けに失敗しました",
"errorFormat": "無効なJSON形式です",
"titleAll": "すべてのサーバー構成を編集",
"placeholder": "JSON構成を入力...",
"save": "保存"
},
"editModel": {
"title": "モデルを編集: {{modelId}}",
"description": "以下のオプションを切り替えて、モデルの機能を設定します。",
"capabilities": "機能",
"tools": "ツール",
"vision": "画像認識",
"embeddings": "埋め込み",
"notAvailable": "まだ利用できません"
},
"outOfContextError": {
"truncateInput": "入力を切り詰める",
"title": "コンテキストエラー",
"description": "このチャットはAIのメモリ制限に近づいています。ホワイトボードがいっぱいになるようなものです。メモリウィンドウコンテキストサイズを拡張して記憶容量を増やすことができますが、コンピュータのメモリ使用量が増える可能性があります。また、入力を切り詰めることもできます。これは、新しいメッセージのためのスペースを確保するために、チャット履歴の一部を忘れることを意味します。",
"increaseContextSizeDescription": "コンテキストサイズを増やしますか?",
"increaseContextSize": "コンテキストサイズを増やす"
},
"toolApproval": {
"title": "ツール権限のリクエスト",
"description": "アシスタントは<strong>{{toolName}}</strong>を使用しようとしています",
"securityNotice": "信頼できるツールのみを許可してください。ツールはあなたのシステムやデータにアクセスする可能性があります。",
"deny": "拒否",
"allowOnce": "一度だけ許可",
"alwaysAllow": "常に許可"
},
"deleteModel": {
"title": "モデルを削除: {{modelId}}",
"description": "本当にこのモデルを削除しますか?この操作は元に戻せません。",
"success": "モデル {{modelId}} は完全に削除されました。",
"cancel": "キャンセル",
"delete": "削除"
},
"deleteProvider": {
"title": "プロバイダーを削除",
"description": "このプロバイダーとすべてのモデルを削除します。この操作は元に戻せません。",
"success": "プロバイダー {{provider}} は完全に削除されました。",
"confirmTitle": "プロバイダーを削除: {{provider}}",
"confirmDescription": "本当にこのプロバイダーを削除しますか?この操作は元に戻せません。",
"cancel": "キャンセル",
"delete": "削除"
},
"modelSettings": {
"title": "モデル設定 - {{modelId}}",
"description": "パフォーマンスと動作を最適化するためにモデル設定を構成します。"
},
"dialogs": {
"changeDataFolder": {
"title": "データフォルダの場所を変更",
"description": "本当にデータフォルダの場所を変更しますか?これにより、すべてのデータが新しい場所に移動し、アプリケーションが再起動します。",
"currentLocation": "現在の場所:",
"newLocation": "新しい場所:",
"cancel": "キャンセル",
"changeLocation": "場所を変更"
},
"deleteAllThreads": {
"title": "すべてのスレッドを削除",
"description": "すべてのスレッドが削除されます。この操作は元に戻せません。"
},
"deleteThread": {
"description": "本当にこのスレッドを削除しますか?この操作は元に戻せません。"
},
"editMessage": {
"title": "メッセージを編集"
},
"messageMetadata": {
"title": "メッセージメタデータ"
}
},
"projects": {
"title": "プロジェクト",
"addProject": "プロジェクトを追加",
"addToProject": "プロジェクトに追加",
"removeFromProject": "プロジェクトから削除",
"createNewProject": "新しいプロジェクトを作成",
"editProject": "プロジェクトを編集",
"deleteProject": "プロジェクトを削除",
"projectName": "プロジェクト名",
"enterProjectName": "プロジェクト名を入力...",
"noProjectsAvailable": "利用可能なプロジェクトはありません",
"noProjectsYet": "プロジェクトはまだありません",
"noProjectsYetDesc": "「プロジェクトを追加」ボタンをクリックして、新しいプロジェクトを開始してください。",
"projectNotFound": "プロジェクトが見つかりません",
"projectNotFoundDesc": "お探しのプロジェクトは存在しないか、削除されています。",
"deleteProjectDialog": {
"title": "プロジェクトを削除",
"description": "本当にこのプロジェクトを削除しますか?この操作は元に戻せません。",
"deleteButton": "削除",
"successWithName": "プロジェクト「{{projectName}}」を正常に削除しました",
"successWithoutName": "プロジェクトを正常に削除しました",
"error": "プロジェクトの削除に失敗しました。もう一度お試しください。",
"ariaLabel": "{{projectName}}を削除"
},
"addProjectDialog": {
"createTitle": "新しいプロジェクトを作成",
"editTitle": "プロジェクトを編集",
"nameLabel": "プロジェクト名",
"namePlaceholder": "プロジェクト名を入力...",
"createButton": "作成",
"updateButton": "更新",
"alreadyExists": "プロジェクト「{{projectName}}」はすでに存在します",
"createSuccess": "プロジェクト「{{projectName}}」を正常に作成しました",
"renameSuccess": "プロジェクト名を「{{oldName}}」から「{{newName}}」に正常に変更しました"
},
"noConversationsIn": "{{projectName}}には会話がありません",
"startNewConversation": "以下で{{projectName}}との新しい会話を開始します",
"conversationsIn": "{{projectName}}での会話",
"conversationsDescription": "会話をクリックしてチャットを続けるか、以下で新しい会話を開始してください。",
"thread": "スレッド",
"threads": "スレッド",
"updated": "更新日時:",
"collapseThreads": "スレッドを折りたたむ",
"expandThreads": "スレッドを展開する",
"update": "更新"
},
"toast": {
"allThreadsUnfavorited": {
"title": "すべてのスレッドのお気に入りを解除しました",
"description": "すべてのスレッドがお気に入りから削除されました。"
},
"deleteAllThreads": {
"title": "すべてのスレッドを削除",
"description": "すべてのスレッドが完全に削除されました。"
},
"renameThread": {
"title": "スレッド名を変更",
"description": "スレッドのタイトルが「{{title}}」に変更されました"
},
"deleteThread": {
"title": "スレッドを削除",
"description": "このスレッドは完全に削除されました。"
},
"editMessage": {
"title": "メッセージを編集",
"description": "メッセージは正常に編集されました。モデルの応答をお待ちください。"
},
"appUpdateDownloaded": {
"title": "アプリの更新をダウンロードしました",
"description": "アプリの更新は正常にダウンロードされました。"
},
"appUpdateDownloadFailed": {
"title": "アプリの更新のダウンロードに失敗しました",
"description": "アプリの更新のダウンロードに失敗しました。もう一度お試しください。"
},
"downloadComplete": {
"title": "ダウンロード完了",
"description": "{{item}}がダウンロードされました"
},
"downloadCancelled": {
"title": "ダウンロードがキャンセルされました",
"description": "ダウンロード処理はキャンセルされました"
},
"downloadFailed": {
"title": "ダウンロードに失敗しました",
"description": "{{item}}のダウンロードに失敗しました"
},
"modelValidationStarted": {
"title": "モデルを検証中",
"description": "ダウンロードしたモデル「{{modelId}}」を正常に完了しました。整合性を検証しています..."
},
"modelValidationFailed": {
"title": "モデルの検証に失敗しました",
"description": "ダウンロードしたモデル「{{modelId}}」は整合性検証に失敗し、削除されました。ファイルが破損または改ざんされている可能性があります。"
},
"downloadAndVerificationComplete": {
"title": "ダウンロード完了",
"description": "モデル「{{item}}」は正常にダウンロードおよび検証されました"
},
"projectCreated": {
"title": "プロジェクトが作成されました",
"description": "プロジェクト「{{projectName}}」は正常に作成されました"
},
"projectRenamed": {
"title": "プロジェクト名が変更されました",
"description": "プロジェクト名は「{{oldName}}」から「{{newName}}」に正常に変更されました"
},
"projectDeleted": {
"title": "プロジェクトが削除されました",
"description": "プロジェクト「{{projectName}}」は正常に削除されました"
},
"projectAlreadyExists": {
"title": "プロジェクトはすでに存在します",
"description": "プロジェクト「{{projectName}}」はすでに存在します"
},
"projectDeleteFailed": {
"title": "削除に失敗しました",
"description": "プロジェクトの削除に失敗しました。もう一度お試しください。"
},
"threadAssignedToProject": {
"title": "スレッドが割り当てられました",
"description": "スレッドは「{{projectName}}」に正常に割り当てられました"
},
"threadRemovedFromProject": {
"title": "スレッドが削除されました",
"description": "スレッドは「{{projectName}}」から正常に削除されました"
}
}
}


@@ -1,31 +0,0 @@
{
"sortNewest": "新着順",
"sortMostDownloaded": "ダウンロード数順",
"use": "使用",
"download": "ダウンロード",
"downloaded": "ダウンロード済み",
"loadingModels": "モデルを読み込み中...",
"noModels": "モデルが見つかりません",
"by": "作成者",
"downloads": "ダウンロード",
"variants": "バリアント",
"showVariants": "バリアントを表示",
"useModel": "このモデルを使用",
"downloadModel": "モデルをダウンロード",
"tools": "ツール",
"searchPlaceholder": "Hugging Faceでモデルを検索...",
"joyride": {
"recommendedModelTitle": "おすすめのモデル",
"recommendedModelContent": "さまざまなプロバイダーの強力なAIモデルを1か所で閲覧、ダウンロードできます。まずは、関数呼び出し、ツール統合、および研究機能に最適化されたモデルであるJan-Nanoから始めることをお勧めします。インタラクティブなAIエージェントの構築に最適です。",
"downloadInProgressTitle": "ダウンロード進行中",
"downloadInProgressContent": "モデルは現在ダウンロード中です。ここで進行状況を確認できます。完了すると、使用できるようになります。",
"downloadModelTitle": "モデルをダウンロード",
"downloadModelContent": "「ダウンロード」ボタンをクリックして、モデルのダウンロードを開始します。",
"back": "戻る",
"close": "閉じる",
"lastWithDownload": "ダウンロード",
"last": "完了",
"next": "次へ",
"skip": "スキップ"
}
}


@@ -1,3 +0,0 @@
{
"noLogs": "利用可能なログはありません"
}


@@ -1,47 +0,0 @@
{
"editServer": "MCPサーバーを編集",
"addServer": "MCPサーバーを追加",
"serverName": "サーバー名",
"enterServerName": "サーバー名を入力",
"command": "コマンド",
"enterCommand": "コマンドを入力 (uvx または npx)",
"arguments": "引数",
"argument": "引数 {{index}}",
"envVars": "環境変数",
"key": "キー",
"value": "値",
"save": "保存",
"status": "ステータス",
"connected": "接続済み",
"disconnected": "切断済み",
"deleteServer": {
"title": "MCPサーバーを削除",
"description": "本当にMCPサーバー {{serverName}} を削除しますか?この操作は元に戻せません。",
"delete": "削除",
"success": "MCPサーバー {{serverName}} を正常に削除しました"
},
"editJson": {
"title": "MCPサーバーのJSONを編集: {{serverName}}",
"titleAll": "すべてのMCPサーバーのJSONを編集",
"placeholder": "JSON構成を入力",
"errorParse": "初期データの解析に失敗しました",
"errorPaste": "貼り付けたコンテンツのJSON形式が無効です",
"errorFormat": "無効なJSON形式です",
"errorServerName": "サーバー名は必須であり、空にすることはできません",
"errorMissingServerNameKey": "JSONは {\"serverName\": {config}} のように構成する必要があります - サーバー名のキーがありません",
"errorInvalidType": "サーバー '{{serverName}}' のタイプ '{{type}}' が無効です。タイプは 'stdio'、'http'、または 'sse' である必要があります",
"save": "保存"
},
"checkParams": "チュートリアルに従ってパラメータを確認してください。",
"title": "MCPサーバー",
"experimental": "実験的",
"editAllJson": "すべてのサーバーのJSONを編集",
"findMore": "その他のMCPサーバーは以下で検索してください",
"allowPermissions": "すべてのMCPツール権限を許可",
"allowPermissionsDesc": "有効にすると、すべてのMCPツール呼び出しは許可ダイアログを表示せずに自動的に承認されます。この設定は、新しいチャットを含むすべての会話にグローバルに適用されます。",
"noServers": "MCPサーバーが見つかりません",
"args": "引数",
"env": "環境変数",
"serverStatusActive": "サーバー {{serverKey}} が正常にアクティブ化されました",
"serverStatusInactive": "サーバー {{serverKey}} が正常に非アクティブ化されました"
}


@@ -1,7 +0,0 @@
{
"title": "コンテキストエラー",
"description": "このチャットはAIのメモリ制限に近づいています。ホワイトボードがいっぱいになるようなものです。メモリウィンドウコンテキストサイズを拡張して記憶容量を増やすことができますが、コンピュータのメモリ使用量が増える可能性があります。また、入力を切り詰めることもできます。これは、新しいメッセージのためのスペースを確保するために、チャット履歴の一部を忘れることを意味します。",
"increaseContextSizeDescription": "コンテキストサイズを増やしますか?",
"truncateInput": "入力を切り詰める",
"increaseContextSize": "コンテキストサイズを増やす"
}


@@ -1,5 +0,0 @@
{
"addProvider": "プロバイダーを追加",
"addOpenAIProvider": "OpenAIプロバイダーを追加",
"enterNameForProvider": "プロバイダーの名前を入力してください"
}


@@ -1,74 +0,0 @@
{
"joyride": {
"chooseProviderTitle": "プロバイダーを選択",
"chooseProviderContent": "使用したいプロバイダーを選択し、そのAPIキーにアクセスできることを確認してください。",
"getApiKeyTitle": "APIキーを取得",
"getApiKeyContent": "プロバイダーのダッシュボードにログインして、APIキーを見つけるか生成してください。",
"insertApiKeyTitle": "APIキーを挿入",
"insertApiKeyContent": "ここにAPIキーを貼り付けて、プロバイダーに接続してアクティベートしてください。",
"back": "戻る",
"close": "閉じる",
"last": "完了",
"next": "次へ",
"skip": "スキップ"
},
"refreshModelsError": "モデルを取得するには、プロバイダーにベースURLとAPIキーが設定されている必要があります。",
"refreshModelsSuccess": "{{provider}}から{{count}}個の新しいモデルを追加しました。",
"noNewModels": "新しいモデルは見つかりませんでした。利用可能なすべてのモデルは既に追加されています。",
"refreshModelsFailed": "{{provider}}からのモデルの取得に失敗しました。APIキーとベースURLを確認してください。",
"models": "モデル",
"refreshing": "更新中...",
"refresh": "更新",
"import": "インポート",
"importModelSuccess": "モデル{{provider}}は正常にインポートされました。",
"importModelError": "モデルのインポートに失敗しました:",
"stop": "停止",
"start": "開始",
"noModelFound": "モデルが見つかりません",
"noModelFoundDesc": "利用可能なモデルはここにリストされます。まだモデルがない場合は、ハブにアクセスしてダウンロードしてください。",
"configuration": "設定",
"apiEndpoint": "APIエンドポイント",
"testConnection": "接続をテスト",
"addModel": {
"title": "新しいモデルを追加",
"description": "{{provider}}プロバイダーに新しいモデルを追加します。",
"modelId": "モデルID",
"enterModelId": "モデルIDを入力",
"exploreModels": "{{provider}}のモデルリストを見る",
"addModel": "モデルを追加",
"modelExists": "モデルは既に存在します",
"modelExistsDesc": "別のモデルIDを選択してください。"
},
"deleteModel": {
"title": "モデルを削除: {{modelId}}",
"description": "本当にこのモデルを削除しますか?この操作は元に戻せません。",
"success": "モデル {{modelId}} は完全に削除されました。",
"cancel": "キャンセル",
"delete": "削除"
},
"deleteProvider": {
"title": "プロバイダーを削除",
"description": "このプロバイダーとすべてのモデルを削除します。この操作は元に戻せません。",
"success": "プロバイダー {{provider}} は完全に削除されました。",
"confirmTitle": "プロバイダーを削除: {{provider}}",
"confirmDescription": "本当にこのプロバイダーを削除しますか?この操作は元に戻せません。",
"cancel": "キャンセル",
"delete": "削除"
},
"editModel": {
"title": "モデルを編集: {{modelId}}",
"description": "以下のオプションを切り替えて、モデルの機能を設定します。",
"capabilities": "機能",
"tools": "ツール",
"vision": "画像認識",
"embeddings": "埋め込み",
"notAvailable": "まだ利用できません",
"warning": {
"title": "注意して進めてください",
"description": "モデルの機能を変更すると、パフォーマンスや機能に影響を与える可能性があります。不正な設定は、予期しない動作やエラーを引き起こす可能性があります。"
}
},
"addProvider": "プロバイダーを追加",
"addOpenAIProvider": "OpenAIプロバイダーを追加",
"enterNameForProvider": "プロバイダーの名前を入力してください"
}

Some files were not shown because too many files have changed in this diff.