bring dev changes to web dev (#6557)

* fix: avoid error when validating nested DOM

* fix: correct context shift flag handling in LlamaCPP extension (#6404) (#6431)

* fix: correct context shift flag handling in LlamaCPP extension

The previous implementation added the `--no-context-shift` flag when `cfg.ctx_shift` was disabled, which conflicted with the llama.cpp CLI where the presence of `--context-shift` enables the feature.
The logic is updated to push `--context-shift` only when `cfg.ctx_shift` is true, ensuring the extension passes the correct argument and behaves as expected.
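
A minimal sketch of the corrected behavior — the config shape and the arg-building helper below are assumptions for illustration, not the extension's actual code:

```typescript
// Hypothetical config shape for illustration only.
interface LlamacppConfig {
  ctx_shift: boolean
}

function buildContextShiftArgs(cfg: LlamacppConfig): string[] {
  const args: string[] = []
  // llama.cpp enables the feature when --context-shift is present, so the
  // flag is pushed only when ctx_shift is true (previously --no-context-shift
  // was pushed when the feature was disabled, which conflicted with the CLI).
  if (cfg.ctx_shift) {
    args.push('--context-shift')
  }
  return args
}
```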

* feat: detect model out of context during generation

---------

Co-authored-by: Dinh Long Nguyen <dinhlongviolin1@gmail.com>

* chore: add install-rust-targets step for macOS universal builds

* fix: make install-rust-targets a dependency

* enhancement: copy MCP permission

* chore: capitalize action button

* Update web-app/src/locales/en/tool-approval.json

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* chore: simplify macos workflow

* fix: KVCache size calculation and refactor (#6438)

- Removed the unused `getKVCachePerToken` helper and replaced it with a unified `estimateKVCache` that returns both total size and per‑token size.
- Fixed the KV cache size calculation to account for all layers, correcting previous under‑estimation.
- Added proper clamping of user‑requested context lengths to the model’s maximum.
- Refactored VRAM budgeting: introduced explicit reserves, fixed engine overhead, and separate multipliers for VRAM and system RAM based on memory mode.
- Implemented a more robust planning flow with clear GPU, Hybrid, and CPU pathways, including fallback configurations when resources are insufficient.
- Updated default context length handling and safety buffers to prevent OOM situations.
- Adjusted usable memory percentage to 90% and refined logging for easier debugging.
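
The per-layer accounting above can be sketched as follows — field names and the f16 assumption are ours, not the extension's:

```typescript
interface KVCacheEstimate {
  totalBytes: number
  perTokenBytes: number
}

// Illustrative estimateKVCache: counts K and V for every layer and clamps
// the requested context length to the model maximum.
function estimateKVCache(
  nLayers: number,
  nKvHeads: number,
  headDim: number,
  requestedCtx: number,
  maxCtx: number,
  bytesPerElement = 2 // assuming f16 cache entries
): KVCacheEstimate {
  // Clamp the user-requested context length to the model's maximum.
  const ctx = Math.min(requestedCtx, maxCtx)
  // K and V are stored for every layer — counting all layers fixes the
  // earlier under-estimation.
  const perTokenBytes = nLayers * 2 * nKvHeads * headDim * bytesPerElement
  return { totalBytes: perTokenBytes * ctx, perTokenBytes }
}
```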

* fix: detect allocation failures as out-of-memory errors (#6459)

The Llama.cpp backend can emit the phrase “failed to allocate” when it runs out of memory.
Adding this check ensures such messages are correctly classified as out‑of‑memory errors,
providing more accurate error handling for CPU backends.
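
A sketch of the broadened classifier — the exact phrase list checked by the backend is an assumption:

```typescript
// Classify a llama.cpp log line as an out-of-memory error.
function isOutOfMemoryError(log: string): boolean {
  const msg = log.toLowerCase()
  return (
    msg.includes('out of memory') ||
    // CPU backends can report allocation failure with this phrase instead.
    msg.includes('failed to allocate')
  )
}
```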

* fix: file install pathname (backend)

* fix: set default memory mode and clean up unused import (#6463)

Use fallback value 'high' for memory_util config and remove unused GgufMetadata import.

* fix: auto update should not block popup

* fix: remove log

* fix: improve edit message with attachment image

* fix: improve edit message with attachment image

* fix: image_url type

* fix: immediate dropdown value update

* fix: linter

* fix/validate-mmproj-from-general-basename

* fix/revalidate-model-gguf

* fix: loader when importing

* fix/mcp-json-validation

* chore: update locale mcp json

* fix: new extension settings aren't populated properly (#6476)

* chore: embed webview2 bootstrapper in tauri windows

* fix: validate MCP JSON type

* chore: prevent click outside for edit dialog

* feat: add qa checklist

* chore: remove old checklist

* chore: correct typo in checklist

* fix: correct memory suitability checks in llamacpp extension (#6504)

The previous implementation mixed model size and VRAM checks, leading to inaccurate status reporting (e.g., false RED results).
- Simplified import statement for `readGgufMetadata`.
- Fixed RAM/VRAM comparison by removing unnecessary parentheses.
- Replaced ambiguous `modelSize > usableTotalMemory` check with a clear `totalRequired > usableTotalMemory` hard‑limit condition.
- Refactored the status logic to explicitly handle the CPU‑GPU hybrid scenario, returning **YELLOW** when the total requirement fits combined memory but exceeds VRAM.
- Updated comments for better readability and maintenance.
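
The decision flow described above can be sketched as follows — the function and parameter names are hypothetical:

```typescript
type Suitability = 'GREEN' | 'YELLOW' | 'RED'

// Mirrors the GREEN/YELLOW/RED logic: RED only when the total requirement
// exceeds all usable memory, YELLOW for the CPU-GPU hybrid case.
function checkMemorySuitability(
  totalRequired: number,    // model weights + KV cache + overhead
  usableVram: number,
  usableTotalMemory: number // VRAM + system RAM budget
): Suitability {
  // Hard limit: does not fit even when spilling to system RAM.
  if (totalRequired > usableTotalMemory) return 'RED'
  // Fits in combined memory but not in VRAM alone: hybrid scenario.
  if (totalRequired > usableVram) return 'YELLOW'
  return 'GREEN'
}
```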

* fix: thread rerender issue

* chore: clean up console log

* chore: uncomment irrelevant fix

* fix: linter

* chore: remove duplicated block

* fix: tests

* Merge pull request #6469 from menloresearch/fix/deeplink-not-work-on-windows

fix: deeplink issue on Windows

* fix: reduce unnecessary rerenders due to current thread retrieval

* fix: reduce app layout rerender due to router state update

* fix: avoid the entire app layout re-render on route change

* clean: unused import

* Merge pull request #6514 from menloresearch/feat/web-gtag

feat: Add GA Measurement and change keyboard bindings on web

* chore: update build tauri commands

* chore: remove unused task

* fix: should not rerender thread message components when typing

* fix: re-render issue

* direct tokenspeed access

* chore: sync latest

* feat: Add Jan API server Swagger UI (#6502)

* feat: Add Jan API server Swagger UI

- Serve OpenAPI spec (`static/openapi.json`) directly from the proxy server.
- Implement Swagger UI assets (`swagger-ui.css`, `swagger-ui-bundle.js`, `favicon.ico`) and a simple HTML wrapper under `/docs`.
- Extend the proxy whitelist to include Swagger UI routes.
- Add routing logic for `/openapi.json`, `/docs`, and Swagger UI static files.
- Update whitelisted paths and integrate CORS handling for the new endpoints.
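
A route-dispatch sketch for the endpoints listed above — the handler shape and asset mapping are invented for illustration (and a later commit moves the UI to the root path):

```typescript
// Map a request path to a static Swagger UI asset, or null to fall
// through to the API proxy.
function routeDocsRequest(path: string): string | null {
  const docsRoutes: Record<string, string> = {
    '/openapi.json': 'static/openapi.json',
    '/docs': 'docs.html',
    '/docs/swagger-ui.css': 'swagger-ui.css',
    '/docs/swagger-ui-bundle.js': 'swagger-ui-bundle.js',
    '/docs/favicon.ico': 'favicon.ico',
  }
  return docsRoutes[path] ?? null
}
```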

* feat: serve Swagger UI at root path

The Swagger UI endpoint previously lived under `/docs`. The route handling and
exclusion list have been updated so the UI is now served directly at `/`.
This simplifies access, aligns with the expected root URL in the Tauri
frontend, and removes the now‑unused `/docs` path handling.

* feat: add model loading state and translations for local API server

Implemented a loading indicator for model startup, updated the start/stop button to reflect model loading and server starting states, and disabled interactions while pending. Added new translation keys (`loadingModel`, `startingServer`) across all supported locales (en, de, id, pl, vn, zh-CN, zh-TW) and integrated them into the UI. Included a small delay after model start to ensure backend state consistency. This improves user feedback and prevents race conditions during server initialization.

* fix: tests

* fix: linter

* fix: build

* docs: update changelog for v0.6.10

* fix(number-input): preserve '0.0x' format when typing (#6520)

* docs: update url for gifs and videos

* chore: update url for jan-v1 docs

* fix: Typo in openapi JSON (#6528)

* enhancement: toaster delete mcp server

* Update 2025-09-18-auto-optimize-vision-imports.mdx

* Merge pull request #6475 from menloresearch/feat/bump-tokenjs

feat: fix remote provider vision capability

* fix: prevent consecutive messages with same role (#6544)

* fix: prevent consecutive messages with same role

* fix: tests

* fix: first message should not be assistant

* fix: tests
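
The two rules above can be sketched as a normalization pass — the message shape and the merge-by-concatenation policy are assumptions:

```typescript
interface ChatMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}

// Drop a leading assistant turn and collapse consecutive same-role messages.
function normalizeMessages(messages: ChatMessage[]): ChatMessage[] {
  const out: ChatMessage[] = []
  for (const m of messages) {
    // First non-system message must not be an assistant turn.
    const nonSystemCount = out.filter((x) => x.role !== 'system').length
    if (nonSystemCount === 0 && m.role === 'assistant') continue
    const last = out[out.length - 1]
    if (last && last.role === m.role && m.role !== 'system') {
      // Merge consecutive messages with the same role.
      last.content = `${last.content}\n${m.content}`
      continue
    }
    out.push(m)
  }
  return out
}
```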

* feat: Prompt progress when streaming (#6503)

* feat: Prompt progress when streaming

- BE changes:
    - Add a `return_progress` flag to `chatCompletionRequest` and a corresponding `prompt_progress` payload in `chatCompletionChunk`. Introduce `chatCompletionPromptProgress` interface to capture cache, processed, time, and total token counts.
    - Update the Llamacpp extension to always request progress data when streaming, enabling UI components to display real‑time generation progress and leverage llama.cpp’s built‑in progress reporting.
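
The field names below match the `chatCompletionPromptProgress` interface added in this PR; the percentage helper itself is our illustration (cached tokens are reported separately and not counted as freshly processed here):

```typescript
interface ChatCompletionPromptProgress {
  cache: number      // prompt tokens served from cache
  processed: number  // prompt tokens processed so far
  time_ms: number    // time spent on prompt processing
  total: number      // total prompt tokens
}

// Convert a progress payload to a 0-100 percentage for the UI.
function promptProgressPercent(p: ChatCompletionPromptProgress): number {
  if (p.total === 0) return 100
  return Math.min(100, Math.round((p.processed / p.total) * 100))
}
```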

* Make return_progress optional

* chore: update ui prompt progress before streaming content

* chore: remove log

* chore: remove progress when percentage >= 100

* chore: set timeout prompt progress

* chore: move prompt progress outside streaming content

* fix: tests

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Louis <louis@jan.ai>

* chore: add ci for web stag (#6550)

* feat: add getTokensCount method to compute token usage (#6467)

* feat: add getTokensCount method to compute token usage

Implemented a new async `getTokensCount` function in the LLaMA.cpp extension.
The method validates the model session, checks process health, applies the request template, and tokenizes the resulting prompt to return the token count. Includes detailed error handling for crashed models and API failures, enabling callers to assess token usage before sending completions.
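
A sketch of that flow with the dependencies injected — shown synchronously for brevity (the real method is async), and all names here are illustrative:

```typescript
// Count tokens for a templated prompt, guarding against a crashed model.
function getTokensCount(
  isAlive: () => boolean,                     // process-health check
  applyTemplate: (msgs: string[]) => string,  // request template application
  tokenize: (prompt: string) => number[],     // tokenizer call
  messages: string[]
): number {
  if (!isAlive()) throw new Error('Model process has crashed')
  const prompt = applyTemplate(messages)
  return tokenize(prompt).length
}
```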

* Fix: typos

* chore: update ui token usage

* chore: remove unused code

* feat: add image token handling for multimodal LlamaCPP models

Implemented support for counting image tokens when using vision-enabled models:
- Extended `SessionInfo` with optional `mmprojPath` to store the multimodal project file.
- Propagated `mmproj_path` from the Tauri plugin into the session info.
- Added import of `chatCompletionRequestMessage` and enhanced token calculation logic in the LlamaCPP extension:
- Detects image content in messages.
- Reads GGUF metadata from `mmprojPath` to compute accurate image token counts.
- Provides a fallback estimation if metadata reading fails.
- Returns the sum of text and image tokens.
- Introduced helper methods `calculateImageTokens` and `estimateImageTokensFallback`.
- Minor clean‑ups such as comment capitalization and debug logging.
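
A sketch of the estimate — treating the metadata value as the per-image token count is our reading of the fix (refined in a later commit), and the fallback constant is invented:

```typescript
// Estimate image tokens from mmproj metadata, with a coarse fallback
// when the metadata cannot be read.
function calculateImageTokens(
  projectionDim: number | undefined, // e.g. clip.vision.projection_dim
  imageCount: number
): number {
  const perImage = projectionDim ?? 1024 // fallback estimate (assumed value)
  return perImage * imageCount
}
```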

* chore: update FE send params message include content type image_url

* fix mmproj path from session info and num tokens calculation

* fix: Correct image token estimation calculation in llamacpp extension

This commit addresses an inaccurate token count for images in the llama.cpp extension.

The previous logic incorrectly calculated the token count based on image patch size and dimensions. This has been replaced with a more precise method that uses the `clip.vision.projection_dim` value from the model metadata.

Additionally, unnecessary debug logging was removed, and a new log was added to show the mmproj metadata for improved visibility.

* fix per image calc

* fix: crash due to force unwrap

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Louis <louis@jan.ai>

* fix: custom fetch for all providers (#6538)

* fix: custom fetch for all providers

* fix: run in development should use built-in fetch

* add full-width model names (#6350)

* fix: prevent relocation to root directories (#6547)

* fix: prevent relocation to root directories
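
A minimal sketch of the guard — the app's actual validation may differ:

```typescript
// Reject relocating the data folder to a filesystem root
// ('/' on POSIX, 'C:\' style on Windows).
function isRootDirectory(p: string): boolean {
  const normalized = p.replace(/\\/g, '/').replace(/\/+$/, '')
  // '' covers POSIX '/'; 'C:' covers Windows drive roots.
  return normalized === '' || /^[A-Za-z]:$/.test(normalized)
}
```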

* Update web-app/src/locales/zh-TW/settings.json

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* feat: web remote conversation (#6554)

* feat: implement conversation endpoint

* use conversation aware endpoint

* fetch message correctly

* preserve first message

* fix logout

* fix broadcast issue locally + auth not refreshing profile on other tabs + clean up and sync messages

* add is dev tag

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Akarshan Biswas <akarshan@menlo.ai>
Co-authored-by: Minh141120 <minh.itptit@gmail.com>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Nguyen Ngoc Minh <91668012+Minh141120@users.noreply.github.com>
Co-authored-by: Louis <louis@jan.ai>
Co-authored-by: Bui Quang Huy <34532913+LazyYuuki@users.noreply.github.com>
Co-authored-by: Roushan Singh <github.rtron18@gmail.com>
Co-authored-by: hiento09 <136591877+hiento09@users.noreply.github.com>
Co-authored-by: Alexey Haidamaka <gdmkaa@gmail.com>
This commit is contained in:
Dinh Long Nguyen 2025-09-23 15:13:15 +07:00 committed by GitHub
commit 7413f1354f
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
226 changed files with 5795 additions and 2334 deletions

View File

@@ -11,6 +11,8 @@ on:
jobs:
build-and-preview:
runs-on: [ubuntu-24-04-docker]
env:
JAN_API_BASE: "https://api-dev.jan.ai/v1"
permissions:
pull-requests: write
contents: write
@@ -50,7 +52,7 @@ jobs:
- name: Build docker image
run: |
docker build -t ${{ steps.vars.outputs.FULL_IMAGE }} .
docker build --build-arg JAN_API_BASE=${{ env.JAN_API_BASE }} -t ${{ steps.vars.outputs.FULL_IMAGE }} .
- name: Push docker image
if: github.event_name == 'push'

View File

@@ -13,7 +13,8 @@ jobs:
deployments: write
pull-requests: write
env:
JAN_API_BASE: "https://api.jan.ai/jan/v1"
JAN_API_BASE: "https://api.jan.ai/v1"
GA_MEASUREMENT_ID: "G-YK53MX8M8M"
CLOUDFLARE_PROJECT_NAME: "jan-server-web"
steps:
- uses: actions/checkout@v4
@@ -41,6 +42,9 @@ jobs:
- name: Install dependencies
run: make config-yarn && yarn install && yarn build:core && make build-web-app
env:
JAN_API_BASE: ${{ env.JAN_API_BASE }}
GA_MEASUREMENT_ID: ${{ env.GA_MEASUREMENT_ID }}
- name: Publish to Cloudflare Pages Production
uses: cloudflare/pages-action@v1

View File

@@ -0,0 +1,60 @@
name: Jan Web Server build image and push to Harbor Registry
on:
push:
branches:
- stag-web
pull_request:
branches:
- stag-web
jobs:
build-and-preview:
runs-on: [ubuntu-24-04-docker]
env:
JAN_API_BASE: "https://api-stag.jan.ai/v1"
permissions:
pull-requests: write
contents: write
steps:
- name: Checkout source repo
uses: actions/checkout@v4
- name: Login to Harbor Registry
uses: docker/login-action@v3
with:
registry: registry.menlo.ai
username: ${{ secrets.HARBOR_USERNAME }}
password: ${{ secrets.HARBOR_PASSWORD }}
- name: Install dependencies
run: |
(type -p wget >/dev/null || (sudo apt update && sudo apt install wget -y)) \
&& sudo mkdir -p -m 755 /etc/apt/keyrings \
&& out=$(mktemp) && wget -nv -O$out https://cli.github.com/packages/githubcli-archive-keyring.gpg \
&& cat $out | sudo tee /etc/apt/keyrings/githubcli-archive-keyring.gpg > /dev/null \
&& sudo chmod go+r /etc/apt/keyrings/githubcli-archive-keyring.gpg \
&& sudo mkdir -p -m 755 /etc/apt/sources.list.d \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
&& sudo apt update
sudo apt-get install -y jq gettext
- name: Set image tag
id: vars
run: |
if [[ "${{ github.event_name }}" == "pull_request" ]]; then
IMAGE_TAG="web:preview-${{ github.sha }}"
else
IMAGE_TAG="web:stag-${{ github.sha }}"
fi
echo "IMAGE_TAG=${IMAGE_TAG}" >> $GITHUB_OUTPUT
echo "FULL_IMAGE=registry.menlo.ai/jan-server/${IMAGE_TAG}" >> $GITHUB_OUTPUT
- name: Build docker image
run: |
docker build --build-arg JAN_API_BASE=${{ env.JAN_API_BASE }} -t ${{ steps.vars.outputs.FULL_IMAGE }} .
- name: Push docker image
if: github.event_name == 'push'
run: |
docker push ${{ steps.vars.outputs.FULL_IMAGE }}

View File

@@ -89,7 +89,6 @@ jobs:
- name: Build app
run: |
rustup target add x86_64-apple-darwin
make build
env:
APP_PATH: '.'

View File

@@ -92,31 +92,6 @@ jobs:
run: |
cargo install ctoml
- name: Create bun and uv universal
run: |
mkdir -p ./src-tauri/resources/bin/
cd ./src-tauri/resources/bin/
curl -L -o bun-darwin-x64.zip https://github.com/oven-sh/bun/releases/download/bun-v1.2.10/bun-darwin-x64.zip
curl -L -o bun-darwin-aarch64.zip https://github.com/oven-sh/bun/releases/download/bun-v1.2.10/bun-darwin-aarch64.zip
unzip bun-darwin-x64.zip
unzip bun-darwin-aarch64.zip
lipo -create -output bun-universal-apple-darwin bun-darwin-x64/bun bun-darwin-aarch64/bun
cp -f bun-darwin-aarch64/bun bun-aarch64-apple-darwin
cp -f bun-darwin-x64/bun bun-x86_64-apple-darwin
cp -f bun-universal-apple-darwin bun
curl -L -o uv-x86_64.tar.gz https://github.com/astral-sh/uv/releases/download/0.6.17/uv-x86_64-apple-darwin.tar.gz
curl -L -o uv-arm64.tar.gz https://github.com/astral-sh/uv/releases/download/0.6.17/uv-aarch64-apple-darwin.tar.gz
tar -xzf uv-x86_64.tar.gz
tar -xzf uv-arm64.tar.gz
mv uv-x86_64-apple-darwin uv-x86_64
mv uv-aarch64-apple-darwin uv-aarch64
lipo -create -output uv-universal-apple-darwin uv-x86_64/uv uv-aarch64/uv
cp -f uv-x86_64/uv uv-x86_64-apple-darwin
cp -f uv-aarch64/uv uv-aarch64-apple-darwin
cp -f uv-universal-apple-darwin uv
ls -la
- name: Update app version based on latest release tag with build number
run: |
echo "Version: ${{ inputs.new_version }}"
@@ -167,7 +142,6 @@ jobs:
- name: Build app
run: |
rustup target add x86_64-apple-darwin
make build
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -1,6 +1,9 @@
# Stage 1: Build stage with Node.js and Yarn v4
FROM node:20-alpine AS builder
ARG JAN_API_BASE=https://api-dev.jan.ai/v1
ENV JAN_API_BASE=$JAN_API_BASE
# Install build dependencies
RUN apk add --no-cache \
make \

View File

@@ -30,6 +30,17 @@ endif
yarn build:core
yarn build:extensions && yarn build:extensions-web
# Install required Rust targets for macOS universal builds
install-rust-targets:
ifeq ($(shell uname -s),Darwin)
@echo "Detected macOS, installing universal build targets..."
rustup target add x86_64-apple-darwin
rustup target add aarch64-apple-darwin
@echo "Rust targets installed successfully!"
else
@echo "Not macOS; skipping Rust target installation."
endif
dev: install-and-build
yarn download:bin
yarn download:lib
@@ -69,13 +80,8 @@ test: lint
cargo test --manifest-path src-tauri/plugins/tauri-plugin-llamacpp/Cargo.toml
cargo test --manifest-path src-tauri/utils/Cargo.toml
# Builds and publishes the app
build-and-publish: install-and-build
yarn build
# Build
build: install-and-build
yarn download:lib
build: install-and-build install-rust-targets
yarn build
clean:

View File

@@ -126,16 +126,17 @@ export abstract class BaseExtension implements ExtensionType {
settings.forEach((setting) => {
// Keep setting value
if (setting.controllerProps && Array.isArray(oldSettings))
setting.controllerProps.value = oldSettings.find(
(e: any) => e.key === setting.key
)?.controllerProps?.value
setting.controllerProps.value =
oldSettings.find((e: any) => e.key === setting.key)?.controllerProps?.value ??
setting.controllerProps.value
if ('options' in setting.controllerProps)
setting.controllerProps.options = setting.controllerProps.options?.length
? setting.controllerProps.options
: oldSettings.find((e: any) => e.key === setting.key)?.controllerProps?.options
if ('recommended' in setting.controllerProps) {
const oldRecommended = oldSettings.find((e: any) => e.key === setting.key)?.controllerProps?.recommended
if (oldRecommended !== undefined && oldRecommended !== "") {
const oldRecommended = oldSettings.find((e: any) => e.key === setting.key)
?.controllerProps?.recommended
if (oldRecommended !== undefined && oldRecommended !== '') {
setting.controllerProps.recommended = oldRecommended
}
}

View File

@@ -13,7 +13,7 @@ export interface chatCompletionRequestMessage {
}
export interface Content {
type: 'text' | 'input_image' | 'input_audio'
type: 'text' | 'image_url' | 'input_audio'
text?: string
image_url?: string
input_audio?: InputAudio
@@ -54,6 +54,8 @@ export type ToolChoice = 'none' | 'auto' | 'required' | ToolCallSpec
export interface chatCompletionRequest {
model: string // Model ID, though for local it might be implicit via sessionInfo
messages: chatCompletionRequestMessage[]
thread_id?: string // Thread/conversation ID for context tracking
return_progress?: boolean
tools?: Tool[]
tool_choice?: ToolChoice
// Core sampling parameters
@@ -119,6 +121,13 @@ export interface chatCompletionChunkChoice {
finish_reason?: 'stop' | 'length' | 'tool_calls' | 'content_filter' | 'function_call' | null
}
export interface chatCompletionPromptProgress {
cache: number
processed: number
time_ms: number
total: number
}
export interface chatCompletionChunk {
id: string
object: 'chat.completion.chunk'
@@ -126,6 +135,7 @@ export interface chatCompletionChunk {
model: string
choices: chatCompletionChunkChoice[]
system_fingerprint?: string
prompt_progress?: chatCompletionPromptProgress
}
export interface chatCompletionChoice {
@@ -173,6 +183,7 @@ export interface SessionInfo {
model_id: string //name of the model
model_path: string // path of the loaded model
api_key: string
mmproj_path?: string
}
export interface UnloadResult {

(Binary files not shown: 31 image assets removed, 288 KiB to 18 MiB each, and 2 added, 6.2 MiB and 1.3 MiB.)
View File

@@ -3,12 +3,12 @@ title: "Faster inference across: Mac, Windows, Linux, and GPUs"
version: 0.4.3
description: ""
date: 2023-12-21
ogImage: "/assets/images/changelog/Jan_v0.4.3.gif"
ogImage: "https://catalog.jan.ai/docs/Jan_v0.4.3.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title= "Faster inference across= Mac, Windows, Linux, and GPUs" date= "2023-12-21" ogImage= "/assets/images/changelog/Jan_v0.4.3.gif" />
<ChangelogHeader title= "Faster inference across= Mac, Windows, Linux, and GPUs" date= "2023-12-21" ogImage= "https://catalog.jan.ai/docs/Jan_v0.4.3.gif" />
### Highlights 🎉

View File

@@ -3,12 +3,12 @@ title: "Local API server"
version: 0.4.5
description: ""
date: 2024-01-29
ogImage: "/assets/images/changelog/Jan_v0.4.5.gif"
ogImage: "https://catalog.jan.ai/docs/Jan_v0.4.5.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title= "Local API server" date= "2024-01-29" ogImage= "/assets/images/changelog/Jan_v0.4.5.gif" />
<ChangelogHeader title= "Local API server" date= "2024-01-29" ogImage= "https://catalog.jan.ai/docs/Jan_v0.4.5.gif" />
### Highlights 🎉

View File

@@ -3,12 +3,12 @@ title: "Jan Data Folder"
version: 0.4.6
description: ""
date: 2024-02-05
ogImage: "/assets/images/changelog/jan_product_update_feature.gif"
ogImage: "https://catalog.jan.ai/docs/jan_product_update_feature.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title= "Jan Data Folder" date= "2024-02-05" ogImage= "/assets/images/changelog/jan_product_update_feature.gif" />
<ChangelogHeader title= "Jan Data Folder" date= "2024-02-05" ogImage= "https://catalog.jan.ai/docs/jan_product_update_feature.gif" />
### Highlights 🎉

View File

@@ -3,12 +3,12 @@ title: "New UI & Codestral Support"
version: 0.5.0
description: "Revamped Jan's UI to make it clearer and more user-friendly"
date: 2024-06-03
ogImage: "/assets/images/changelog/jan_v0.5.0.gif"
ogImage: "https://catalog.jan.ai/docs/jan_v0.5.0.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title= "New UI & Codestral Support" date= "2024-06-03" ogImage= "/assets/images/changelog/jan_v0.5.0.gif" />
<ChangelogHeader title= "New UI & Codestral Support" date= "2024-06-03" ogImage= "https://catalog.jan.ai/docs/jan_v0.5.0.gif" />
Revamped Jan's UI to make it clearer and more user-friendly.

View File

@@ -3,12 +3,12 @@ title: "Groq API Integration"
version: 0.4.10
description: ""
date: 2024-04-02
ogImage: "/assets/images/changelog/jan_update_groq.gif"
ogImage: "https://catalog.jan.ai/docs/jan_update_groq.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title= "Groq API Integration" date= "2024-04-02" ogImage= "/assets/images/changelog/jan_update_groq.gif" />
<ChangelogHeader title= "Groq API Integration" date= "2024-04-02" ogImage= "https://catalog.jan.ai/docs/jan_update_groq.gif" />
### Highlights 🎉

View File

@@ -3,12 +3,12 @@ title: "New Mistral Extension"
version: 0.4.11
description: "Jan has a new Mistral Extension letting you chat with larger Mistral models via Mistral API"
date: 2024-04-15
ogImage: "/assets/images/changelog/jan_mistral_api.gif"
ogImage: "https://catalog.jan.ai/docs/jan_mistral_api.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title= "New Mistral Extension" date= '2024-04-15' ogImage= "/assets/images/changelog/jan_mistral_api.gif"/>
<ChangelogHeader title= "New Mistral Extension" date= '2024-04-15' ogImage= "https://catalog.jan.ai/docs/jan_mistral_api.gif"/>
### Highlights 🎉

View File

@@ -3,29 +3,29 @@ title: 'Jan now supports Llama3 and Command R+'
version: 0.4.12
description: "Jan has added compatibility with Llama3 & Command R+"
date: 2024-04-25
ogImage: "/assets/images/changelog/jan_llama3.gif"
ogImage: "https://catalog.jan.ai/docs/jan_llama3.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title= 'Jan now supports Llama3 and Command R+' date= "2024-04-25" ogImage= '/assets/images/changelog/jan_llama3.gif' />
<ChangelogHeader title= 'Jan now supports Llama3 and Command R+' date= "2024-04-25" ogImage= 'https://catalog.jan.ai/docs/jan_llama3.gif' />
Jan has added compatibility with Metas open-source language model, `Llama3`, through the integration with `llamacpp` (thanks to [@ggerganov](https://github.com/ggerganov)).
Additionally, `Command R+` is now supported. It is the first open-source model to surpass GPT-4 on the [LMSys leaderboard](https://chat.lmsys.org/?leaderboard).
![Commandr](/assets/images/changelog/jan_cohere_commandr.gif)
![Commandr](https://catalog.jan.ai/docs/jan_cohere_commandr.gif)
## Import Huggingface models directly
Users can now import Huggingface models into Jan. Simply copy the models link from Huggingface and paste it into the search bar on Jan Hub.
![HugginFace](/assets/images/changelog/jan_hugging_face.gif)
![HugginFace](https://catalog.jan.ai/docs/jan_hugging_face.gif)
## Enhanced LaTeX understanding
Jan now understands LaTeX, allowing users to process and understand complex mathematical expressions more effectively.
![Latex](/assets/images/changelog/jan_update_latex.gif)
![Latex](https://catalog.jan.ai/docs/jan_update_latex.gif)
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.4.12).

View File

@@ -3,12 +3,12 @@ title: "Jan now supports more GGUF models"
version: 0.4.13
description: "We rebased to llamacpp b2865."
date: 2024-05-20
ogImage: "/assets/images/changelog/jan_v0.4.13_update.gif"
ogImage: "https://catalog.jan.ai/docs/jan_v0.4.13_update.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title= "Jan now supports more GGUF models" date= '2024-05-20' ogImage= "/assets/images/changelog/jan_v0.4.13_update.gif" />
<ChangelogHeader title= "Jan now supports more GGUF models" date= '2024-05-20' ogImage= "https://catalog.jan.ai/docs/jan_v0.4.13_update.gif" />
With this release, more GGUF models should work now! We rebased to llamacpp b2865!
@@ -20,12 +20,12 @@ Jan now supports `Anthropic API` models `Command R` and `Command R+`, along with
Jan supports `Martian`, a dynamic LLM router that routes between multiple models and allows users to reduce costs by 20% to 97%. Jan also supports `OpenRouter`, helping users select the best model for each query.
![New_Integrations](/assets/images/changelog/jan_v0.4.13_update.gif)
![New_Integrations](https://catalog.jan.ai/docs/jan_v0.4.13_update.gif)
## GPT-4o Access
Users can now connect to OpenAI's new model GPT-4o.
![GPT4o](/assets/images/changelog/jan_v0_4_13_openai_gpt4o.gif)
![GPT4o](https://catalog.jan.ai/docs/jan_v0_4_13_openai_gpt4o.gif)
For more details, see the [GitHub release notes.](https://github.com/menloresearch/jan/releases/tag/v0.4.13)

View File

@@ -3,12 +3,12 @@ title: "Jan now compatible with Aya 23 8B & 35B and Phi-3-Medium"
version: 0.4.14
description: "Jan now supports Cohere's Aya 23 8B & 35B and Microsoft's Phi-3-Medium."
date: 2024-05-28
ogImage: "/assets/images/changelog/jan-v0-4-14-phi3.gif"
ogImage: "https://catalog.jan.ai/docs/jan-v0-4-14-phi3.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title= "Jan now compatible with Aya 23 8B & 35B and Phi-3-Medium" date= "2024-05-28" ogImage= "/assets/images/changelog/jan-v0-4-14-phi3.gif" />
<ChangelogHeader title= "Jan now compatible with Aya 23 8B & 35B and Phi-3-Medium" date= "2024-05-28" ogImage= "https://catalog.jan.ai/docs/jan-v0-4-14-phi3.gif" />
Jan now supports `Cohere`'s new models `Aya 23 (8B)` & `Aya 23 (35B)` and `Microsoft`'s `Phi-3-Medium`.

View File

@@ -3,12 +3,12 @@ title: "Jan supports NVIDIA NIM"
version: 0.5.1
description: "Jan has integrated NVIDIA NIM and supports Qwen 2 7B"
date: 2024-06-21
ogImage: "/assets/images/changelog/jan_nvidia_nim_support.gif"
ogImage: "https://catalog.jan.ai/docs/jan_nvidia_nim_support.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title= "Jan supports NVIDIA NIM" date= '2024-06-21' ogImage= "/assets/images/changelog/jan_nvidia_nim_support.gif"/>
<ChangelogHeader title= "Jan supports NVIDIA NIM" date= '2024-06-21' ogImage= "https://catalog.jan.ai/docs/jan_nvidia_nim_support.gif"/>
## NVIDIA NIM

View File

@@ -3,12 +3,12 @@ title: "Jan supports Claude 3.5 Sonnet"
version: 0.5.2
description: "You can run Claude 3.5 Sonnet in Jan"
date: 2024-07-15
ogImage: "/assets/images/changelog/jan_supports_claude_3_5.gif"
ogImage: "https://catalog.jan.ai/docs/jan_supports_claude_3_5.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title= "Jan supports Claude 3.5 Sonnet" date= "2024-07-15" ogImage= "/assets/images/changelog/jan_supports_claude_3_5.gif" />
<ChangelogHeader title= "Jan supports Claude 3.5 Sonnet" date= "2024-07-15" ogImage= "https://catalog.jan.ai/docs/jan_supports_claude_3_5.gif" />
## Claude 3.5 Sonnet

View File

@@ -3,12 +3,12 @@ title: "v0.5.3 is out with stability improvements!"
version: 0.5.3
description: "You can run Llama 3.1 and Gemma 2 in Jan"
date: 2024-08-29
ogImage: "/assets/images/changelog/janv0.5.3.gif"
ogImage: "https://catalog.jan.ai/docs/janv0.5.3.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title= "v0.5.3 is out with stability improvements!" date="2024-09-01" ogImage= "/assets/images/changelog/janv0.5.3.gif" />
<ChangelogHeader title= "v0.5.3 is out with stability improvements!" date="2024-09-01" ogImage= "https://catalog.jan.ai/docs/janv0.5.3.gif" />
## Llama 3.1 and Gemma 2 Support

View File

@@ -3,12 +3,12 @@ title: "Jan has Stable, Beta and Nightly versions"
version: 0.5.7
description: "This release is mostly focused on bug fixes."
date: 2024-10-24
ogImage: "/assets/images/changelog/jan-v0.5.7.gif"
ogImage: "https://catalog.jan.ai/docs/jan-v0.5.7.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title= "Jan is faster now" date="2024-09-01" ogImage= "/assets/images/changelog/jan-v0.5.7.gif" />
<ChangelogHeader title= "Jan is faster now" date="2024-09-01" ogImage= "https://catalog.jan.ai/docs/jan-v0.5.7.gif" />
Highlights 🎉

@@ -3,12 +3,12 @@ title: "Model downloads & running issues fixed"
version: 0.5.9
description: "Jan v0.5.9 is here: fixing what needed fixing."
date: 2024-11-22
ogImage: "/assets/images/changelog/jan-v0.5.9.gif"
ogImage: "https://catalog.jan.ai/docs/jan-v0.5.9.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title= "Jan v0.5.9 is here:fixing what needed fixing" date="2024-11-22" ogImage= "/assets/images/changelog/jan-v0.5.9.gif" />
<ChangelogHeader title= "Jan v0.5.9 is here:fixing what needed fixing" date="2024-11-22" ogImage= "https://catalog.jan.ai/docs/jan-v0.5.9.gif" />
Jan v0.5.9 is here: fixing what needed fixing

@@ -3,12 +3,12 @@ title: "Jan supports Qwen2.5-Coder 14B & 32B"
version: 0.5.8
description: "Jan v0.5.8 is out: Jan supports Qwen2.5-Coder 14B & 32B through Cortex"
date: 2024-11-14
ogImage: "/assets/images/changelog/jan-v0.5.8.gif"
ogImage: "https://catalog.jan.ai/docs/jan-v0.5.8.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title= "Jan supports Qwen2.5-Coder 14B & 32B" date="2024-11-14" ogImage= "/assets/images/changelog/jan-v0.5.7.gif" />
<ChangelogHeader title= "Jan supports Qwen2.5-Coder 14B & 32B" date="2024-11-14" ogImage= "https://catalog.jan.ai/docs/jan-v0.5.7.gif" />
Jan v0.5.8 is out: Jan supports Qwen2.5-Coder 14B & 32B through Cortex

@@ -3,12 +3,12 @@ title: "Jan v0.5.10 is live"
version: 0.5.10
description: "Jan is faster, smoother, and more reliable."
date: 2024-12-03
ogImage: "/assets/images/changelog/jan-v0.5.10.gif"
ogImage: "https://catalog.jan.ai/docs/jan-v0.5.10.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title= "Jan v0.5.10: Jan is faster, smoother, and more reliable." date="2024-12-03" ogImage= "/assets/images/changelog/jan-v0.5.10.gif" />
<ChangelogHeader title= "Jan v0.5.10: Jan is faster, smoother, and more reliable." date="2024-12-03" ogImage= "https://catalog.jan.ai/docs/jan-v0.5.10.gif" />
Jan v0.5.10 is live: Jan is faster, smoother, and more reliable.

@@ -3,12 +3,12 @@ title: "Jan v0.5.11 is here!"
version: 0.5.11
description: "Critical issues fixed, Mac installation updated."
date: 2024-12-05
ogImage: "/assets/images/changelog/jan-v0.5.11.gif"
ogImage: "https://catalog.jan.ai/docs/jan-v0.5.11.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title= "Jan v0.5.11: Jan is faster, smoother, and more reliable." date="2024-12-05" ogImage= "/assets/images/changelog/jan-v0.5.11.gif" />
<ChangelogHeader title= "Jan v0.5.11: Jan is faster, smoother, and more reliable." date="2024-12-05" ogImage= "https://catalog.jan.ai/docs/jan-v0.5.11.gif" />
Jan v0.5.11 is here - critical issues fixed, Mac installation updated.

@@ -3,12 +3,12 @@ title: "Jan gives you full control over your privacy"
version: 0.5.12
description: "Improved Privacy settings to give full control over analytics"
date: 2024-12-30
ogImage: "/assets/images/changelog/jan-v0.5.12.gif"
ogImage: "https://catalog.jan.ai/docs/jan-v0.5.12.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title= "Jan v0.5.12: Improved Privacy settings to give full control over analytics." date="2024-12-30" ogImage= "/assets/images/changelog/jan-v0.5.12.gif" />
<ChangelogHeader title= "Jan v0.5.12: Improved Privacy settings to give full control over analytics." date="2024-12-30" ogImage= "https://catalog.jan.ai/docs/jan-v0.5.12.gif" />
Jan v0.5.11 is here - critical issues fixed, Mac installation updated.

@@ -3,12 +3,12 @@ title: "Qwen3 support is now more reliable."
version: 0.5.17
description: "Jan v0.5.17 is out: Qwen3 support is now more reliable"
date: 2025-05-14
ogImage: "/assets/images/changelog/jan-v0-5-17-gemm3-patch.gif"
ogImage: "https://catalog.jan.ai/docs/jan-v0-5-17-gemm3-patch.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title="Qwen3 support is now more reliable" date="2025-05-14" ogImage="/assets/images/changelog/jan-v0-5-17-gemm3-patch.gif" />
<ChangelogHeader title="Qwen3 support is now more reliable" date="2025-05-14" ogImage="https://catalog.jan.ai/docs/jan-v0-5-17-gemm3-patch.gif" />
👋 Jan v0.5.17 is out: Qwen3 support is now more reliable

@@ -3,12 +3,12 @@ title: "Jan v0.6.3 brings new features and models!"
version: 0.6.3
description: "Unlocking MCP for everyone and bringing our latest model to Jan!"
date: 2025-06-26
ogImage: "/assets/images/changelog/jn128.gif"
ogImage: "https://catalog.jan.ai/docs/jn128.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title="Jan v0.6.3 brings with it MCP and our latest model!" date="2025-06-26" ogImage="/assets/images/changelog/jn128.gif" />
<ChangelogHeader title="Jan v0.6.3 brings with it MCP and our latest model!" date="2025-06-26" ogImage="https://catalog.jan.ai/docs/jn128.gif" />
## Highlights 🎉

@@ -3,12 +3,12 @@ title: "Jan v0.6.5 brings responsive UI and MCP examples!"
version: 0.6.5
description: "New MCP examples, updated pages, and bug fixes!"
date: 2025-07-17
ogImage: "/assets/images/changelog/release_v0_6_5.gif"
ogImage: "https://catalog.jan.ai/docs/release_v0_6_5.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title="Jan v0.6.5 brings responsive UI and MCP examples!" date="2025-07-17" ogImage="/assets/images/changelog/release_v0_6_5.gif" />
<ChangelogHeader title="Jan v0.6.5 brings responsive UI and MCP examples!" date="2025-07-17" ogImage="https://catalog.jan.ai/docs/release_v0_6_5.gif" />
## Highlights 🎉

@@ -3,12 +3,12 @@ title: "Jan v0.6.6: Enhanced llama.cpp integration and smarter model management"
version: 0.6.6
description: "Major llama.cpp improvements, Hugging Face provider support, and refined MCP experience"
date: 2025-07-31
ogImage: "/assets/images/changelog/changelog0.6.6.gif"
ogImage: "https://catalog.jan.ai/docs/changelog0.6.6.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title="Jan v0.6.6: Enhanced llama.cpp integration and smarter model management" date="2025-01-31" ogImage="/assets/images/changelog/changelog0.6.6.gif" />
<ChangelogHeader title="Jan v0.6.6: Enhanced llama.cpp integration and smarter model management" date="2025-01-31" ogImage="https://catalog.jan.ai/docs/changelog0.6.6.gif" />
## Highlights 🎉

@@ -3,13 +3,13 @@ title: "Jan v0.6.8: Engine fixes, new MCP tutorials, and cleaner docs"
version: 0.6.8
description: "Llama.cpp stability upgrades, Linear/Todoist MCP tutorials, new model pages (Lucy, Janv1), and docs reorganization"
date: 2025-08-14
ogImage: "/assets/images/changelog/mcplinear2.gif"
ogImage: "https://catalog.jan.ai/docs/mcplinear2.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
import { Callout } from 'nextra/components'
<ChangelogHeader title="Jan v0.6.8: Engine fixes, new MCP tutorials, and cleaner docs" date="2025-08-14" ogImage="/assets/images/changelog/mcplinear2.gif" />
<ChangelogHeader title="Jan v0.6.8: Engine fixes, new MCP tutorials, and cleaner docs" date="2025-08-14" ogImage="https://catalog.jan.ai/docs/mcplinear2.gif" />
## Highlights 🎉

@@ -3,13 +3,13 @@ title: "Jan v0.6.9: Image support, stable MCP, and powerful model tools"
version: 0.6.9
description: "Major multimodal support with image uploads, MCP out of experimental, auto-detect model capabilities, and enhanced tool calling"
date: 2025-08-28
ogImage: "/assets/images/changelog/jan-images.gif"
ogImage: "https://catalog.jan.ai/docs/jan-images.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
import { Callout } from 'nextra/components'
<ChangelogHeader title="Jan v0.6.9: Image support, stable MCP, and powerful model tools" date="2025-08-28" ogImage="/assets/images/changelog/jan-images.gif" />
<ChangelogHeader title="Jan v0.6.9: Image support, stable MCP, and powerful model tools" date="2025-08-28" ogImage="https://catalog.jan.ai/docs/jan-images.gif" />
## Highlights 🎉

@@ -0,0 +1,48 @@
---
title: "Jan v0.6.10: Auto Optimize, custom backends, and vision model imports"
version: 0.6.10
description: "New experimental Auto Optimize feature, custom llama.cpp backend support, vision model imports, and critical bug fixes"
date: 2025-09-18
ogImage: "/assets/images/changelog/jan-v0.6.10-auto-optimize.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
import { Callout } from 'nextra/components'
<ChangelogHeader title="Jan v0.6.10: Auto Optimize, custom backends, and vision model imports" date="2025-09-18" ogImage="/assets/images/changelog/jan-v0.6.10-auto-optimize.gif" />
## Highlights 🎉
- **Auto Optimize**: One-click hardware-aware performance tuning for llama.cpp.
- **Custom Backend Support**: Import and manage your preferred llama.cpp versions.
- **Import Vision Models**: Seamlessly import and use vision-capable models.
### 🚀 Auto Optimize (Experimental)
**Intelligent performance tuning** — Jan can now apply the best llama.cpp settings for your specific hardware:
- **Hardware analysis**: Automatically detects your CPU, GPU, and memory configuration
- **One-click optimization**: Applies optimal parameters with a single click in model settings
<Callout type="info">
Auto Optimize is currently experimental and will be refined based on user feedback. It analyzes your system specs and applies proven configurations for optimal llama.cpp performance.
</Callout>
### 👁️ Vision Model Imports
<img src="/assets/images/changelog/jan-import-vlm-model.gif" alt="Vision Model Import Demo" width="600" />
**Enhanced multimodal support** — Import and use vision models seamlessly:
- **Direct vision model import**: Import vision-capable models from any source
- **Improved compatibility**: Better handling of multimodal model formats
### 🔧 Custom Backend Support
**Import your preferred llama.cpp version** — Full control over your AI backend:
- **Custom llama.cpp versions**: Import and use any llama.cpp build you prefer
- **Version flexibility**: Use bleeding-edge builds or stable releases
- **Backup CDN**: New CDN fallback when GitHub downloads fail
- **User confirmation**: Prompts before auto-updating llama.cpp
Update your Jan or [download the latest](https://jan.ai/).
For the complete list of changes, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.6.10).

@@ -87,7 +87,7 @@ Jan-Nano-128k has been rigorously evaluated on the SimpleQA benchmark using our
### Demo
<video width="100%" controls>
<source src="/assets/videos/jan-nano-demo.mp4" type="video/mp4" />
<source src="https://catalog.jan.ai/docs/jan-nano-demo.mp4" type="video/mp4" />
Your browser does not support the video tag.
</video>

@@ -20,7 +20,7 @@ import { Callout } from 'nextra/components'
# Jan Nano
![Jan Nano](../_assets/jan-nano0.png)
![Jan Nano](https://catalog.jan.ai/docs/jan-nano0.png)
## Why Jan Nano?
@@ -81,7 +81,7 @@ Add the serper MCP to Jan via the **Settings** > **MCP Servers** tab.
**Step 6**
Open up a new chat and ask Jan-Nano to search the web for you.
![Jan Nano](../_assets/jan-nano-demo.gif)
![Jan Nano](https://catalog.jan.ai/docs/jan-nano-demo.gif)
## Queries to Try

@@ -58,7 +58,7 @@ These benchmarks (EQBench, CreativeWriting, and IFBench) measure the model's abi
### Demo
![Jan-v1 Demo](../_assets/jan_v1_demo.gif)
![Jan-v1 Demo](https://catalog.jan.ai/docs/jan_v1_demo.gif)
### Deployment Options

@@ -55,7 +55,7 @@ To use Lucy's web search capabilities, you'll need a Serper API key. Get one at
### Demo
![Lucy Demo](../_assets/lucy_demo.gif)
![Lucy Demo](https://catalog.jan.ai/docs/lucy_demo.gif)
### Deployment Options

@@ -204,7 +204,7 @@ Generate synthetic data with numpy, move it to a pandas dataframe and create a p
Watch the complete output unfold:
<video width="100%" controls>
<source src="/assets/videos/mcpjupyter.mp4" type="video/mp4" />
<source src="https://catalog.jan.ai/docs/mcpjupyter.mp4" type="video/mp4" />
Your browser does not support the video tag.
</video>

@@ -98,7 +98,7 @@ When you first use Canva tools:
- Canva authentication page appears in your default browser
- Log in with your Canva account
![Canva authentication page](../../_assets/canva2.png)
![Canva authentication page](https://catalog.jan.ai/docs/canva2.png)
2. **Team Selection & Permissions**
- Select your team (if you have multiple)

@@ -128,7 +128,7 @@ You should see all Linear tools in the chat interface:
Watch AI transform mundane tasks into epic narratives:
![Linear MCP creating Shakespearean war epic tasks](../../_assets/mcplinear2.gif)
![Linear MCP creating Shakespearean war epic tasks](https://catalog.jan.ai/docs/mcplinear2.gif)
## Creative Examples

@@ -101,7 +101,7 @@ You should see the Todoist tools in the tools panel:
Now you can manage your todo list through natural conversation:
![Todoist MCP in action](../../_assets/mcptodoist_extreme.gif)
![Todoist MCP in action](https://catalog.jan.ai/docs/mcptodoist_extreme.gif)
## Example Prompts

@@ -103,7 +103,7 @@ Note: `ngl` is the abbreviation of `Number of GPU Layers` with the range from `0`
### NVIDIA GeForce RTX 4090 GPU
![image](./_assets/4090s.png)
![image](https://catalog.jan.ai/docs/4090s.png)
*Jan is built on this Dual-4090 workstation, which recently got upgraded to a nice case*
![image](./_assets/og-4090s.webp)

@@ -13,7 +13,7 @@ date: 2025-08-22
This cookbook will transform your Jan-V1 from a basic Q&A tool into a comprehensive research assistant. By the end of this guide, you'll have a custom-configured model that generates detailed reports with proper citations instead of surface-level answers.
![Jan-V1 research comparison](./_assets/deep_research_compare_jan.gif)
![Jan-V1 research comparison](https://catalog.jan.ai/docs/deep_research_compare_jan.gif)
## Key Points

@@ -0,0 +1,160 @@
/**
* Conversation API wrapper using JanAuthProvider
*/
import { getSharedAuthService, JanAuthService } from '../shared/auth'
import { CONVERSATION_API_ROUTES } from './const'
import {
Conversation,
ConversationResponse,
ListConversationsParams,
ListConversationsResponse,
PaginationParams,
PaginatedResponse,
ConversationItem,
ListConversationItemsParams,
ListConversationItemsResponse
} from './types'
declare const JAN_API_BASE: string
export class RemoteApi {
private authService: JanAuthService
constructor() {
this.authService = getSharedAuthService()
}
async createConversation(
data: Conversation
): Promise<ConversationResponse> {
const url = `${JAN_API_BASE}${CONVERSATION_API_ROUTES.CONVERSATIONS}`
return this.authService.makeAuthenticatedRequest<ConversationResponse>(
url,
{
method: 'POST',
body: JSON.stringify(data),
}
)
}
async updateConversation(
conversationId: string,
data: Conversation
): Promise<ConversationResponse> {
const url = `${JAN_API_BASE}${CONVERSATION_API_ROUTES.CONVERSATION_BY_ID(conversationId)}`
return this.authService.makeAuthenticatedRequest<ConversationResponse>(
url,
{
method: 'PATCH',
body: JSON.stringify(data),
}
)
}
async listConversations(
params?: ListConversationsParams
): Promise<ListConversationsResponse> {
const queryParams = new URLSearchParams()
if (params?.limit !== undefined) {
queryParams.append('limit', params.limit.toString())
}
if (params?.after) {
queryParams.append('after', params.after)
}
if (params?.order) {
queryParams.append('order', params.order)
}
const queryString = queryParams.toString()
const url = `${JAN_API_BASE}${CONVERSATION_API_ROUTES.CONVERSATIONS}${queryString ? `?${queryString}` : ''}`
return this.authService.makeAuthenticatedRequest<ListConversationsResponse>(
url,
{
method: 'GET',
}
)
}
/**
* Generic method to fetch all pages of paginated data
*/
async fetchAllPaginated<T>(
fetchFn: (params: PaginationParams) => Promise<PaginatedResponse<T>>,
initialParams?: Partial<PaginationParams>
): Promise<T[]> {
const allItems: T[] = []
let after: string | undefined = undefined
let hasMore = true
const limit = initialParams?.limit || 100
while (hasMore) {
const response = await fetchFn({
// Spread initialParams first so the advancing cursor is never
// clobbered by a stale `after` supplied by the caller.
...initialParams,
limit,
after,
})
allItems.push(...response.data)
// Stop when the server reports no more pages or omits the cursor;
// a missing last_id would otherwise refetch the same page forever.
hasMore = response.has_more && response.last_id !== undefined
after = response.last_id
}
return allItems
}
async getAllConversations(): Promise<ConversationResponse[]> {
return this.fetchAllPaginated<ConversationResponse>(
(params) => this.listConversations(params)
)
}
async deleteConversation(conversationId: string): Promise<void> {
const url = `${JAN_API_BASE}${CONVERSATION_API_ROUTES.CONVERSATION_BY_ID(conversationId)}`
await this.authService.makeAuthenticatedRequest(
url,
{
method: 'DELETE',
}
)
}
async listConversationItems(
conversationId: string,
params?: Omit<ListConversationItemsParams, 'conversation_id'>
): Promise<ListConversationItemsResponse> {
const queryParams = new URLSearchParams()
if (params?.limit !== undefined) {
queryParams.append('limit', params.limit.toString())
}
if (params?.after) {
queryParams.append('after', params.after)
}
if (params?.order) {
queryParams.append('order', params.order)
}
const queryString = queryParams.toString()
const url = `${JAN_API_BASE}${CONVERSATION_API_ROUTES.CONVERSATION_ITEMS(conversationId)}${queryString ? `?${queryString}` : ''}`
return this.authService.makeAuthenticatedRequest<ListConversationItemsResponse>(
url,
{
method: 'GET',
}
)
}
async getAllConversationItems(conversationId: string): Promise<ConversationItem[]> {
return this.fetchAllPaginated<ConversationItem>(
(params) => this.listConversationItems(conversationId, params),
{ limit: 100, order: 'asc' }
)
}
}
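The cursor pagination used by `fetchAllPaginated` above can be sketched in isolation. The `Page` shape and the two-page fake backend below are illustrative stand-ins, not the extension's actual types:

```typescript
// Minimal sketch of cursor-based pagination, assuming an OpenAI-style
// page shape: { data, has_more, last_id }.
interface Page<T> {
  data: T[]
  has_more: boolean
  last_id?: string
}

async function fetchAll<T>(
  fetchPage: (after?: string) => Promise<Page<T>>
): Promise<T[]> {
  const all: T[] = []
  let after: string | undefined
  let hasMore = true
  while (hasMore) {
    const page = await fetchPage(after)
    all.push(...page.data)
    // The cursor for the next request is the id of the last item returned;
    // stop if the server reports no more pages or omits the cursor.
    hasMore = page.has_more && page.last_id !== undefined
    after = page.last_id
  }
  return all
}

// Fake backend serving two pages of ids, keyed by cursor.
const pages: Record<string, Page<number>> = {
  start: { data: [1, 2], has_more: true, last_id: 'p1' },
  p1: { data: [3, 4], has_more: false, last_id: 'p2' },
}

async function demo(): Promise<number[]> {
  return fetchAll((after) => Promise.resolve(pages[after ?? 'start']))
}
```

Each call advances the cursor until `has_more` goes false, so `demo()` resolves to all four items across both pages.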

@@ -0,0 +1,17 @@
/**
* API Constants for Conversational Web
*/
export const CONVERSATION_API_ROUTES = {
CONVERSATIONS: '/conversations',
CONVERSATION_BY_ID: (id: string) => `/conversations/${id}`,
CONVERSATION_ITEMS: (id: string) => `/conversations/${id}/items`,
} as const
export const DEFAULT_ASSISTANT = {
id: 'jan',
name: 'Jan',
avatar: '👋',
created_at: 1747029866.542,
}
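As a hypothetical illustration of how typed route factories like `CONVERSATION_API_ROUTES` compose into request URLs (the base URL below is a made-up stand-in for the injected `JAN_API_BASE` constant):

```typescript
// Route table mirroring the shape of CONVERSATION_API_ROUTES above;
// `as const` keeps the string literals and factory types narrow.
const ROUTES = {
  CONVERSATIONS: '/conversations',
  CONVERSATION_BY_ID: (id: string) => `/conversations/${id}`,
  CONVERSATION_ITEMS: (id: string) => `/conversations/${id}/items`,
} as const

// Stand-in base URL for illustration only.
const BASE = 'https://api.example.com/v1'
const itemsUrl = `${BASE}${ROUTES.CONVERSATION_ITEMS('conv_123')}`
// itemsUrl === 'https://api.example.com/v1/conversations/conv_123/items'
```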

@@ -0,0 +1,154 @@
/**
* Web Conversational Extension
* Implements thread and message management using IndexedDB
*/
import {
Thread,
ThreadMessage,
ConversationalExtension,
ThreadAssistantInfo,
} from '@janhq/core'
import { RemoteApi } from './api'
import { getDefaultAssistant, ObjectParser, combineConversationItemsToMessages } from './utils'
export default class ConversationalExtensionWeb extends ConversationalExtension {
private remoteApi: RemoteApi | undefined
async onLoad() {
console.log('Loading Web Conversational Extension')
this.remoteApi = new RemoteApi()
}
onUnload() {}
// Thread Management
async listThreads(): Promise<Thread[]> {
try {
if (!this.remoteApi) {
throw new Error('RemoteApi not initialized')
}
const conversations = await this.remoteApi.getAllConversations()
const threads = conversations.map(ObjectParser.conversationToThread)
console.log('!!!Listed threads:', threads)
return threads
} catch (error) {
console.error('Failed to list threads:', error)
return []
}
}
async createThread(thread: Thread): Promise<Thread> {
try {
if (!this.remoteApi) {
throw new Error('RemoteApi not initialized')
}
const response = await this.remoteApi.createConversation(
ObjectParser.threadToConversation(thread)
)
// Create a new thread object with the server's ID
const createdThread = {
...thread,
id: response.id,
assistants: thread.assistants.map(getDefaultAssistant)
}
console.log('!!!Created thread:', createdThread)
return createdThread
} catch (error) {
console.error('Failed to create thread:', error)
throw error
}
}
async modifyThread(thread: Thread): Promise<void> {
try {
if (!this.remoteApi) {
throw new Error('RemoteApi not initialized')
}
await this.remoteApi.updateConversation(
thread.id,
ObjectParser.threadToConversation(thread)
)
console.log('!!!Modified thread:', thread)
} catch (error) {
console.error('Failed to modify thread:', error)
throw error
}
}
async deleteThread(threadId: string): Promise<void> {
try {
if (!this.remoteApi) {
throw new Error('RemoteApi not initialized')
}
await this.remoteApi.deleteConversation(threadId)
console.log('!!!Deleted thread:', threadId)
} catch (error) {
console.error('Failed to delete thread:', error)
throw error
}
}
// Message Management
async createMessage(message: ThreadMessage): Promise<ThreadMessage> {
console.log('!!!Created message:', message)
return message
}
async listMessages(threadId: string): Promise<ThreadMessage[]> {
try {
if (!this.remoteApi) {
throw new Error('RemoteApi not initialized')
}
console.log('!!!Listing messages for thread:', threadId)
// Fetch all conversation items from the API
const items = await this.remoteApi.getAllConversationItems(threadId)
// Convert and combine conversation items to thread messages
const messages = combineConversationItemsToMessages(items, threadId)
console.log('!!!Fetched messages:', messages)
return messages
} catch (error) {
console.error('Failed to list messages:', error)
return []
}
}
async modifyMessage(message: ThreadMessage): Promise<ThreadMessage> {
console.log('!!!Modified message:', message)
return message
}
async deleteMessage(threadId: string, messageId: string): Promise<void> {
console.log('!!!Deleted message:', threadId, messageId)
}
async getThreadAssistant(threadId: string): Promise<ThreadAssistantInfo> {
console.log('!!!Getting assistant for thread:', threadId)
return { id: 'jan', name: 'Jan', model: { id: 'jan-v1-4b' } }
}
async createThreadAssistant(
threadId: string,
assistant: ThreadAssistantInfo
): Promise<ThreadAssistantInfo> {
console.log('!!!Creating assistant for thread:', threadId, assistant)
return assistant
}
async modifyThreadAssistant(
threadId: string,
assistant: ThreadAssistantInfo
): Promise<ThreadAssistantInfo> {
console.log('!!!Modifying assistant for thread:', threadId, assistant)
return assistant
}
async getThreadAssistantInfo(
threadId: string
): Promise<ThreadAssistantInfo | undefined> {
console.log('!!!Getting assistant info for thread:', threadId)
return { id: 'jan', name: 'Jan', model: { id: 'jan-v1-4b' } }
}
}

@@ -1,347 +1,3 @@
/**
* Web Conversational Extension
* Implements thread and message management using IndexedDB
*/
import ConversationalExtensionWeb from './extension'
import { Thread, ThreadMessage, ConversationalExtension, ThreadAssistantInfo } from '@janhq/core'
import { getSharedDB } from '../shared/db'
export default class ConversationalExtensionWeb extends ConversationalExtension {
private db: IDBDatabase | null = null
async onLoad() {
console.log('Loading Web Conversational Extension')
this.db = await getSharedDB()
}
onUnload() {
// Don't close shared DB, other extensions might be using it
this.db = null
}
private ensureDB(): void {
if (!this.db) {
throw new Error('Database not initialized. Call onLoad() first.')
}
}
// Thread Management
async listThreads(): Promise<Thread[]> {
return this.getThreads()
}
async getThreads(): Promise<Thread[]> {
this.ensureDB()
return new Promise((resolve, reject) => {
const transaction = this.db!.transaction(['threads'], 'readonly')
const store = transaction.objectStore('threads')
const request = store.getAll()
request.onsuccess = () => {
const threads = request.result || []
// Sort by updated desc (most recent first)
threads.sort((a, b) => (b.updated || 0) - (a.updated || 0))
resolve(threads)
}
request.onerror = () => {
reject(request.error)
}
})
}
async createThread(thread: Thread): Promise<Thread> {
await this.saveThread(thread)
return thread
}
async modifyThread(thread: Thread): Promise<void> {
await this.saveThread(thread)
}
async saveThread(thread: Thread): Promise<void> {
this.ensureDB()
return new Promise((resolve, reject) => {
const transaction = this.db!.transaction(['threads'], 'readwrite')
const store = transaction.objectStore('threads')
const threadToStore = {
...thread,
created: thread.created || Date.now() / 1000,
updated: Date.now() / 1000,
}
const request = store.put(threadToStore)
request.onsuccess = () => {
console.log('Thread saved:', thread.id)
resolve()
}
request.onerror = () => {
console.error('Failed to save thread:', request.error)
reject(request.error)
}
})
}
async deleteThread(threadId: string): Promise<void> {
this.ensureDB()
return new Promise((resolve, reject) => {
const transaction = this.db!.transaction(['threads', 'messages'], 'readwrite')
const threadsStore = transaction.objectStore('threads')
const messagesStore = transaction.objectStore('messages')
// Delete thread
const deleteThreadRequest = threadsStore.delete(threadId)
// Delete all messages in the thread
const messageIndex = messagesStore.index('thread_id')
const messagesRequest = messageIndex.openCursor(IDBKeyRange.only(threadId))
messagesRequest.onsuccess = (event) => {
const cursor = (event.target as IDBRequest<IDBCursorWithValue>).result
if (cursor) {
cursor.delete()
cursor.continue()
}
}
transaction.oncomplete = () => {
console.log('Thread and messages deleted:', threadId)
resolve()
}
transaction.onerror = () => {
console.error('Failed to delete thread:', transaction.error)
reject(transaction.error)
}
})
}
// Message Management
async createMessage(message: ThreadMessage): Promise<ThreadMessage> {
await this.addNewMessage(message)
return message
}
async listMessages(threadId: string): Promise<ThreadMessage[]> {
return this.getAllMessages(threadId)
}
async modifyMessage(message: ThreadMessage): Promise<ThreadMessage> {
this.ensureDB()
return new Promise((resolve, reject) => {
const transaction = this.db!.transaction(['messages'], 'readwrite')
const store = transaction.objectStore('messages')
const messageToStore = {
...message,
updated: Date.now() / 1000,
}
const request = store.put(messageToStore)
request.onsuccess = () => {
console.log('Message updated:', message.id)
resolve(message)
}
request.onerror = () => {
console.error('Failed to update message:', request.error)
reject(request.error)
}
})
}
async deleteMessage(threadId: string, messageId: string): Promise<void> {
this.ensureDB()
return new Promise((resolve, reject) => {
const transaction = this.db!.transaction(['messages'], 'readwrite')
const store = transaction.objectStore('messages')
const request = store.delete(messageId)
request.onsuccess = () => {
console.log('Message deleted:', messageId)
resolve()
}
request.onerror = () => {
console.error('Failed to delete message:', request.error)
reject(request.error)
}
})
}
async addNewMessage(message: ThreadMessage): Promise<void> {
this.ensureDB()
return new Promise((resolve, reject) => {
const transaction = this.db!.transaction(['messages'], 'readwrite')
const store = transaction.objectStore('messages')
const messageToStore = {
...message,
created_at: message.created_at || Date.now() / 1000,
}
const request = store.add(messageToStore)
request.onsuccess = () => {
console.log('Message added:', message.id)
resolve()
}
request.onerror = () => {
console.error('Failed to add message:', request.error)
reject(request.error)
}
})
}
async writeMessages(threadId: string, messages: ThreadMessage[]): Promise<void> {
this.ensureDB()
return new Promise((resolve, reject) => {
const transaction = this.db!.transaction(['messages'], 'readwrite')
const store = transaction.objectStore('messages')
// First, delete existing messages for this thread
const index = store.index('thread_id')
const deleteRequest = index.openCursor(IDBKeyRange.only(threadId))
deleteRequest.onsuccess = (event) => {
const cursor = (event.target as IDBRequest<IDBCursorWithValue>).result
if (cursor) {
cursor.delete()
cursor.continue()
} else {
// After deleting old messages, add new ones
const addPromises = messages.map(message => {
return new Promise<void>((resolveAdd, rejectAdd) => {
const messageToStore = {
...message,
thread_id: threadId,
created_at: message.created_at || Date.now() / 1000,
}
const addRequest = store.add(messageToStore)
addRequest.onsuccess = () => resolveAdd()
addRequest.onerror = () => rejectAdd(addRequest.error)
})
})
Promise.all(addPromises)
.then(() => {
console.log(`${messages.length} messages written for thread:`, threadId)
resolve()
})
.catch(reject)
}
}
deleteRequest.onerror = () => {
reject(deleteRequest.error)
}
})
}
async getAllMessages(threadId: string): Promise<ThreadMessage[]> {
this.ensureDB()
return new Promise((resolve, reject) => {
const transaction = this.db!.transaction(['messages'], 'readonly')
const store = transaction.objectStore('messages')
const index = store.index('thread_id')
const request = index.getAll(threadId)
request.onsuccess = () => {
const messages = request.result || []
// Sort by created_at asc (chronological order)
messages.sort((a, b) => (a.created_at || 0) - (b.created_at || 0))
resolve(messages)
}
request.onerror = () => {
reject(request.error)
}
})
}
// Thread Assistant Info (simplified - stored with thread)
async getThreadAssistant(threadId: string): Promise<ThreadAssistantInfo> {
const info = await this.getThreadAssistantInfo(threadId)
if (!info) {
throw new Error(`Thread assistant info not found for thread ${threadId}`)
}
return info
}
async createThreadAssistant(threadId: string, assistant: ThreadAssistantInfo): Promise<ThreadAssistantInfo> {
await this.saveThreadAssistantInfo(threadId, assistant)
return assistant
}
async modifyThreadAssistant(threadId: string, assistant: ThreadAssistantInfo): Promise<ThreadAssistantInfo> {
await this.saveThreadAssistantInfo(threadId, assistant)
return assistant
}
async saveThreadAssistantInfo(threadId: string, assistantInfo: ThreadAssistantInfo): Promise<void> {
this.ensureDB()
return new Promise((resolve, reject) => {
const transaction = this.db!.transaction(['threads'], 'readwrite')
const store = transaction.objectStore('threads')
// Get existing thread and update with assistant info
const getRequest = store.get(threadId)
getRequest.onsuccess = () => {
const thread = getRequest.result
if (!thread) {
reject(new Error(`Thread ${threadId} not found`))
return
}
const updatedThread = {
...thread,
assistantInfo,
updated_at: Date.now() / 1000,
}
const putRequest = store.put(updatedThread)
putRequest.onsuccess = () => resolve()
putRequest.onerror = () => reject(putRequest.error)
}
getRequest.onerror = () => {
reject(getRequest.error)
}
})
}
async getThreadAssistantInfo(threadId: string): Promise<ThreadAssistantInfo | undefined> {
this.ensureDB()
return new Promise((resolve, reject) => {
const transaction = this.db!.transaction(['threads'], 'readonly')
const store = transaction.objectStore('threads')
const request = store.get(threadId)
request.onsuccess = () => {
const thread = request.result
resolve(thread?.assistantInfo)
}
request.onerror = () => {
reject(request.error)
}
})
}
}
export default ConversationalExtensionWeb

@@ -0,0 +1,93 @@
/**
* TypeScript Types for Conversational API
*/
export interface PaginationParams {
limit?: number
after?: string
order?: 'asc' | 'desc'
}
export interface PaginatedResponse<T> {
data: T[]
has_more: boolean
object: 'list'
first_id?: string
last_id?: string
}
export interface ConversationMetadata {
model_provider?: string
model_id?: string
is_favorite?: string
}
export interface Conversation {
title?: string
metadata?: ConversationMetadata
}
export interface ConversationResponse {
id: string
object: 'conversation'
title?: string
created_at: number
metadata: ConversationMetadata
}
export type ListConversationsParams = PaginationParams
export type ListConversationsResponse = PaginatedResponse<ConversationResponse>
// Conversation Items types
export interface ConversationItemAnnotation {
end_index?: number
file_id?: string
index?: number
start_index?: number
text?: string
type?: string
url?: string
}
export interface ConversationItemContent {
file?: {
file_id?: string
mime_type?: string
name?: string
size?: number
}
finish_reason?: string
image?: {
detail?: string
file_id?: string
url?: string
}
input_text?: string
output_text?: {
annotations?: ConversationItemAnnotation[]
text?: string
}
reasoning_content?: string
text?: {
value?: string
}
type?: string
}
export interface ConversationItem {
content?: ConversationItemContent[]
created_at: number
id: string
object: string
role: string
status?: string
type?: string
}
export interface ListConversationItemsParams extends PaginationParams {
conversation_id: string
}
export interface ListConversationItemsResponse extends PaginatedResponse<ConversationItem> {
total?: number
}

@ -0,0 +1,247 @@
import { Thread, ThreadAssistantInfo, ThreadMessage, ContentType } from '@janhq/core'
import { Conversation, ConversationResponse, ConversationItem } from './types'
import { DEFAULT_ASSISTANT } from './const'
export class ObjectParser {
static threadToConversation(thread: Thread): Conversation {
const modelName = thread.assistants?.[0]?.model?.id || undefined
const modelProvider = thread.assistants?.[0]?.model?.engine || undefined
const isFavorite = thread.metadata?.is_favorite?.toString() || 'false'
let metadata = {}
if (modelName && modelProvider) {
metadata = {
model_id: modelName,
model_provider: modelProvider,
is_favorite: isFavorite,
}
}
return {
title: shortenConversationTitle(thread.title),
metadata,
}
}
static conversationToThread(conversation: ConversationResponse): Thread {
const assistants: ThreadAssistantInfo[] = []
if (
conversation.metadata?.model_id &&
conversation.metadata?.model_provider
) {
assistants.push({
...DEFAULT_ASSISTANT,
model: {
id: conversation.metadata.model_id,
engine: conversation.metadata.model_provider,
},
})
} else {
assistants.push({
...DEFAULT_ASSISTANT,
model: {
id: 'jan-v1-4b',
engine: 'jan',
},
})
}
const isFavorite = conversation.metadata?.is_favorite === 'true'
return {
id: conversation.id,
title: conversation.title || '',
assistants,
created: conversation.created_at,
updated: conversation.created_at,
model: {
id: conversation.metadata.model_id,
provider: conversation.metadata.model_provider,
},
isFavorite,
metadata: { is_favorite: isFavorite },
} as unknown as Thread
}
static conversationItemToThreadMessage(
item: ConversationItem,
threadId: string
): ThreadMessage {
// Extract text content and metadata from the item
let textContent = ''
let reasoningContent = ''
const imageUrls: string[] = []
let toolCalls: any[] = []
let finishReason = ''
if (item.content && item.content.length > 0) {
for (const content of item.content) {
// Handle text content
if (content.text?.value) {
textContent = content.text.value
}
// Handle output_text for assistant messages
if (content.output_text?.text) {
textContent = content.output_text.text
}
// Handle reasoning content
if (content.reasoning_content) {
reasoningContent = content.reasoning_content
}
// Handle image content
if (content.image?.url) {
imageUrls.push(content.image.url)
}
// Extract finish_reason
if (content.finish_reason) {
finishReason = content.finish_reason
}
}
}
// Handle tool calls parsing for assistant messages
if (item.role === 'assistant' && finishReason === 'tool_calls') {
try {
// Tool calls are embedded as JSON string in textContent
const toolCallMatch = textContent.match(/\[.*\]/)
if (toolCallMatch) {
const toolCallsData = JSON.parse(toolCallMatch[0])
toolCalls = toolCallsData.map((toolCall: any) => ({
tool: {
id: toolCall.id || 'unknown',
function: {
name: toolCall.function?.name || 'unknown',
arguments: toolCall.function?.arguments || '{}'
},
type: toolCall.type || 'function'
},
response: {
error: '',
content: []
},
state: 'ready'
}))
// Remove tool calls JSON from text content, keep only reasoning
textContent = ''
}
} catch (error) {
console.error('Failed to parse tool calls:', error)
}
}
// Format final content with reasoning if present
let finalTextValue = ''
if (reasoningContent) {
finalTextValue = `<think>${reasoningContent}</think>`
}
if (textContent) {
finalTextValue += textContent
}
// Build content array for ThreadMessage
const messageContent: any[] = [
{
type: ContentType.Text,
text: {
value: finalTextValue || '',
annotations: [],
},
},
]
// Add image content if present
for (const imageUrl of imageUrls) {
messageContent.push({
type: 'image_url' as ContentType,
image_url: {
url: imageUrl,
},
})
}
// Build metadata
const metadata: any = {}
if (toolCalls.length > 0) {
metadata.tool_calls = toolCalls
}
// Map status from server format to frontend format
const mappedStatus = item.status === 'completed' ? 'ready' : item.status || 'ready'
return {
type: 'text',
id: item.id,
object: 'thread.message',
thread_id: threadId,
role: item.role as 'user' | 'assistant',
content: messageContent,
created_at: item.created_at * 1000, // Convert to milliseconds
completed_at: 0,
status: mappedStatus,
metadata,
} as ThreadMessage
}
}
const shortenConversationTitle = (title: string): string => {
const maxLength = 50
return title.length <= maxLength ? title : title.substring(0, maxLength)
}
export const getDefaultAssistant = (
assistant: ThreadAssistantInfo
): ThreadAssistantInfo => {
return { ...assistant, instructions: undefined }
}
/**
* Utility function to combine conversation items into thread messages
* Handles tool response merging and message consolidation
*/
export const combineConversationItemsToMessages = (
items: ConversationItem[],
threadId: string
): ThreadMessage[] => {
const messages: ThreadMessage[] = []
const toolResponseMap = new Map<string, any>()
// First pass: collect tool responses
for (const item of items) {
if (item.role === 'tool') {
const toolContent = item.content?.[0]?.text?.value || ''
toolResponseMap.set(item.id, {
error: '',
content: [
{
type: 'text',
text: toolContent
}
]
})
}
}
// Second pass: build messages and merge tool responses
for (const item of items) {
// Skip tool messages as they will be merged into assistant messages
if (item.role === 'tool') {
continue
}
const message = ObjectParser.conversationItemToThreadMessage(item, threadId)
// If this is an assistant message with tool calls, merge tool responses
if (message.role === 'assistant' && message.metadata?.tool_calls && Array.isArray(message.metadata.tool_calls)) {
const toolCalls = message.metadata.tool_calls as any[]
let toolResponseIndex = 0
for (const responseData of toolResponseMap.values()) {
if (toolResponseIndex < toolCalls.length) {
toolCalls[toolResponseIndex].response = responseData
toolResponseIndex++
}
}
}
messages.push(message)
}
return messages
}
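The two-pass merge above reduces to a small skeleton: collect tool outputs keyed by item id, then attach them positionally to the assistant message's tool calls. The `Item` and `Call` shapes here are simplified stand-ins for the real types, not part of the extension:

```typescript
interface Item { id: string; role: string; text: string }
interface Call { name: string; response: string | null }

// First pass: index tool outputs by the tool item's id.
function collectToolResponses(items: Item[]): Map<string, string> {
  const out = new Map<string, string>()
  for (const item of items) {
    if (item.role === 'tool') out.set(item.id, item.text)
  }
  return out
}

// Second pass: attach responses to calls in encounter order, mirroring the
// positional (index-based) matching in combineConversationItemsToMessages.
function attachResponses(calls: Call[], responses: Map<string, string>): Call[] {
  let i = 0
  for (const text of responses.values()) {
    if (i < calls.length) calls[i++].response = text
  }
  return calls
}
```

Positional matching assumes tool outputs arrive in the same order as the calls that produced them; matching by tool-call id would be more robust if the server guarantees one.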

@ -24,6 +24,7 @@ export interface JanChatMessage {
export interface JanChatCompletionRequest {
model: string
messages: JanChatMessage[]
conversation_id?: string
temperature?: number
max_tokens?: number
top_p?: number
@ -93,7 +94,7 @@ export class JanApiClient {
janProviderStore.clearError()
const response = await this.authService.makeAuthenticatedRequest<JanModelsResponse>(
-`${JAN_API_BASE}/models`
+`${JAN_API_BASE}/conv/models`
)
const models = response.data || []
@ -115,12 +116,16 @@ export class JanApiClient {
janProviderStore.clearError()
return await this.authService.makeAuthenticatedRequest<JanChatCompletionResponse>(
-`${JAN_API_BASE}/chat/completions`,
+`${JAN_API_BASE}/conv/chat/completions`,
{
method: 'POST',
body: JSON.stringify({
...request,
stream: false,
store: true,
store_reasoning: true,
conversation: request.conversation_id,
conversation_id: undefined,
}),
}
)
@ -142,7 +147,7 @@ export class JanApiClient {
const authHeader = await this.authService.getAuthHeader()
-const response = await fetch(`${JAN_API_BASE}/chat/completions`, {
+const response = await fetch(`${JAN_API_BASE}/conv/chat/completions`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
@ -151,6 +156,10 @@ export class JanApiClient {
body: JSON.stringify({
...request,
stream: true,
store: true,
store_reasoning: true,
conversation: request.conversation_id,
conversation_id: undefined,
}),
})
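Setting `conversation_id: undefined` after the spread works because `JSON.stringify` silently omits object properties whose value is `undefined`, so only the renamed `conversation` key reaches the wire. A quick illustration of that behavior:

```typescript
// A request object like the ones built above (values are illustrative).
const request = { model: 'jan-v1-4b', conversation_id: 'conv_1' }

// Rename conversation_id -> conversation; the undefined-valued key is
// dropped from the serialized body entirely by JSON.stringify.
const body = JSON.stringify({
  ...request,
  conversation: request.conversation_id,
  conversation_id: undefined,
})
// body contains a "conversation" key but no "conversation_id" key
```

This avoids a separate destructuring step to strip the client-side field name before sending.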

@ -144,6 +144,7 @@ export default class JanProviderWeb extends AIEngine {
const janRequest = {
model: modelId,
messages: janMessages,
conversation_id: opts.thread_id,
temperature: opts.temperature ?? undefined,
max_tokens: opts.n_predict ?? undefined,
top_p: opts.top_p ?? undefined,

@ -13,7 +13,7 @@ declare const JAN_API_BASE: string
*/
export async function logoutUser(): Promise<void> {
const response = await fetch(`${JAN_API_BASE}${AUTH_ENDPOINTS.LOGOUT}`, {
-method: 'POST',
+method: 'GET',
credentials: 'include',
headers: {
'Content-Type': 'application/json',

@ -1,16 +1,69 @@
/**
* Authentication Broadcast Channel Handler
-* Manages cross-tab communication for auth state changes
+* Manages both cross-tab and same-tab communication for auth state changes
*
* Architecture:
* - BroadcastChannel API: For cross-tab communication
* - LocalBroadcastChannel: For same-tab communication via CustomEvents
*/
-import { AUTH_BROADCAST_CHANNEL, AUTH_EVENTS } from './const'
+import { AUTH_BROADCAST_CHANNEL, AUTH_EVENT_NAME, AUTH_EVENTS } from './const'
import type { AuthBroadcastMessage } from './types'
/**
* LocalBroadcastChannel - Handles same-tab communication via custom events
* Mimics the BroadcastChannel API but uses CustomEvents internally
* This is needed because BroadcastChannel doesn't deliver messages to the same context
*/
class LocalBroadcastChannel {
private eventName: string
constructor(eventName: string) {
this.eventName = eventName
}
/**
* Post a message via custom event (same-tab only)
*/
postMessage(data: any): void {
const customEvent = new CustomEvent(this.eventName, {
detail: data
})
window.dispatchEvent(customEvent)
}
/**
* Listen for custom events
*/
addEventListener(type: 'message', listener: (event: MessageEvent) => void): void {
const customEventListener = (event: Event) => {
const customEvent = event as CustomEvent
// Convert CustomEvent to MessageEvent format for consistency
const messageEvent = {
data: customEvent.detail
} as MessageEvent
listener(messageEvent)
}
window.addEventListener(this.eventName, customEventListener)
}
/**
* Remove custom event listener
*/
removeEventListener(type: 'message', listener: (event: MessageEvent) => void): void {
// Note: This won't work perfectly due to function reference issues
// In practice, we handle this with cleanup functions in AuthBroadcast
window.removeEventListener(this.eventName, listener as any)
}
}
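LocalBroadcastChannel exists because the BroadcastChannel API never delivers a message back to the context that posted it. A minimal sketch of the same idea without DOM events (the `SameTabChannel` name and callback-set design are illustrative; the real class dispatches CustomEvents on `window`):

```typescript
// A same-context channel: unlike BroadcastChannel, the sender also
// receives its own messages, which is exactly what the workaround needs.
class SameTabChannel<T> {
  private listeners = new Set<(data: T) => void>()

  postMessage(data: T): void {
    // Deliver synchronously to every subscriber, including the poster's own.
    for (const fn of this.listeners) fn(data)
  }

  // Returning a cleanup closure sidesteps the wrapped-listener identity
  // problem noted in LocalBroadcastChannel.removeEventListener below.
  subscribe(fn: (data: T) => void): () => void {
    this.listeners.add(fn)
    return () => { this.listeners.delete(fn) }
  }
}
```

The cleanup-closure pattern here is the same one `onAuthEvent` uses further down with its `cleanupFunctions` array.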
export class AuthBroadcast {
private broadcastChannel: BroadcastChannel | null = null
private localBroadcastChannel: LocalBroadcastChannel
constructor() {
this.setupBroadcastChannel()
this.localBroadcastChannel = new LocalBroadcastChannel(AUTH_EVENT_NAME)
}
/**
@ -27,17 +80,22 @@ export class AuthBroadcast {
}
/**
-* Broadcast auth event to other tabs
+* Broadcast auth event to all tabs (including current)
*/
broadcastEvent(type: AuthBroadcastMessage): void {
const message = { type }
// Broadcast to other tabs via BroadcastChannel
if (this.broadcastChannel) {
try {
-const message = { type }
this.broadcastChannel.postMessage(message)
} catch (error) {
console.warn('Failed to broadcast auth event:', error)
}
}
// Also broadcast to same tab via LocalBroadcastChannel
this.localBroadcastChannel.postMessage(message)
}
/**
@ -55,22 +113,41 @@ export class AuthBroadcast {
}
/**
-* Subscribe to auth events
+* Subscribe to auth events (from all sources)
*/
onAuthEvent(
listener: (event: MessageEvent<{ type: AuthBroadcastMessage }>) => void
): () => void {
const cleanupFunctions: Array<() => void> = []
// Subscribe to BroadcastChannel for cross-tab events
if (this.broadcastChannel) {
this.broadcastChannel.addEventListener('message', listener)
-// Return cleanup function
-return () => {
+cleanupFunctions.push(() => {
this.broadcastChannel?.removeEventListener('message', listener)
-}
+})
}
-// Return no-op cleanup if no broadcast channel
-return () => {}
// Subscribe to LocalBroadcastChannel for same-tab events
// We need to keep track of the actual listener function for proper cleanup
const localEventListener = (event: Event) => {
const customEvent = event as CustomEvent
const messageEvent = {
data: customEvent.detail
} as MessageEvent<{ type: AuthBroadcastMessage }>
listener(messageEvent)
}
// Add listener directly to window since LocalBroadcastChannel's removeEventListener has limitations
window.addEventListener(AUTH_EVENT_NAME, localEventListener)
cleanupFunctions.push(() => {
window.removeEventListener(AUTH_EVENT_NAME, localEventListener)
})
// Return combined cleanup function
return () => {
cleanupFunctions.forEach(cleanup => cleanup())
}
}
/**

@ -19,9 +19,14 @@ export const AUTH_ENDPOINTS = {
// Token expiry buffer
export const TOKEN_EXPIRY_BUFFER = 60 * 1000 // 1 minute buffer before expiry
-// Broadcast channel for cross-tab communication
+// Broadcast channel name for cross-tab communication (BroadcastChannel API)
// Used to sync auth state between different browser tabs
export const AUTH_BROADCAST_CHANNEL = 'jan_auth_channel'
// Custom event name for same-tab communication (window.dispatchEvent)
// Used to notify components within the same tab about auth state changes
export const AUTH_EVENT_NAME = 'jan-auth-event'
// Auth events
export const AUTH_EVENTS = {
LOGIN: 'auth:login',

@ -158,7 +158,7 @@ export class JanAuthService {
/**
* Get current authenticated user
*/
-async getCurrentUser(): Promise<User | null> {
+async getCurrentUser(forceRefresh: boolean = false): Promise<User | null> {
await this.ensureInitialized()
const authType = this.getAuthState()
@ -166,7 +166,8 @@ export class JanAuthService {
return null
}
-if (this.currentUser) {
+// Force refresh if requested or if cache is cleared
+if (!forceRefresh && this.currentUser) {
return this.currentUser
}
@ -200,6 +201,9 @@ export class JanAuthService {
this.clearAuthState()
// Ensure guest access after logout
await this.ensureGuestAccess()
this.authBroadcast.broadcastLogout()
if (window.location.pathname !== '/') {
@ -208,6 +212,8 @@ export class JanAuthService {
} catch (error) {
console.error('Logout failed:', error)
this.clearAuthState()
// Try to ensure guest access even on error
this.ensureGuestAccess().catch(console.error)
}
}
@ -359,8 +365,12 @@ export class JanAuthService {
this.authBroadcast.onAuthEvent((event) => {
switch (event.data.type) {
case AUTH_EVENTS.LOGIN:
-// Another tab logged in, refresh our state
-this.initialize().catch(console.error)
// Another tab logged in, clear cached data to force refresh
// Clear current user cache so next getCurrentUser() call fetches fresh data
this.currentUser = null
// Clear token cache so next getValidAccessToken() call refreshes
this.accessToken = null
this.tokenExpiryTime = 0
break
case AUTH_EVENTS.LOGOUT:

@ -1,105 +0,0 @@
/**
* Shared IndexedDB utilities for web extensions
*/
import type { IndexedDBConfig } from '../types'
/**
* Default database configuration for Jan web extensions
*/
const DEFAULT_DB_CONFIG: IndexedDBConfig = {
dbName: 'jan-web-db',
version: 1,
stores: [
{
name: 'assistants',
keyPath: 'id',
indexes: [
{ name: 'name', keyPath: 'name' },
{ name: 'created_at', keyPath: 'created_at' }
]
},
{
name: 'threads',
keyPath: 'id',
indexes: [
{ name: 'title', keyPath: 'title' },
{ name: 'created_at', keyPath: 'created_at' },
{ name: 'updated_at', keyPath: 'updated_at' }
]
},
{
name: 'messages',
keyPath: 'id',
indexes: [
{ name: 'thread_id', keyPath: 'thread_id' },
{ name: 'created_at', keyPath: 'created_at' }
]
}
]
}
/**
* Shared IndexedDB instance
*/
let sharedDB: IDBDatabase | null = null
/**
* Get or create the shared IndexedDB instance
*/
export const getSharedDB = async (config: IndexedDBConfig = DEFAULT_DB_CONFIG): Promise<IDBDatabase> => {
if (sharedDB && sharedDB.name === config.dbName) {
return sharedDB
}
return new Promise((resolve, reject) => {
const request = indexedDB.open(config.dbName, config.version)
request.onerror = () => {
reject(new Error(`Failed to open database: ${request.error?.message}`))
}
request.onsuccess = () => {
sharedDB = request.result
resolve(sharedDB)
}
request.onupgradeneeded = (event) => {
const db = (event.target as IDBOpenDBRequest).result
// Create object stores
for (const store of config.stores) {
let objectStore: IDBObjectStore
if (db.objectStoreNames.contains(store.name)) {
// Store exists, might need to update indexes
continue
} else {
// Create new store
objectStore = db.createObjectStore(store.name, { keyPath: store.keyPath })
}
// Create indexes
if (store.indexes) {
for (const index of store.indexes) {
try {
objectStore.createIndex(index.name, index.keyPath, { unique: index.unique || false })
} catch (error) {
// Index might already exist, ignore
}
}
}
}
}
})
}
/**
* Close the shared database connection
*/
export const closeSharedDB = () => {
if (sharedDB) {
sharedDB.close()
sharedDB = null
}
}
