Compare commits

...

143 Commits

Author SHA1 Message Date
Vitor Alcantara Batista
154301b3ad
Brazilian Portuguese translation (#6809)
Co-authored-by: Vitor Alcantara Batista <vitor.alcantara@petrobras.com.br>
2025-10-29 23:36:35 +05:30
Nghia Doan
e7b7ac9e94
Merge pull request #6831 from janhq/feat/proactive_mode
feat: Proactive mode
2025-10-29 21:02:05 +07:00
Nguyen Ngoc Minh
e531eaa4ad
Merge pull request #6836 from janhq/chore/deprecate-webhook-discord
chore: deprecate webhook discord
2025-10-29 12:15:07 +07:00
Minh141120
23b03da714 chore: deprecate webhook discord 2025-10-29 11:48:32 +07:00
Vanalite
22be93807d Merge remote-tracking branch 'origin/dev' into feat/proactive_mode 2025-10-28 17:56:47 +07:00
Nguyen Ngoc Minh
653ecdb494
Merge pull request #6834 from janhq/chore/update-org-name
chore: update org name
2025-10-28 17:56:07 +07:00
Minh141120
15c426aefc chore: update org name 2025-10-28 17:26:27 +07:00
Vanalite
2fa153ac34 fix: Remove unused Proactive icon on chatInput
This icon doesn't do anything on chatInput; it is just an indicator that the proactive capability is activated. Safe to remove since this can be indicated from the model dropdown
2025-10-28 17:04:31 +07:00
Dinh Long Nguyen
62bd91a1e1
fix: model should not include file attachment tools if not supported (#6833) 2025-10-28 16:58:18 +07:00
Vanalite
f7e0e790b6 feat: remove unnecessary TODO 2025-10-28 15:49:17 +07:00
hiento09
c854c54c0c
chore: update api domain to jan.ai (#6832) 2025-10-28 15:45:42 +07:00
Vanalite
a14872666a feat: Add tests for proactive mode 2025-10-28 12:19:00 +07:00
Vanalite
e9f469b623 feat: Proactively take screenshot and snapshot for every browser tool call 2025-10-28 11:48:55 +07:00
utenadev
5a016860aa
feat: Add Japanese translation (#6806)
This commit introduces Japanese as a supported language in the web application.

Key changes include:
- Addition of a new `ja` locale with 15 translated JSON resource files, making the application accessible to Japanese-speaking users.
- Update of the `LanguageSwitcher.tsx` component to include '日本語' in the language selection dropdown menu, allowing users to switch to the new language.
- The localization files were added by creating a new `ja` directory under `web-app/src/locales` and translating the content from the `en` directory.

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
2025-10-27 19:53:36 +05:30
Vanalite
c773abb688 feat: Adding proactive button as experimental feature 2025-10-27 18:18:23 +07:00
Akarshan Biswas
2561fcd78a
feat: support multimodal tool results and improve tool message handling (#6816)
* feat: support multimodal tool results and improve tool message handling

- Added a temporary `ToolResult` type that mirrors the structure returned by tools (text, image data, URLs, errors).
- Implemented `convertToolPartToApiContentPart` to translate each tool output part into the format expected by the OpenAI chat completion API.
- Updated `CompletionMessagesBuilder.addToolMessage` to accept a full `ToolResult` instead of a plain string and to:
  - Detect multimodal content (base64 images, image URLs) and build a structured `content` array.
  - Properly handle plain‑text results, tool execution errors, and unexpected formats with sensible fallbacks.
  - Cast the final content to `any` for the `tool` role as required by the API.
- Modified `postMessageProcessing` to pass the raw tool result (`result as any`) to `addToolMessage`, avoiding premature extraction of only the first text part.
- Refactored several formatting and type‑annotation sections:
  - Added multiline guard for empty user messages to insert a placeholder.
  - Split the image URL construction into a clearer multiline object.
  - Adjusted method signatures and added minor line‑breaks for readability.
- Included extensive comments explaining the new logic and edge‑case handling.

These changes enable the chat system to handle richer tool outputs (e.g., images, mixed content) and provide more robust error handling.
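
A minimal TypeScript sketch of the conversion described above, using simplified stand-ins for `ToolResult` and the API content-part types (the real types live alongside `CompletionMessagesBuilder`; names and shapes here are assumptions for illustration):

```typescript
// Assumed, simplified shapes -- not the actual project types.
type ToolResultPart =
  | { type: 'text'; text: string }
  | { type: 'image'; data: string; mimeType: string } // base64 payload
  | { type: 'image_url'; url: string }

interface ToolResult {
  parts: ToolResultPart[]
  error?: string
}

type ApiContentPart =
  | { type: 'text'; text: string }
  | { type: 'image_url'; image_url: { url: string } }

// Translate one tool output part into the chat-completion content format.
function convertToolPartToApiContentPart(part: ToolResultPart): ApiContentPart {
  switch (part.type) {
    case 'text':
      return { type: 'text', text: part.text }
    case 'image':
      // Base64 image data is wrapped as a data URL.
      return {
        type: 'image_url',
        image_url: { url: `data:${part.mimeType};base64,${part.data}` },
      }
    case 'image_url':
      return { type: 'image_url', image_url: { url: part.url } }
  }
}

// Plain string for text-only results, a structured array for multimodal ones,
// and a readable fallback when the tool reported an error.
function buildToolMessageContent(result: ToolResult): string | ApiContentPart[] {
  if (result.error) return `Tool execution failed: ${result.error}`
  if (result.parts.every((p) => p.type === 'text')) {
    return result.parts.map((p) => (p.type === 'text' ? p.text : '')).join('\n')
  }
  return result.parts.map(convertToolPartToApiContentPart)
}
```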

* Satisfy ts linter

* Make ts linter happy x2

* chore: update test message creation

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-10-24 20:15:15 +05:30
locnguyen1986
28ed5e2af2
Merge pull request #6817 from menloresearch/fix/conversation-saving
we use POST to update now
2025-10-24 14:51:57 +07:00
nguyen.ngo
4c5c8e6aed we use POST to update now 2025-10-24 13:09:35 +07:00
Dinh Long Nguyen
f07e43cfe0
fix: conversation items (#6815) 2025-10-24 09:01:31 +07:00
Dinh Long Nguyen
e46200868e
web: update model capabilities (#6814)
* update model capabilities

* refactor + remove projects
2025-10-24 01:31:21 +07:00
Akarshan Biswas
147cab94a8
fix: Escape dollar signs followed by numbers in Markdown (#6797)
This commit introduces a change to prevent **Markdown** rendering issues where a dollar sign followed by a number (like **`$1`**) is incorrectly interpreted as **LaTeX** by the rendering engine.

---

The `normalizeLatex` function in `RenderMarkdown.tsx` now explicitly escapes these sequences (e.g., **`$1`** becomes **`\$1`**), ensuring they are displayed literally instead of being processed as mathematical expressions. This improves the fidelity of text that might contain currency or similar numerical notations.
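
A minimal sketch of that escaping rule, assuming a plain string-transform pass in the same spirit as `normalizeLatex` (the shipped function handles more LaTeX cases than this):

```typescript
// Escape `$` when it is immediately followed by a digit (e.g. "$1", "$20"),
// so Markdown/LaTeX renderers show it literally instead of opening a math span.
function escapeDollarBeforeDigit(text: string): string {
  return text.replace(/\$(?=\d)/g, () => '\\$')
}

// escapeDollarBeforeDigit('Fees: $1 and $20') returns the text "Fees: \$1 and \$20"
```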
2025-10-16 12:15:24 +05:30
Nguyen Ngoc Minh
2fb956ccaf
Merge pull request #6798 from menloresearch/docs/changelog-v0.7.2
docs: update changelog for Jan v0.7.2
2025-10-16 13:26:36 +07:00
Minh141120
4dee0a4ba1 docs: update changelog for Jan v0.7.2 2025-10-16 13:18:20 +07:00
Nguyen Ngoc Minh
418a48ab39
Merge pull request #6790 from menloresearch/chore/happy-dom-update
chore: update happy dom deps version
2025-10-15 02:53:24 -07:00
Minh141120
9bc56f6e30 chore: remove redundant deps in yarn lock file 2025-10-15 15:15:38 +07:00
Minh141120
f0ca9cce35 chore: update happy-dom version 2025-10-15 14:43:58 +07:00
Faisal Amir
746dbc632b
Merge pull request #6766 from menloresearch/feat/file-attachment
feat: file attachment
2025-10-15 11:01:40 +07:00
Faisal Amir
462b05e612 chore: fix conflict revert analytic 2025-10-15 10:35:36 +07:00
dinhlongviolin1
946b347f44 fix: lint 2025-10-15 00:21:10 +07:00
Dinh Long Nguyen
b23e88f078
Merge branch 'dev' into feat/file-attachment 2025-10-14 14:06:17 +07:00
Trang Le
476fdd6040
feat: Enable new prompt input while waiting for an answer (#6676)
* enable new prompt input while waiting for an answer

* correct spelling of handleSendMessage function

* remove test for disabling input while streaming content
2025-10-14 14:04:52 +07:00
Dinh Long Nguyen
fa8b3664cb
Merge branch 'dev' into feat/file-attachment 2025-10-14 14:00:10 +07:00
Nguyen Ngoc Minh
8b687619b2
Merge pull request #6783 from menloresearch/docs/update-jan-web-url
docs: update jan server url
2025-10-13 23:58:49 -07:00
Minh141120
176ad07f1d docs: update jan server url 2025-10-14 13:54:43 +07:00
Faisal Amir
7b5060c9be
Merge pull request #6774 from menloresearch/chore/disable-posthog-event
chore: revert track event posthog
2025-10-13 10:13:45 +07:00
Faisal Amir
584daa9682 chore: revert track event posthog 2025-10-11 21:46:15 +07:00
Akarshan
31f9501d8e
feat: Optimize state updates in server and model checks
- Added shallow equality guard for `connectedServers` state to prevent redundant updates when the fetched server list hasn't changed.
- Updated error handling for server fetch to only clear the state when it actually contains data.
- Introduced `newHasActiveModels` variable and conditional updater for `hasActiveModels` to avoid unnecessary state changes.
- Adjusted error handling for active model fetch to only set `hasActiveModels` to `false` when the current state differs.

These changes reduce needless re‑renders and improve component performance.
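
A minimal sketch of the guard pattern, assuming a flat string-array state (hook and state names here are illustrative, not the component's actual code):

```typescript
import { useState } from 'react'

// Shallow equality for flat arrays: same length, same items, same order.
function shallowEqualArray<T>(a: T[], b: T[]): boolean {
  return a.length === b.length && a.every((item, i) => item === b[i])
}

function useConnectedServers(fetchServers: () => Promise<string[]>) {
  const [connectedServers, setConnectedServers] = useState<string[]>([])

  const refresh = async () => {
    try {
      const next = await fetchServers()
      // Keep the previous reference when nothing changed, so React skips the re-render.
      setConnectedServers((prev) => (shallowEqualArray(prev, next) ? prev : next))
    } catch {
      // Only clear the list when it actually contains data.
      setConnectedServers((prev) => (prev.length > 0 ? [] : prev))
    }
  }

  return { connectedServers, refresh }
}
```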
2025-10-10 20:25:17 +05:30
Roushan Kumar Singh
c096929d8b
fix(amd/linux): show dedicated VRAM on device list (override Vulkan UMA) (#6533) 2025-10-09 23:33:07 +07:00
Akarshan Biswas
01050f3103
fix: Gracefully handle offline mode during backend check (#6767)
The `listSupportedBackends` function now includes error handling for the `fetchRemoteSupportedBackends` call.

This addresses an issue where an error thrown during the remote fetch (e.g., due to no network connection in offline mode) would prevent the subsequent loading of locally installed or manually provided llama.cpp backends.

The remote backend versions array will now default to empty if the fetch fails, allowing the rest of the backend initialization process to proceed as expected.
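
A minimal sketch of that fallback, with the remote fetch wrapped so an offline failure degrades to an empty list (the function names follow the commit message; signatures are assumed):

```typescript
// Hypothetical signatures based on the description above.
declare function fetchRemoteSupportedBackends(): Promise<string[]>
declare function listLocallyInstalledBackends(): Promise<string[]>

async function listSupportedBackends(): Promise<string[]> {
  let remoteBackends: string[] = []
  try {
    remoteBackends = await fetchRemoteSupportedBackends()
  } catch (e) {
    // Offline or unreachable: continue with an empty remote list so locally
    // installed or manually provided llama.cpp backends can still be loaded.
    console.warn('Failed to fetch remote backends, continuing offline:', e)
  }
  const localBackends = await listLocallyInstalledBackends()
  return [...new Set([...remoteBackends, ...localBackends])]
}
```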
2025-10-09 07:21:53 +05:30
Dinh Long Nguyen
45d57dd34d
Update web-app/src/services/uploads/default.ts
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-09 04:53:19 +07:00
Dinh Long Nguyen
f4066e6e5a
Update web-app/src/lib/fileMetadata.ts
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-09 04:50:31 +07:00
Dinh Long Nguyen
a2fbce698f fix thread scrolling 2025-10-09 04:41:18 +07:00
Dinh Long Nguyen
fc784620e0 fix tests 2025-10-09 04:28:08 +07:00
Dinh Long Nguyen
340042682a ui ux enhancement 2025-10-09 03:48:51 +07:00
Dinh Long Nguyen
6dd2d2d6c1
Merge branch 'dev' into feat/file-attachment 2025-10-09 02:21:22 +07:00
Akarshan
7762cea10a
feat: Distinguish and preserve embedding model sessions
This commit introduces a new field, `is_embedding`, to the `SessionInfo` structure to clearly mark sessions running dedicated embedding models.

Key changes:
- Adds `is_embedding` to the `SessionInfo` interface in `AIEngine.ts` and the Rust backend.
- Updates the `loadLlamaModel` command signatures to pass this new flag.
- Modifies the llama.cpp extension's **auto-unload logic** to explicitly **filter out** and **not unload** any currently loaded embedding models when a new text generation model is loaded. This is a critical performance fix to prevent the embedding model (e.g., used for RAG) from being repeatedly reloaded.

Also includes minor code style cleanup/reformatting in `jan-provider-web/provider.ts` for improved readability.
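
A minimal sketch of the auto-unload filter, using the `is_embedding` flag added to `SessionInfo` (only the fields this commit touches are shown; the unload call is simplified):

```typescript
// Partial SessionInfo with the fields described in this commit; other fields omitted.
interface SessionInfo {
  port: number
  model_id: string
  model_path: string
  is_embedding: boolean
  api_key: string
  mmproj_path?: string
}

declare function unloadSession(session: SessionInfo): Promise<void>

// Before loading a new text-generation model, unload existing sessions but
// keep embedding sessions (e.g. the RAG embedding model) alive.
async function autoUnloadBeforeLoad(sessions: SessionInfo[]): Promise<void> {
  const unloadable = sessions.filter((s) => !s.is_embedding)
  await Promise.all(unloadable.map(unloadSession))
}
```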
2025-10-08 20:03:35 +05:30
Faisal Amir
610b741db2
Merge pull request #6763 from menloresearch/chore/turn-off-zoomHotkeysEnabled
chore: turn off zoomHotkeysEnabled
2025-10-08 19:16:34 +07:00
Faisal Amir
814034d3d7
Merge pull request #6762 from menloresearch/fix/remove-setup-screen
fix: remove setup screen on project id to make same behavior with thread
2025-10-08 19:16:05 +07:00
Nguyen Ngoc Minh
839672b82f
Merge pull request #6765 from menloresearch/chore/license-path
chore: update license path
2025-10-08 03:28:43 -07:00
Minh141120
03762c3634 chore: revert packageManager 2025-10-08 16:57:21 +07:00
Minh141120
59c76bcb1c chore: revert copy asset script 2025-10-08 16:56:36 +07:00
Minh141120
1905f9a9ce chore: move license to resources 2025-10-08 16:55:24 +07:00
Dinh Long Nguyen
ff93dc3c5c Merge branch 'dev' into feat/file-attachment 2025-10-08 16:34:45 +07:00
Dinh Long Nguyen
510c4a5188 working attachments 2025-10-08 16:08:40 +07:00
Minh141120
c7d1a3c65d chore: update license path 2025-10-08 15:48:16 +07:00
hiento09
999b7b3cd8
chore: api change domain to menlo.ai (#6764) 2025-10-08 13:22:26 +07:00
Faisal Amir
f224d18d7f chore: turn off zoomHotkeysEnabled 2025-10-08 12:54:04 +07:00
Nghia Doan
1bf5c770cf
Merge pull request #6757 from menloresearch/fix/resolve-web-extensions-conflict
fix: resolve extensions conflict with correct path for web-app
2025-10-08 12:45:44 +07:00
Faisal Amir
613bc85a13 fix: remove setup screen on project id to make same behavior with thread 2025-10-08 12:41:26 +07:00
Faisal Amir
b1abc97bda
Merge pull request #6759 from menloresearch/fix/font-json-editor
fix: font mono default from mcp json editor
2025-10-08 10:20:11 +07:00
Faisal Amir
eec94c47dd chore: make class important 2025-10-07 22:28:58 +07:00
Faisal Amir
b2632a005c fix: font mono default from mcp json editor 2025-10-07 22:10:53 +07:00
Akarshan Biswas
706dad2687
feat: Add support for llamacpp MoE offloading setting (#6748)
* feat: Add support for llamacpp MoE offloading setting

Introduces the n_cpu_moe configuration setting for the llamacpp provider. This allows users to specify the number of Mixture of Experts (MoE) layers whose weights should be offloaded to the CPU via the --n-cpu-moe flag in llama.cpp.

This is useful for running large MoE models by balancing resource usage, for example, by keeping attention on the GPU and offloading expert FFNs to the CPU.
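
A minimal sketch of how such a setting could map onto llama-server arguments, assuming a simplified settings object (the real extension reads these values from the provider's model settings UI):

```typescript
interface MoeOffloadSettings {
  cpu_moe?: boolean   // offload all MoE expert weights to the CPU
  n_cpu_moe?: number  // offload expert weights of the first N layers to the CPU
}

// Append the MoE offloading flags to a llama-server argument list.
function appendMoeArgs(args: string[], settings: MoeOffloadSettings): string[] {
  if (settings.cpu_moe) args.push('--cpu-moe')
  if (settings.n_cpu_moe && settings.n_cpu_moe > 0) {
    args.push('--n-cpu-moe', String(settings.n_cpu_moe))
  }
  return args
}

// appendMoeArgs(['-m', 'model.gguf'], { n_cpu_moe: 20 })
//   -> ['-m', 'model.gguf', '--n-cpu-moe', '20']
```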

The changes include:

 - Updating the llamacpp-extension to accept and pass the --n-cpu-moe argument.

 - Adding the input field to the Model Settings UI (ModelSetting.tsx).

 - Including model setting migration logic and bumping the store version to 4.

* remove unused import

* feat: add cpu-moe boolean flag

* chore: remove unused migration cont_batching

* chore: fix migration delete old key and add new one

* chore: fix migration

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-10-07 19:37:58 +05:30
Faisal Amir
e5be683a97
Merge pull request #6755 from menloresearch/chore/analytic-model-used
chore: create event to track model provider and id model
2025-10-07 20:36:42 +07:00
Louis
e7fcc809e7
Merge pull request #6756 from menloresearch/sync/release-7-1-into-dev
Sync release 0.7.1 to dev
2025-10-07 19:57:28 +07:00
Louis
26006c143e
fix: build 2025-10-07 19:33:49 +07:00
Louis
28afafaad7
Update .github/workflows/template-tauri-build-windows-x64.yml
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-07 18:36:56 +07:00
Vanalite
dd1b3c98bf fix: resolve extensions conflict with correct path for web-app 2025-10-07 18:33:51 +07:00
Louis
3919cd0306
fix: build error 2025-10-07 18:32:43 +07:00
Faisal Amir
7d615b4163 chore: type fixed 2025-10-07 18:10:25 +07:00
Nguyen Ngoc Minh
4828f34fec
Merge pull request #6728 from menloresearch/fix/anthropic-model-load
Fix: Anthropic request to add models
2025-10-07 18:05:33 +07:00
Faisal Amir
61c3fd4b5a
Merge pull request #6727 from menloresearch/fix/prompt-token
fix: prompt token
2025-10-07 18:05:29 +07:00
Nguyen Ngoc Minh
816d60b22a
Merge pull request #6721 from menloresearch/chore/use-custom-nsis-template
chore: use custom nsis template
# Conflicts:
#	Makefile
#	package.json
#	src-tauri/tauri.windows.conf.json
2025-10-07 18:05:14 +07:00
Faisal Amir
310ca7cb23 chore: create message_sent event to track model provider and id model 2025-10-07 18:04:58 +07:00
Faisal Amir
fa397038ef
Merge pull request #6753 from menloresearch/fix/auto-select-download-model
fix: auto select download model
2025-10-07 17:16:19 +07:00
Faisal Amir
dabc49567c
Merge pull request #6743 from menloresearch/chore/dropdown-submenu-scrollable
chore: make dropdown sub menu assign projects scrollable
2025-10-07 13:36:01 +07:00
Faisal Amir
d8dcba3552 fix: auto select download model 2025-10-07 13:29:56 +07:00
Nghia Doan
f4efd479d5
Merge pull request #6746 from menloresearch/feat/hide-project-mobile
feat: Hide projects for mobile version
2025-10-07 10:38:03 +07:00
Dinh Long Nguyen
a72c74dbf9 initial layout 2025-10-07 10:36:45 +07:00
Louis
6c4dd85e6f
Merge pull request #6720 from menloresearch/release/v0.7.0
Sync release v0.7.0 to dev
2025-10-06 22:31:06 +07:00
Louis
9bfec5c7b3
Sync dev into release (#6747)
* feat: Init mobile app from current Tauri v2 framework

Feat:
- Using Tauri v2 by default
- Add new configuration to initiate mobile app
- Add dependencies needed for mobile build
Test:
- Confirm to be built successfully
- Confirm to keep settings for desktop and build successfully
- Reuse most of components from desktop version

* fix: Fix tests

* feat: Add android target

* fix: Reconfigure and add toolchain to wake up Android app

* fix: Fix parsing datatype inconsistent across platforms

* feat: Adjust UI for mobile res

Feature:
- Adjust home screen and chat screen for mobile devices
- Fix tests for both FE and BE
Self-test:
- Confirm runnable on both Android and iOS
- Confirm runnable on desktop app
- All test suites passed
- Working with ChatGPT API

* fix: Restore dedupe command

* chore: Adjust paddings to save some space for the top nav bar

* Update web-app/src/routes/index.tsx

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* chore: keep gen icon on Android

* chore: add command to ease the mobile dev

* chore: Separate configuration for android build in release mode

* chore: Shrink the Android app size to minimal, release type

* feat: Disable zoom and setup mobile viewport

* chore: Configure iOS to use the same build mechanic to remove unnecessary plugin

* fix: Remove redundant yarn command for ios dev build

* enhancement: fit mobile layout

* chore: update chatscreen padding

* chore: update checking platform using config instead of navigation agent

* chore: update height of thread detail

* remove gen android

* Update web-app/src/index.css

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* fix: Add frontendDist to ios configuration

* feat: Experiment removing hardware permission

* fix: Android releasable build

* feat: Add dev-android to makefile

* feat: Add dev-ios to makefile for ios development

* refactor(utils): add helper to remove extensions from file paths

* chore: fix Encoded logging

* refactor: safely strip prefix and extensions from filename

* chore: add logging for TauriDialog Service

* Update handbook content with Nextra callout and content improvements

- Convert blockquote to Nextra callout in open-superintelligence.mdx
- Add Edison link and improve content flow
- Refine language for better clarity

* docs: enhance overview page with improved structure and internal linking

- Restructured main content with cleaner formatting
- Added comprehensive internal linking for better navigation
- Improved visual hierarchy and readability
- Enhanced acknowledgements section with better organization
- Updated product suite section with consistent formatting

* Update handbook navigation structure and meta.json files

- Updated handbook/_meta.json to properly organize navigation
- Fixed duplicate entries by removing files that belong in subfolders
- Updated why folder title to 'Why does Jan exist?'
- Cleaned up why/_meta.json with proper titles for Open Superintelligence and Open-Source sections

* docs: fix broken internal links and remove privacy page

- Fix broken links in troubleshooting.mdx pointing to install pages
- Remove privacy.mdx page and update _meta.json navigation
- Update various documentation links for consistency
- Ensure all internal links use proper absolute paths

* Optimize installation pages SEO meta titles and descriptions

 SEO Improvements:
- Mac: 'Run AI models locally on your Mac - Jan'
- Linux: 'Run AI models locally on Linux - Jan'
- Windows: 'Run AI models locally on Windows - Jan'

🎯 Meta descriptions now include:
- Target keywords (local AI, LLM, offline, ChatGPT-like)
- Platform-specific details (Apple Silicon, Ubuntu/Debian, Windows 10/11)
- Key benefits (GPU acceleration, privacy, no internet required)

📍 Sidebar navigation titles unchanged - only SEO meta data optimized

* Clean up installation page titles and descriptions

- Revert titles to clean sidebar navigation (Mac, Linux, Windows)
- Improve meta descriptions to be concise but SEO-friendly
- Keep key terms: local AI, offline, GPU acceleration, platform details

* Update README.md

* Update README.md

* trigger PR banner

* docs: update missing redirect links

* enhancement: social media navbar and update menu footer

* Update docs/src/components/Navbar.tsx

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update docs/src/components/Navbar.tsx

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix: Apply model name change correctly

# Conflicts:
#	web-app/src/lib/utils.ts

* feat: Disable text selection on Toaster

Disable the option to select text on the Toaster so the swipe-to-dismiss gesture on the toast stays consistent

* fix: scroll issue padding not re render correctly (#6639)

* trigger PR banner

* Improve FAQ section and content updates for offline ChatGPT alternative post

* Add SEO-optimized Twitter meta titles for installation pages

- Add Twitter meta tags to Windows, Linux, and Mac installation pages
- Optimize meta titles: 'Jan on [Platform]' for better SEO
- Maintain consistent meta descriptions across all platforms
- Keep original page titles unchanged for user experience

* Update content files

- Update tabby server example
- Update troubleshooting documentation
- Update NVIDIA TensorRT-LLM benchmarking post

* Add ChatGPT alternative blog post and update installation docs

* Update ChatGPT alternative blog post

* Rename blog post from chatgpt-alternative-jan.mdx to chatgpt-alternatives.mdx

* feat: add real-time ChatGPT status checker blog post

- Add new blog post: 'is-chatgpt-down-use-jan'
- Create OpenAIStatusChecker React component with real-time data
- Use CORS proxy to fetch live OpenAI status from status.openai.com
- Include SEO-optimized status indicators and error messages
- Add ChatGPT downtime promotion for Jan alternative
- Component features: auto-refresh, fallback handling, dark mode support

* chore: fix typo

* chore: fix failed build

* refactor: deprecate Vulkan external binaries (#6638)

* refactor: deprecate vulkan binary

refactor: clean up vulkan lib

chore: cleanup

chore: clean up

chore: clean up

fix: build

* fix: skip binaries download env

* Update src-tauri/utils/src/system.rs

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src-tauri/utils/src/system.rs

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* feat: Add tests for the model displayName modification

* fix: Fix linter error

* fix: Fix nvidia and vulkan after upgrade to be compatible with mobile compiling too

* Fix OG image paths and move images to general folder

- Move OG images from _assets/ to public/assets/images/general/
- Update all blog post references to use correct paths
- Remove duplicate images from _assets/ folder
- Fix image paths in content to use /assets/images/general/ format
- Update Twitter image references to match OG image paths

Files updated:
- chatgpt-alternatives.mdx
- deepresearch.mdx
- deepseek-r1-locally.mdx
- how-we-benchmark-kernels.mdx
- is-chatgpt-down-use-jan.mdx
- offline-chatgpt-alternative.mdx
- qwen3-settings.mdx
- run-ai-models-locally.mdx
- run-gpt-oss-locally.mdx

* fix: remove Jan prefix from blog post titles for better SEO

- Blog posts now use only frontmatter title without 'Jan -' prefix
- Other pages maintain existing branding (Jan Desktop, Jan Server, Jan)
- Improves SEO for blog content while preserving site branding

* update blog post content

* Feat: web temporary chat (#6650)

* temporary chat stage1

* temporary page in root

* temporary chat

* handle redirection properly

* temporary chat header

* Update extensions-web/src/conversational-web/extension.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* update routetree

* better error handling

* fix stretched assistant on desktop

* update yarn link to workspace for better link consistency

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Add Guides category to blog navigation

- Add 'guides' category to staticCategories array in Blog component
- Update plopfile.js to include guides in category choices
- Add guides category entry to _meta.json
- Position guides category after research in navigation

* docs: update redirect links

* Update AI for Law blog post with images and content improvements

- Add hero image for AI for Law blog post
- Add images to assistant creation and contract review sections
- Improve content structure and readability
- Add proper image assets for legal AI use cases
- Update ogImage and twitter image references

* fix: revert the modification of vulkan

* Add AI for Teachers blog post with images and video

- Create comprehensive AI for Teachers blog post
- Add hero image and assistant creation interface images
- Include video demonstration of Jan for teachers
- Add proper ogImage and twitter image references
- Cover lesson planning, grading, parent communication, and classroom resources
- Focus on privacy and offline AI for educational use

* fix: Fix linter and tests

* fix: Restore default permission on desktop build

Restore desktop capabilities
Restore linter correctness
Restore different capabilities on each platform

* fix: Fix cargo test

* feat: web add search button for extension (#6671)

* add search button for web extension

* change button color and behavior

* Update extensions-web/src/mcp-web/components/WebSearchButton.tsx

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* add eof new line missing (#6673)

* fix lint issue

* fix: mcp bin path (#6667)

* fix: mcp bin path

* chore: clean up unused structs

* fix: bin name

* fix: tests

* remove test conflict

* add missing closing test

* fix tauri test

* feat: disable all web mcp by default (new users) (#6677)

* fix: chat completion usage - token speed (#6675)

* resolve TypeScript and Rust warnings (#6612)

* chore: fix warnings

* fix: add missing scrollContainerRef dependencies to React hooks

* fix: typo

* fix: remove unsupported fetch option and enable AsyncIterable types

- Removed `connectTimeout` from fetch init (not supported in RequestInit)
- Updated tsconfig to target ES2018

* chore: refactor rename

* fix(hooks): update dependency arrays for useThreadScrolling effects

* Add type.d.ts to extend requestinit with connectionTimeout

* remove commented unused import

* fix: Fix editing model without saving should restore original name

* fix: thread item overfetching (#6699)

* fix: thread item overfetching

* chore: cleanup left over import

* feat: improve projects (#6698)

* decouple successfully

* only show movable projects for project items

* handle delete conversations when a project is removed

* fix leftpanel assignment

* fix lint

* fix gg tag (#6702)

* refactor: resolve rust analyzer warnings and improve code quality (#6696)

- Update string formatting to use modern interpolation syntax
- Simplify expressions and remove unnecessary intermediate variables
- Improve logging statements for better readability
- Clean up code across core modules (app, downloads, mcp, server, etc.)

* docs: add Jan v0.7.0 changelog

* docs: update Jan v0.7.0 changelog content

* docs: rename changelog file to remove trailing dash

* feat: use sql for mobile storage

* feat: organize code for proper import

Move platform checker for db access to helper
Add test for the threads controller

* feat: better structure for MobileCoreService

MobileCoreService should inherit TauriCoreService to match Tauri architecture patterns

* fix: Extract model capabilities correctly for various providers on various platforms

* fix: yarn lint

* ci: remove upload msi

* fix: extensions missing on Unix dev (#6724)

* fix: extensions missing on Unix dev

* re add bun uv for mcp

* fix: Local API Server - disable settings on run (#6707)

* fix: Fix tests in threads with proper mock folder properly

* changelog: release 0.7.1

* chore: wrong version in detail changelog

* fix: update detail changelog 0.7.1

* fix(ui): restore missing border on model selector (#6692)

* fix: Fix openssl issue on mobile after merging

* fix: Remove yarn.lock changes

* chore(ui): refine className for dropdown menu with animation states

* chore: Reposition 'Remove project' option for better usability

* feat: add project search and scrollable thread lists

- Add search bar to filter projects by name in real-time
- Implement scrollable thread container with max 4 visible threads
- Add empty state for no search results
- Add clear button (X) to reset search query

* (chore): rename translation keys to collapseProject/expandProject

* Fix Translation changes across locales

* (chore): remove duplicate keys from de-DE/common.json

* Add SearchProjects to missing locales

* fix: theme native system and check os support blur

* fix: new window theme

* fix: open new window theme

* fix: test use case appearance

* chore: fix window type theme service

* chore: update permission windows

* chore: fix desktop capabilities

* chore: check support blur using hardware api

* chore: check support blur on FE

* chore: fix new chat with update last selected model dropdown

* fix: title recent when no result found

---------

Co-authored-by: Vanalite <dhnghia0604@gmail.com>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Roushan Singh <github.rtron18@gmail.com>
Co-authored-by: eckartal <emre@jan.ai>
Co-authored-by: Emre Can Kartal <159995642+eckartal@users.noreply.github.com>
Co-authored-by: Roushan Kumar Singh <158602016+github-roushan@users.noreply.github.com>
Co-authored-by: Nguyen Ngoc Minh <91668012+Minh141120@users.noreply.github.com>
Co-authored-by: Minh141120 <minh.itptit@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Dinh Long Nguyen <dinhlongviolin1@gmail.com>
2025-10-06 21:43:24 +07:00
Louis
fe2c2a8687 Merge branch 'dev' into release/v0.7.0
# Conflicts:
#	web-app/src/containers/DropdownModelProvider.tsx
#	web-app/src/containers/ThreadList.tsx
#	web-app/src/containers/__tests__/DropdownModelProvider.displayName.test.tsx
#	web-app/src/hooks/__tests__/useModelProvider.test.ts
#	web-app/src/hooks/useChat.ts
#	web-app/src/lib/utils.ts
2025-10-06 20:42:05 +07:00
Vanalite
b23aa68254 feat: Hide projects for mobile version 2025-10-06 18:14:03 +07:00
Faisal Amir
0588cb34c6
Merge pull request #6713 from menloresearch/fix/theme-system
fix: theme system cross platform
2025-10-06 16:58:53 +07:00
Faisal Amir
b32e3ebd60
Merge pull request #6742 from menloresearch/fix/last-used-model
fix: new chat with update last selected model dropdown
2025-10-06 15:03:00 +07:00
Faisal Amir
cc77ae3430
Merge pull request #6744 from menloresearch/fix/no-result-found
fix: title recent when no result found
2025-10-06 15:01:58 +07:00
Faisal Amir
ac6eda063a fix: title recent when no result found 2025-10-06 11:35:27 +07:00
Faisal Amir
f160d83ca9 chore: make dropdown sub menu assign project scrollable 2025-10-06 11:16:02 +07:00
Faisal Amir
13c7ad707e chore: fix new chat with update last selected model dropdown 2025-10-06 11:06:15 +07:00
Faisal Amir
17dced03c0 chore: check support blur on FE 2025-10-06 10:55:17 +07:00
Faisal Amir
39b1ba4691 chore: check support blur using hardware api 2025-10-06 10:55:17 +07:00
Faisal Amir
8c7ad408a9 chore: fix desktop capabilities 2025-10-06 10:55:17 +07:00
Faisal Amir
f0c4784b7b chore: update permission windows 2025-10-06 10:55:17 +07:00
Faisal Amir
83fc68e27d chore: fix window type theme service 2025-10-06 10:55:17 +07:00
Faisal Amir
be9a6c0254 fix: test use case appearance 2025-10-06 10:55:17 +07:00
Faisal Amir
1acdb77ad1 fix: open new window theme 2025-10-06 10:55:17 +07:00
Faisal Amir
51e7a08118 fix: new window theme 2025-10-06 10:55:17 +07:00
Faisal Amir
aa0c4b0d1b fix: theme native system and check os support blur 2025-10-06 10:55:17 +07:00
Faisal Amir
80ee8fd2b2
Merge pull request #6726 from github-roushan/dropdown-ui
UI enhancement for projects
2025-10-06 10:54:57 +07:00
Roushan Singh
291482cc16 Add SearchProjects to missing locales 2025-10-06 10:46:18 +07:00
Roushan Singh
154bc17778 (chore): remove duplicate keys from de-DE/common.json 2025-10-06 10:46:18 +07:00
Roushan Singh
cc5130c1af Fix Translation changes across locales 2025-10-06 10:46:18 +07:00
Roushan Singh
2d9f20ffb6 (chore): rename translation keys to collapseProject/expandProject 2025-10-06 10:46:18 +07:00
Roushan Singh
3e332eceae feat: add project search and scrollable thread lists
- Add search bar to filter projects by name in real-time
- Implement scrollable thread container with max 4 visible threads
- Add empty state for no search results
- Add clear button (X) to reset search query
2025-10-06 10:46:18 +07:00
Roushan Singh
73b241c16f chore: Reposition 'Remove project' option for better usability 2025-10-06 10:46:18 +07:00
Roushan Singh
8ed68d9c19 chore(ui): refine className for dropdown menu with animation states 2025-10-06 10:46:18 +07:00
Nghia Doan
b5e57a429a
Merge pull request #6714 from menloresearch/mobile/persistence_store
Feat: Jan mobile has persistence store
2025-10-06 10:02:53 +07:00
Vanalite
62fa0ffa57 fix: Remove yarn.lock changes 2025-10-06 09:43:45 +07:00
Faisal Amir
481e9c1130
Merge pull request #6736 from github-roushan/border-fix
fix(ui): restore missing border on model selector (#6692)
2025-10-06 09:39:10 +07:00
Roushan Kumar Singh
93652ce884
Merge branch 'dev' into border-fix 2025-10-05 17:29:40 +05:30
Vanalite
fa61163350 fix: Fix openssl issue on mobile after merging 2025-10-05 14:40:39 +07:00
Roushan Singh
cb9eb6d238 fix(ui): restore missing border on model selector (#6692) 2025-10-04 22:21:02 +05:30
Vanalite
41a93690a1 Merge remote-tracking branch 'origin/dev' into mobile/persistence_store 2025-10-04 12:28:11 +07:00
Faisal Amir
b309d34274
Merge pull request #6732 from menloresearch/chore/update-changelog
fix: update detail changelog 0.7.1
2025-10-03 23:33:57 +07:00
Faisal Amir
ca485b4a35 fix: update detail changelog 0.7.1 2025-10-03 23:33:02 +07:00
Faisal Amir
252336d95c
Merge pull request #6731 from menloresearch/chore/update-change
chore: wrong version in detail changelog
2025-10-03 23:28:25 +07:00
Faisal Amir
1d620df625
Merge pull request #6730 from menloresearch/release/docs-0.7.1
changelog: release 0.7.1
2025-10-03 23:27:22 +07:00
Faisal Amir
e346b293f6 chore: wrong version in detail changelog 2025-10-03 23:23:46 +07:00
Faisal Amir
8b448d1c0b changelog: release 0.7.1 2025-10-03 23:17:54 +07:00
Vanalite
b628b3d9ab fix: Fix tests in threads with proper mock folder properly 2025-10-03 14:17:59 +07:00
Louis
cef351bfd0
fix: Local API Server - disable settings on run (#6707) 2025-10-03 14:12:16 +07:00
Vanalite
4da0fd1ca3 fix: yarn lint 2025-10-03 10:25:41 +07:00
Vanalite
524ac11294 feat: better structure for MobileCoreService
MobileCoreService should inherit TauriCoreService to match Tauri architecture patterns
2025-10-02 21:20:07 +07:00
Vanalite
1747e0ad41 Merge remote-tracking branch 'origin/dev' into mobile/persistence_store
# Conflicts:
#	src-tauri/src/core/extensions/commands.rs
2025-10-02 20:59:34 +07:00
Vanalite
08d527366e feat: organize code for proper import
Move platform checker for db access to helper
Add test for the threads controller
2025-10-02 20:53:46 +07:00
Vanalite
9720ad368e feat: use sql for mobile storage 2025-10-02 18:09:33 +07:00
Nguyen Ngoc Minh
f537429d2c
Merge pull request #6706 from menloresearch/qa/v0.7.0
feat: update checklist for 0.7.0
2025-10-02 08:30:30 +00:00
Minh141120
f6f9813ef2 feat: update checklist for 0.7.0 2025-10-02 15:26:37 +07:00
Nghia Doan
87db633b7d
Merge pull request #6700 from menloresearch/fix/edit-model-name
fix: Fix editing model without saving should restore original name
# Conflicts:
#	web-app/src/containers/__tests__/EditModel.test.tsx
2025-10-02 09:28:50 +07:00
Nguyen Ngoc Minh
8e10f27cc2
Merge pull request #6701 from menloresearch/cherry-pick/projects
cherry pick : projects + performance enhancement
2025-10-01 16:42:20 +00:00
Dinh Long Nguyen
9f72debc17 fix: thread item overfetching (#6699)
* fix: thread item overfetching

* chore: cleanup left over import
2025-10-01 22:53:53 +07:00
Dinh Long Nguyen
1b9efee52c feat: improve projects (#6698)
* decouple successfully

* only show movable projects for project items

* handle delete conversations when a project is removed

* fix leftpanel assignment

* fix lint
2025-10-01 22:53:34 +07:00
Akarshan Biswas
0f0ba43b7f
feat: Adjust RAM/VRAM calculation for unified memory systems (#6687)
* feat: Adjust RAM/VRAM calculation for unified memory systems

This commit refactors the logic for calculating **total RAM** and **total VRAM** in `is_model_supported` and `plan_model_load` commands, specifically targeting systems with **unified memory** (like modern macOS devices where the GPU list may be empty).

The changes are as follows:

* **Total RAM Calculation:** If no GPUs are detected (`sys_info.gpus.is_empty()` is true), **total RAM** is now set to $0$. This avoids confusing total system memory with dedicated GPU memory when planning model placement.
* **Total VRAM Calculation:** If no GPUs are detected, **total VRAM** is still calculated as the system's **total memory (RAM)**, as this shared memory acts as VRAM on unified memory architectures.

This adjustment improves the accuracy of memory availability checks and model planning on unified memory systems.
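
A minimal TypeScript sketch of the heuristic described above (the real logic lives in the Rust `is_model_supported` and `plan_model_load` commands; the shapes here are illustrative):

```typescript
interface SystemInfo {
  total_memory: number            // total system RAM in bytes
  gpus: { total_vram: number }[]  // empty on unified-memory systems like Apple Silicon
}

// On unified-memory systems (no discrete GPUs reported), treat system RAM as
// VRAM and report RAM as 0 so the same memory is not counted twice.
function usableMemoryForPlanning(sys: SystemInfo): { totalRam: number; totalVram: number } {
  if (sys.gpus.length === 0) {
    return { totalRam: 0, totalVram: sys.total_memory }
  }
  const totalVram = sys.gpus.reduce((sum, gpu) => sum + gpu.total_vram, 0)
  return { totalRam: sys.total_memory, totalVram }
}
```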

* fix: total usable memory in case there is no system vram reported

* chore: temporarily change to self-hosted runner mac

* ci: revert back to github hosted runner macos

---------

Co-authored-by: Louis <louis@jan.ai>
Co-authored-by: Minh141120 <minh.itptit@gmail.com>
2025-10-01 18:58:14 +07:00
Nguyen Ngoc Minh
6a4aaaec87
Merge pull request #6694 from menloresearch/ci/revert-msi-installer
ci: revert upload msi to github release
2025-10-01 10:30:55 +00:00
Minh141120
a5574eaacb ci: revert upload msi to github release 2025-10-01 17:00:03 +07:00
Faisal Amir
771d097309
Merge pull request #6688 from menloresearch/fix/dropdown-type-assistant 2025-10-01 14:47:54 +07:00
Louis
e0ab77cb24
fix: token count error (#6680) 2025-10-01 14:07:32 +07:00
Faisal Amir
7a36ed238c
Merge pull request #6681 from menloresearch/fix/local-api-server
fix: local api server auto start first model when last used missing
2025-10-01 14:01:22 +07:00
Faisal Amir
99d1713517 fix: dropdown type assistant 2025-10-01 14:00:55 +07:00
Faisal Amir
d102165028 chore: move auto start server setting 2025-10-01 11:44:49 +07:00
Faisal Amir
199623b414 chore: clear flow local api server 2025-10-01 11:23:59 +07:00
Faisal Amir
2679b19e32 fix: local api server auto start first model when missing last used 2025-10-01 11:04:28 +07:00
Nghia Doan
c5a5968bf8
Merge pull request #6643 from menloresearch/fix/model-name-change
fix: Apply model name change correctly
2025-09-30 22:41:05 +07:00
323 changed files with 14627 additions and 2222 deletions

View File

@@ -1,5 +1,5 @@
blank_issues_enabled: true
contact_links:
- name: Jan Discussions
url: https://github.com/orgs/menloresearch/discussions/categories/q-a
url: https://github.com/orgs/janhq/discussions/categories/q-a
about: Get help, discuss features & roadmap, and share your projects

View File

@@ -12,7 +12,7 @@ jobs:
build-and-preview:
runs-on: [ubuntu-24-04-docker]
env:
JAN_API_BASE: "https://api-dev.jan.ai/v1"
MENLO_PLATFORM_BASE_URL: "https://api-dev.jan.ai/v1"
permissions:
pull-requests: write
contents: write
@@ -52,7 +52,7 @@ jobs:
- name: Build docker image
run: |
docker build --build-arg JAN_API_BASE=${{ env.JAN_API_BASE }} -t ${{ steps.vars.outputs.FULL_IMAGE }} .
docker build --build-arg MENLO_PLATFORM_BASE_URL=${{ env.MENLO_PLATFORM_BASE_URL }} -t ${{ steps.vars.outputs.FULL_IMAGE }} .
- name: Push docker image
if: github.event_name == 'push'

View File

@@ -13,7 +13,7 @@ jobs:
deployments: write
pull-requests: write
env:
JAN_API_BASE: "https://api.jan.ai/v1"
MENLO_PLATFORM_BASE_URL: "https://api.jan.ai/v1"
GA_MEASUREMENT_ID: "G-YK53MX8M8M"
CLOUDFLARE_PROJECT_NAME: "jan-server-web"
steps:
@@ -43,7 +43,7 @@ jobs:
- name: Install dependencies
run: make config-yarn && yarn install && yarn build:core && make build-web-app
env:
JAN_API_BASE: ${{ env.JAN_API_BASE }}
MENLO_PLATFORM_BASE_URL: ${{ env.MENLO_PLATFORM_BASE_URL }}
GA_MEASUREMENT_ID: ${{ env.GA_MEASUREMENT_ID }}
- name: Publish to Cloudflare Pages Production

View File

@@ -12,7 +12,7 @@ jobs:
build-and-preview:
runs-on: [ubuntu-24-04-docker]
env:
JAN_API_BASE: "https://api-stag.jan.ai/v1"
MENLO_PLATFORM_BASE_URL: "https://api-stag.jan.ai/v1"
permissions:
pull-requests: write
contents: write
@@ -52,7 +52,7 @@ jobs:
- name: Build docker image
run: |
docker build --build-arg JAN_API_BASE=${{ env.JAN_API_BASE }} -t ${{ steps.vars.outputs.FULL_IMAGE }} .
docker build --build-arg MENLO_PLATFORM_BASE_URL=${{ env.MENLO_PLATFORM_BASE_URL }} -t ${{ steps.vars.outputs.FULL_IMAGE }} .
- name: Push docker image
if: github.event_name == 'push'

View File

@@ -168,62 +168,62 @@ jobs:
AWS_DEFAULT_REGION: ${{ secrets.DELTA_AWS_REGION }}
AWS_EC2_METADATA_DISABLED: 'true'
noti-discord-nightly-and-update-url-readme:
needs:
[
build-macos,
build-windows-x64,
build-linux-x64,
get-update-version,
set-public-provider,
sync-temp-to-latest,
]
secrets: inherit
if: github.event_name == 'schedule'
uses: ./.github/workflows/template-noti-discord-and-update-url-readme.yml
with:
ref: refs/heads/dev
build_reason: Nightly
push_to_branch: dev
new_version: ${{ needs.get-update-version.outputs.new_version }}
# noti-discord-nightly-and-update-url-readme:
# needs:
# [
# build-macos,
# build-windows-x64,
# build-linux-x64,
# get-update-version,
# set-public-provider,
# sync-temp-to-latest,
# ]
# secrets: inherit
# if: github.event_name == 'schedule'
# uses: ./.github/workflows/template-noti-discord-and-update-url-readme.yml
# with:
# ref: refs/heads/dev
# build_reason: Nightly
# push_to_branch: dev
# new_version: ${{ needs.get-update-version.outputs.new_version }}
noti-discord-pre-release-and-update-url-readme:
needs:
[
build-macos,
build-windows-x64,
build-linux-x64,
get-update-version,
set-public-provider,
sync-temp-to-latest,
]
secrets: inherit
if: github.event_name == 'push'
uses: ./.github/workflows/template-noti-discord-and-update-url-readme.yml
with:
ref: refs/heads/dev
build_reason: Pre-release
push_to_branch: dev
new_version: ${{ needs.get-update-version.outputs.new_version }}
# noti-discord-pre-release-and-update-url-readme:
# needs:
# [
# build-macos,
# build-windows-x64,
# build-linux-x64,
# get-update-version,
# set-public-provider,
# sync-temp-to-latest,
# ]
# secrets: inherit
# if: github.event_name == 'push'
# uses: ./.github/workflows/template-noti-discord-and-update-url-readme.yml
# with:
# ref: refs/heads/dev
# build_reason: Pre-release
# push_to_branch: dev
# new_version: ${{ needs.get-update-version.outputs.new_version }}
noti-discord-manual-and-update-url-readme:
needs:
[
build-macos,
build-windows-x64,
build-linux-x64,
get-update-version,
set-public-provider,
sync-temp-to-latest,
]
secrets: inherit
if: github.event_name == 'workflow_dispatch' && github.event.inputs.public_provider == 'aws-s3'
uses: ./.github/workflows/template-noti-discord-and-update-url-readme.yml
with:
ref: refs/heads/dev
build_reason: Manual
push_to_branch: dev
new_version: ${{ needs.get-update-version.outputs.new_version }}
# noti-discord-manual-and-update-url-readme:
# needs:
# [
# build-macos,
# build-windows-x64,
# build-linux-x64,
# get-update-version,
# set-public-provider,
# sync-temp-to-latest,
# ]
# secrets: inherit
# if: github.event_name == 'workflow_dispatch' && github.event.inputs.public_provider == 'aws-s3'
# uses: ./.github/workflows/template-noti-discord-and-update-url-readme.yml
# with:
# ref: refs/heads/dev
# build_reason: Manual
# push_to_branch: dev
# new_version: ${{ needs.get-update-version.outputs.new_version }}
comment-pr-build-url:
needs:

View File

@@ -82,11 +82,11 @@ jobs:
VERSION=${{ needs.get-update-version.outputs.new_version }}
PUB_DATE=$(date -u +"%Y-%m-%dT%H:%M:%S.%3NZ")
LINUX_SIGNATURE="${{ needs.build-linux-x64.outputs.APPIMAGE_SIG }}"
LINUX_URL="https://github.com/menloresearch/jan/releases/download/v${{ needs.get-update-version.outputs.new_version }}/${{ needs.build-linux-x64.outputs.APPIMAGE_FILE_NAME }}"
LINUX_URL="https://github.com/janhq/jan/releases/download/v${{ needs.get-update-version.outputs.new_version }}/${{ needs.build-linux-x64.outputs.APPIMAGE_FILE_NAME }}"
WINDOWS_SIGNATURE="${{ needs.build-windows-x64.outputs.WIN_SIG }}"
WINDOWS_URL="https://github.com/menloresearch/jan/releases/download/v${{ needs.get-update-version.outputs.new_version }}/${{ needs.build-windows-x64.outputs.FILE_NAME }}"
WINDOWS_URL="https://github.com/janhq/jan/releases/download/v${{ needs.get-update-version.outputs.new_version }}/${{ needs.build-windows-x64.outputs.FILE_NAME }}"
DARWIN_SIGNATURE="${{ needs.build-macos.outputs.MAC_UNIVERSAL_SIG }}"
DARWIN_URL="https://github.com/menloresearch/jan/releases/download/v${{ needs.get-update-version.outputs.new_version }}/${{ needs.build-macos.outputs.TAR_NAME }}"
DARWIN_URL="https://github.com/janhq/jan/releases/download/v${{ needs.get-update-version.outputs.new_version }}/${{ needs.build-macos.outputs.TAR_NAME }}"
jq --arg version "$VERSION" \
--arg pub_date "$PUB_DATE" \

View File

@@ -29,7 +29,7 @@ jobs:
local max_retries=3
local tag
while [ $retries -lt $max_retries ]; do
tag=$(curl -s https://api.github.com/repos/menloresearch/jan/releases/latest | jq -r .tag_name)
tag=$(curl -s https://api.github.com/repos/janhq/jan/releases/latest | jq -r .tag_name)
if [ -n "$tag" ] && [ "$tag" != "null" ]; then
echo $tag
return

View File

@@ -50,6 +50,6 @@ jobs:
- macOS Universal: https://delta.jan.ai/nightly/Jan-nightly_{{ VERSION }}_universal.dmg
- Linux Deb: https://delta.jan.ai/nightly/Jan-nightly_{{ VERSION }}_amd64.deb
- Linux AppImage: https://delta.jan.ai/nightly/Jan-nightly_{{ VERSION }}_amd64.AppImage
- Github action run: https://github.com/menloresearch/jan/actions/runs/{{ GITHUB_RUN_ID }}
- Github action run: https://github.com/janhq/jan/actions/runs/{{ GITHUB_RUN_ID }}
env:
DISCORD_WEBHOOK: ${{ secrets.DISCORD_WEBHOOK }}

View File

@@ -49,6 +49,8 @@ jobs:
# Update tauri.conf.json
jq --arg version "${{ inputs.new_version }}" '.version = $version | .bundle.createUpdaterArtifacts = false' ./src-tauri/tauri.conf.json > /tmp/tauri.conf.json
mv /tmp/tauri.conf.json ./src-tauri/tauri.conf.json
jq '.bundle.windows.nsis.template = "tauri.bundle.windows.nsis.template"' ./src-tauri/tauri.windows.conf.json > /tmp/tauri.windows.conf.json
mv /tmp/tauri.windows.conf.json ./src-tauri/tauri.windows.conf.json
jq '.bundle.windows.signCommand = "echo External build - skipping signature: %1"' ./src-tauri/tauri.windows.conf.json > /tmp/tauri.windows.conf.json
mv /tmp/tauri.windows.conf.json ./src-tauri/tauri.windows.conf.json
jq --arg version "${{ inputs.new_version }}" '.version = $version' web-app/package.json > /tmp/package.json
@@ -80,6 +82,36 @@ jobs:
echo "---------./src-tauri/Cargo.toml---------"
cat ./src-tauri/Cargo.toml
generate_build_version() {
### Example
### input 0.5.6 output will be 0.5.6 and 0.5.6.0
### input 0.5.6-rc2-beta output will be 0.5.6 and 0.5.6.2
### input 0.5.6-1213 output will be 0.5.6 and 0.5.6.1213
local new_version="$1"
local base_version
local t_value
# Check if it has a "-"
if [[ "$new_version" == *-* ]]; then
base_version="${new_version%%-*}" # part before -
suffix="${new_version#*-}" # part after -
# Check if it is rcX-beta
if [[ "$suffix" =~ ^rc([0-9]+)-beta$ ]]; then
t_value="${BASH_REMATCH[1]}"
else
t_value="$suffix"
fi
else
base_version="$new_version"
t_value="0"
fi
# Export two values
new_base_version="$base_version"
new_build_version="${base_version}.${t_value}"
}
generate_build_version ${{ inputs.new_version }}
sed -i "s/jan_version/$new_base_version/g" ./src-tauri/tauri.bundle.windows.nsis.template
sed -i "s/jan_build/$new_build_version/g" ./src-tauri/tauri.bundle.windows.nsis.template
if [ "${{ inputs.channel }}" != "stable" ]; then
jq '.plugins.updater.endpoints = ["https://delta.jan.ai/${{ inputs.channel }}/latest.json"]' ./src-tauri/tauri.conf.json > /tmp/tauri.conf.json
mv /tmp/tauri.conf.json ./src-tauri/tauri.conf.json
@@ -103,7 +135,14 @@ jobs:
chmod +x .github/scripts/rename-workspace.sh
.github/scripts/rename-workspace.sh ./package.json ${{ inputs.channel }}
cat ./package.json
sed -i "s/jan_productname/Jan-${{ inputs.channel }}/g" ./src-tauri/tauri.bundle.windows.nsis.template
sed -i "s/jan_mainbinaryname/jan-${{ inputs.channel }}/g" ./src-tauri/tauri.bundle.windows.nsis.template
else
sed -i "s/jan_productname/Jan/g" ./src-tauri/tauri.bundle.windows.nsis.template
sed -i "s/jan_mainbinaryname/jan/g" ./src-tauri/tauri.bundle.windows.nsis.template
fi
echo "---------nsis.template---------"
cat ./src-tauri/tauri.bundle.windows.nsis.template
- name: Build app
shell: bash
run: |

View File

@@ -98,9 +98,15 @@ jobs:
# Update tauri.conf.json
jq --arg version "${{ inputs.new_version }}" '.version = $version | .bundle.createUpdaterArtifacts = true' ./src-tauri/tauri.conf.json > /tmp/tauri.conf.json
mv /tmp/tauri.conf.json ./src-tauri/tauri.conf.json
jq '.bundle.windows.nsis.template = "tauri.bundle.windows.nsis.template"' ./src-tauri/tauri.windows.conf.json > /tmp/tauri.windows.conf.json
mv /tmp/tauri.windows.conf.json ./src-tauri/tauri.windows.conf.json
jq --arg version "${{ inputs.new_version }}" '.version = $version' web-app/package.json > /tmp/package.json
mv /tmp/package.json web-app/package.json
# Add sign commands to tauri.windows.conf.json
jq '.bundle.windows.signCommand = "powershell -ExecutionPolicy Bypass -File ./sign.ps1 %1"' ./src-tauri/tauri.windows.conf.json > /tmp/tauri.windows.conf.json
mv /tmp/tauri.windows.conf.json ./src-tauri/tauri.windows.conf.json
# Update tauri plugin versions
jq --arg version "${{ inputs.new_version }}" '.version = $version' ./src-tauri/plugins/tauri-plugin-hardware/package.json > /tmp/package.json
@@ -127,9 +133,35 @@ jobs:
echo "---------./src-tauri/Cargo.toml---------"
cat ./src-tauri/Cargo.toml
# Add sign commands to tauri.windows.conf.json
jq '.bundle.windows.signCommand = "powershell -ExecutionPolicy Bypass -File ./sign.ps1 %1"' ./src-tauri/tauri.windows.conf.json > /tmp/tauri.windows.conf.json
mv /tmp/tauri.windows.conf.json ./src-tauri/tauri.windows.conf.json
generate_build_version() {
### Example
### input 0.5.6 output will be 0.5.6 and 0.5.6.0
### input 0.5.6-rc2-beta output will be 0.5.6 and 0.5.6.2
### input 0.5.6-1213 output will be 0.5.6 and 0.5.6.1213
local new_version="$1"
local base_version
local t_value
# Check if it has a "-"
if [[ "$new_version" == *-* ]]; then
base_version="${new_version%%-*}" # part before -
suffix="${new_version#*-}" # part after -
# Check if it is rcX-beta
if [[ "$suffix" =~ ^rc([0-9]+)-beta$ ]]; then
t_value="${BASH_REMATCH[1]}"
else
t_value="$suffix"
fi
else
base_version="$new_version"
t_value="0"
fi
# Export two values
new_base_version="$base_version"
new_build_version="${base_version}.${t_value}"
}
generate_build_version ${{ inputs.new_version }}
sed -i "s/jan_version/$new_base_version/g" ./src-tauri/tauri.bundle.windows.nsis.template
sed -i "s/jan_build/$new_build_version/g" ./src-tauri/tauri.bundle.windows.nsis.template
echo "---------tauri.windows.conf.json---------"
cat ./src-tauri/tauri.windows.conf.json
@@ -163,7 +195,14 @@ jobs:
chmod +x .github/scripts/rename-workspace.sh
.github/scripts/rename-workspace.sh ./package.json ${{ inputs.channel }}
cat ./package.json
sed -i "s/jan_productname/Jan-${{ inputs.channel }}/g" ./src-tauri/tauri.bundle.windows.nsis.template
sed -i "s/jan_mainbinaryname/jan-${{ inputs.channel }}/g" ./src-tauri/tauri.bundle.windows.nsis.template
else
sed -i "s/jan_productname/Jan/g" ./src-tauri/tauri.bundle.windows.nsis.template
sed -i "s/jan_mainbinaryname/jan/g" ./src-tauri/tauri.bundle.windows.nsis.template
fi
echo "---------nsis.template---------"
cat ./src-tauri/tauri.bundle.windows.nsis.template
- name: Install AzureSignTool
run: |
@@ -250,13 +289,3 @@ jobs:
asset_path: ./src-tauri/target/release/bundle/nsis/${{ steps.metadata.outputs.FILE_NAME }}
asset_name: ${{ steps.metadata.outputs.FILE_NAME }}
asset_content_type: application/octet-stream
- name: Upload release assert if public provider is github
if: inputs.public_provider == 'github'
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
uses: actions/upload-release-asset@v1.0.1
with:
upload_url: ${{ inputs.upload_url }}
asset_path: ./src-tauri/target/release/bundle/msi/${{ steps.metadata.outputs.MSI_FILE_NAME }}
asset_name: ${{ steps.metadata.outputs.MSI_FILE_NAME }}
asset_content_type: application/octet-stream

View File

@@ -143,7 +143,7 @@ jan/
**Option 1: The Easy Way (Make)**
```bash
git clone https://github.com/menloresearch/jan
git clone https://github.com/janhq/jan
cd jan
make dev
```
@@ -152,8 +152,8 @@ make dev
### Reporting Bugs
- **Ensure the bug was not already reported** by searching on GitHub under [Issues](https://github.com/menloresearch/jan/issues)
- If you're unable to find an open issue addressing the problem, [open a new one](https://github.com/menloresearch/jan/issues/new)
- **Ensure the bug was not already reported** by searching on GitHub under [Issues](https://github.com/janhq/jan/issues)
- If you're unable to find an open issue addressing the problem, [open a new one](https://github.com/janhq/jan/issues/new)
- Include your system specs and error logs - it helps a ton
### Suggesting Enhancements

View File

@ -1,8 +1,8 @@
# Stage 1: Build stage with Node.js and Yarn v4
FROM node:20-alpine AS builder
ARG JAN_API_BASE=https://api-dev.jan.ai/v1
ENV JAN_API_BASE=$JAN_API_BASE
ARG MENLO_PLATFORM_BASE_URL=https://api-dev.menlo.ai/v1
ENV MENLO_PLATFORM_BASE_URL=$MENLO_PLATFORM_BASE_URL
# Install build dependencies
RUN apk add --no-cache \

View File

@@ -117,7 +117,6 @@ lint: install-and-build
test: lint
yarn download:bin
ifeq ($(OS),Windows_NT)
yarn download:windows-installer
endif
yarn test
yarn copy:assets:tauri

View File

@@ -4,10 +4,10 @@
<p align="center">
<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
<img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/menloresearch/jan"/>
<img alt="Github Last Commit" src="https://img.shields.io/github/last-commit/menloresearch/jan"/>
<img alt="Github Contributors" src="https://img.shields.io/github/contributors/menloresearch/jan"/>
<img alt="GitHub closed issues" src="https://img.shields.io/github/issues-closed/menloresearch/jan"/>
<img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/janhq/jan"/>
<img alt="Github Last Commit" src="https://img.shields.io/github/last-commit/janhq/jan"/>
<img alt="Github Contributors" src="https://img.shields.io/github/contributors/janhq/jan"/>
<img alt="GitHub closed issues" src="https://img.shields.io/github/issues-closed/janhq/jan"/>
<img alt="Discord" src="https://img.shields.io/discord/1107178041848909847?label=discord"/>
</p>
@@ -15,7 +15,7 @@
<a href="https://www.jan.ai/docs/desktop">Getting Started</a>
- <a href="https://discord.gg/Exe46xPMbK">Community</a>
- <a href="https://jan.ai/changelog">Changelog</a>
- <a href="https://github.com/menloresearch/jan/issues">Bug reports</a>
- <a href="https://github.com/janhq/jan/issues">Bug reports</a>
</p>
Jan is bringing the best of open-source AI in an easy-to-use product. Download and run LLMs with **full control** and **privacy**.
@@ -48,7 +48,7 @@ The easiest way to get started is by downloading one of the following versions f
</table>
Download from [jan.ai](https://jan.ai/) or [GitHub Releases](https://github.com/menloresearch/jan/releases).
Download from [jan.ai](https://jan.ai/) or [GitHub Releases](https://github.com/janhq/jan/releases).
## Features
@@ -73,7 +73,7 @@ For those who enjoy the scenic route:
### Run with Make
```bash
git clone https://github.com/menloresearch/jan
git clone https://github.com/janhq/jan
cd jan
make dev
```
@@ -128,7 +128,7 @@ Contributions welcome. See [CONTRIBUTING.md](CONTRIBUTING.md) for the full spiel
## Contact
- **Bugs**: [GitHub Issues](https://github.com/menloresearch/jan/issues)
- **Bugs**: [GitHub Issues](https://github.com/janhq/jan/issues)
- **Business**: hello@jan.ai
- **Jobs**: hr@jan.ai
- **General Discussion**: [Discord](https://discord.gg/FTk2MvZwJH)

View File

@@ -1,7 +1,7 @@
# Core dependencies
cua-computer[all]~=0.3.5
cua-agent[all]~=0.3.0
cua-agent @ git+https://github.com/menloresearch/cua.git@compute-agent-0.3.0-patch#subdirectory=libs/python/agent
cua-agent @ git+https://github.com/janhq/cua.git@compute-agent-0.3.0-patch#subdirectory=libs/python/agent
# ReportPortal integration
reportportal-client~=5.6.5

View File

@@ -25,8 +25,8 @@ export RANLIB_aarch64_linux_android="$NDK_HOME/toolchains/llvm/prebuilt/darwin-x
# Additional environment variables for Rust cross-compilation
export CARGO_TARGET_AARCH64_LINUX_ANDROID_LINKER="$NDK_HOME/toolchains/llvm/prebuilt/darwin-x86_64/bin/aarch64-linux-android21-clang"
# Only set global CC and AR for Android builds (when TAURI_ANDROID_BUILD is set)
if [ "$TAURI_ANDROID_BUILD" = "true" ]; then
# Only set global CC and AR for Android builds (when IS_ANDROID is set)
if [ "$IS_ANDROID" = "true" ]; then
export CC="$NDK_HOME/toolchains/llvm/prebuilt/darwin-x86_64/bin/aarch64-linux-android21-clang"
export AR="$NDK_HOME/toolchains/llvm/prebuilt/darwin-x86_64/bin/llvm-ar"
echo "Global CC and AR set for Android build"

View File

@ -13,7 +13,7 @@ import * as core from '@janhq/core'
## Build an Extension
1. Download an extension template, for example, [https://github.com/menloresearch/extension-template](https://github.com/menloresearch/extension-template).
1. Download an extension template, for example, [https://github.com/janhq/extension-template](https://github.com/janhq/extension-template).
2. Update the source code:

View File

@ -31,7 +31,7 @@
"@vitest/coverage-v8": "^2.1.8",
"@vitest/ui": "^2.1.8",
"eslint": "8.57.0",
"happy-dom": "^15.11.6",
"happy-dom": "^20.0.0",
"pacote": "^21.0.0",
"react": "19.0.0",
"request": "^2.88.2",

View File

@ -11,6 +11,8 @@ export enum ExtensionTypeEnum {
HuggingFace = 'huggingFace',
Engine = 'engine',
Hardware = 'hardware',
RAG = 'rag',
VectorDB = 'vectorDB',
}
export interface ExtensionType {

View File

@ -182,6 +182,7 @@ export interface SessionInfo {
port: number // llama-server output port (corrected from portid)
model_id: string //name of the model
model_path: string // path of the loaded model
is_embedding: boolean
api_key: string
mmproj_path?: string
}

View File

@ -23,3 +23,8 @@ export { MCPExtension } from './mcp'
* Base AI Engines.
*/
export * from './engines'
export { RAGExtension, RAG_INTERNAL_SERVER } from './rag'
export type { AttachmentInput, IngestAttachmentsResult } from './rag'
export { VectorDBExtension } from './vector-db'
export type { SearchMode, VectorDBStatus, VectorChunkInput, VectorSearchResult, AttachmentFileInfo, VectorDBFileInput, VectorDBIngestOptions } from './vector-db'

View File

@ -0,0 +1,36 @@
import { BaseExtension, ExtensionTypeEnum } from '../extension'
import type { MCPTool, MCPToolCallResult } from '../../types'
import type { AttachmentFileInfo } from './vector-db'
export interface AttachmentInput {
path: string
name?: string
type?: string
size?: number
}
export interface IngestAttachmentsResult {
filesProcessed: number
chunksInserted: number
files: AttachmentFileInfo[]
}
export const RAG_INTERNAL_SERVER = 'rag-internal'
/**
* RAG extension base: exposes RAG tools and orchestration API.
*/
export abstract class RAGExtension extends BaseExtension {
type(): ExtensionTypeEnum | undefined {
return ExtensionTypeEnum.RAG
}
abstract getTools(): Promise<MCPTool[]>
/**
* Lightweight list of tool names for quick routing/lookup.
*/
abstract getToolNames(): Promise<string[]>
abstract callTool(toolName: string, args: Record<string, unknown>): Promise<MCPToolCallResult>
abstract ingestAttachments(threadId: string, files: AttachmentInput[]): Promise<IngestAttachmentsResult>
}
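
For context, a minimal concrete subclass might look like the sketch below. The tool name, the ingestion behaviour, the import path, and the exact `MCPToolCallResult` shape are assumptions for illustration; only the abstract API above is taken from the source, and any lifecycle hooks inherited from `BaseExtension` (such as `onLoad`/`onUnload`) are omitted.

```typescript
import { RAGExtension } from '@janhq/core'
import type { AttachmentInput, IngestAttachmentsResult } from '@janhq/core'
import type { MCPTool, MCPToolCallResult } from '@janhq/core'

// Hypothetical extension: names and result shapes below are illustrative only.
export default class SimpleRAGExtension extends RAGExtension {
  async getTools(): Promise<MCPTool[]> {
    // A real extension would describe its retrieval tools here.
    return []
  }

  async getToolNames(): Promise<string[]> {
    return ['retrieve_attachment_chunks'] // assumed tool name
  }

  async callTool(toolName: string, args: Record<string, unknown>): Promise<MCPToolCallResult> {
    // The exact MCPToolCallResult shape is not shown in this diff, hence the cast.
    return {
      content: [{ type: 'text', text: `${toolName}(${JSON.stringify(args)})` }],
    } as unknown as MCPToolCallResult
  }

  async ingestAttachments(threadId: string, files: AttachmentInput[]): Promise<IngestAttachmentsResult> {
    // A real implementation would chunk, embed, and store each file for the thread.
    return { filesProcessed: files.length, chunksInserted: 0, files: [] }
  }
}
```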

View File

@ -0,0 +1,82 @@
import { BaseExtension, ExtensionTypeEnum } from '../extension'
export type SearchMode = 'auto' | 'ann' | 'linear'
export interface VectorDBStatus {
ann_available: boolean
}
export interface VectorChunkInput {
text: string
embedding: number[]
}
export interface VectorSearchResult {
id: string
text: string
score?: number
file_id: string
chunk_file_order: number
}
export interface AttachmentFileInfo {
id: string
name?: string
path?: string
type?: string
size?: number
chunk_count: number
}
// High-level input types for file ingestion
export interface VectorDBFileInput {
path: string
name?: string
type?: string
size?: number
}
export interface VectorDBIngestOptions {
chunkSize: number
chunkOverlap: number
}
/**
* Vector DB extension base: abstraction over local vector storage and search.
*/
export abstract class VectorDBExtension extends BaseExtension {
type(): ExtensionTypeEnum | undefined {
return ExtensionTypeEnum.VectorDB
}
abstract getStatus(): Promise<VectorDBStatus>
abstract createCollection(threadId: string, dimension: number): Promise<void>
abstract insertChunks(
threadId: string,
fileId: string,
chunks: VectorChunkInput[]
): Promise<void>
abstract ingestFile(
threadId: string,
file: VectorDBFileInput,
opts: VectorDBIngestOptions
): Promise<AttachmentFileInfo>
abstract searchCollection(
threadId: string,
query_embedding: number[],
limit: number,
threshold: number,
mode?: SearchMode,
fileIds?: string[]
): Promise<VectorSearchResult[]>
abstract deleteChunks(threadId: string, ids: string[]): Promise<void>
abstract deleteFile(threadId: string, fileId: string): Promise<void>
abstract deleteCollection(threadId: string): Promise<void>
abstract listAttachments(threadId: string, limit?: number): Promise<AttachmentFileInfo[]>
abstract getChunks(
threadId: string,
fileId: string,
startOrder: number,
endOrder: number
): Promise<VectorSearchResult[]>
}
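
As a rough usage sketch (not part of the diff), a caller that already has a query embedding could drive this API as shown below. The chunking options, the score threshold, and the assumption that `createCollection` is safe to call when a collection already exists are all illustrative.

```typescript
import type { VectorDBExtension, VectorDBFileInput, VectorSearchResult } from '@janhq/core'

// Illustrative chunking options; the defaults Jan actually uses are not shown in this diff.
const INGEST_OPTS = { chunkSize: 512, chunkOverlap: 64 }

async function ingestAndSearch(
  vectorDB: VectorDBExtension,
  threadId: string,
  file: VectorDBFileInput,
  queryEmbedding: number[] // produced elsewhere, e.g. by an embedding model session
): Promise<VectorSearchResult[]> {
  // Size the thread's collection to the embedding dimension.
  await vectorDB.createCollection(threadId, queryEmbedding.length)

  // Chunk, embed, and store the file; the returned info includes chunk_count.
  const info = await vectorDB.ingestFile(threadId, file, INGEST_OPTS)
  console.log(`Ingested ${info.name ?? file.path}: ${info.chunk_count} chunks`)

  // Retrieve the closest chunks, letting the extension choose ANN vs linear search.
  return vectorDB.searchCollection(threadId, queryEmbedding, 5, 0.3, 'auto', [info.id])
}
```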

View File

@ -12,6 +12,8 @@ export type SettingComponentProps = {
extensionName?: string
requireModelReload?: boolean
configType?: ConfigType
titleKey?: string
descriptionKey?: string
}
export type ConfigType = 'runtime' | 'setting'

View File

@ -18,7 +18,7 @@ We try to **keep routes consistent** to maintain SEO.
## How to Contribute
Refer to the [Contributing Guide](https://github.com/menloresearch/jan/blob/main/CONTRIBUTING.md) for more comprehensive information on how to contribute to the Jan project.
Refer to the [Contributing Guide](https://github.com/janhq/jan/blob/main/CONTRIBUTING.md) for more comprehensive information on how to contribute to the Jan project.
### Pre-requisites and Installation

View File

@ -1581,7 +1581,7 @@
},
"cover": {
"type": "string",
"example": "https://raw.githubusercontent.com/menloresearch/jan/main/models/trinity-v1.2-7b/cover.png"
"example": "https://raw.githubusercontent.com/janhq/jan/main/models/trinity-v1.2-7b/cover.png"
},
"engine": {
"type": "string",

View File

@ -27,7 +27,7 @@ export const APIReference = () => {
<ApiReferenceReact
configuration={{
spec: {
url: 'https://raw.githubusercontent.com/menloresearch/docs/main/public/openapi/jan.json',
url: 'https://raw.githubusercontent.com/janhq/docs/main/public/openapi/jan.json',
},
theme: 'alternate',
hideModels: true,

View File

@ -57,7 +57,7 @@ const Changelog = () => {
<p className="text-base mt-2 leading-relaxed">
Latest release updates from the Jan team. Check out our&nbsp;
<a
href="https://github.com/orgs/menloresearch/projects/30"
href="https://github.com/orgs/janhq/projects/30"
className="text-blue-600 dark:text-blue-400 cursor-pointer"
>
Roadmap
@ -150,7 +150,7 @@ const Changelog = () => {
<div className="text-center">
<Link
href="https://github.com/menloresearch/jan/releases"
href="https://github.com/janhq/jan/releases"
target="_blank"
className="dark:nx-bg-neutral-900 dark:text-white bg-black text-white hover:text-white justify-center dark:border dark:border-neutral-800 flex-shrink-0 px-4 py-3 rounded-xl inline-flex items-center"
>

View File

@ -72,7 +72,7 @@ export default function CardDownload({ lastRelease }: Props) {
return {
...system,
href: `https://github.com/menloresearch/jan/releases/download/${lastRelease.tag_name}/${downloadUrl}`,
href: `https://github.com/janhq/jan/releases/download/${lastRelease.tag_name}/${downloadUrl}`,
size: asset ? formatFileSize(asset.size) : undefined,
}
})

View File

@ -139,7 +139,7 @@ const DropdownDownload = ({ lastRelease }: Props) => {
return {
...system,
href: `https://github.com/menloresearch/jan/releases/download/${lastRelease.tag_name}/${downloadUrl}`,
href: `https://github.com/janhq/jan/releases/download/${lastRelease.tag_name}/${downloadUrl}`,
size: asset ? formatFileSize(asset.size) : undefined,
}
})

View File

@ -23,7 +23,7 @@ const BuiltWithLove = () => {
</div>
<div className="flex flex-col lg:flex-row gap-8 mt-8 items-center justify-center">
<a
href="https://github.com/menloresearch/jan"
href="https://github.com/janhq/jan"
target="_blank"
className="dark:bg-white bg-black inline-flex w-56 px-4 py-3 rounded-xl cursor-pointer justify-center items-start space-x-4 "
>

View File

@ -44,7 +44,7 @@ const Hero = () => {
<div className="mt-10 text-center">
<div>
<Link
href="https://github.com/menloresearch/jan/releases"
href="https://github.com/janhq/jan/releases"
target="_blank"
className="hidden lg:inline-block"
>

View File

@ -95,7 +95,7 @@ const Home = () => {
<div className="container mx-auto relative z-10">
<div className="flex justify-center items-center mt-14 lg:mt-20 px-4">
<a
href={`https://github.com/menloresearch/jan/releases/tag/${lastVersion}`}
href={`https://github.com/janhq/jan/releases/tag/${lastVersion}`}
target="_blank"
rel="noopener noreferrer"
className="bg-black/40 px-3 lg:px-4 rounded-full h-10 inline-flex items-center max-w-full animate-fade-in delay-100"
@ -270,7 +270,7 @@ const Home = () => {
data-delay="600"
>
<a
href="https://github.com/menloresearch/jan"
href="https://github.com/janhq/jan"
target="_blank"
rel="noopener noreferrer"
>
@ -387,7 +387,7 @@ const Home = () => {
</div>
<a
className="hidden md:block"
href="https://github.com/menloresearch/jan"
href="https://github.com/janhq/jan"
target="_blank"
rel="noopener noreferrer"
>
@ -413,7 +413,7 @@ const Home = () => {
</p>
<a
className="md:hidden mt-4 block w-full"
href="https://github.com/menloresearch/jan"
href="https://github.com/janhq/jan"
target="_blank"
rel="noopener noreferrer"
>

View File

@ -95,7 +95,7 @@ const Navbar = ({ noScroll }: { noScroll?: boolean }) => {
})}
<li>
<a
href="https://github.com/menloresearch/jan/releases/latest"
href="https://github.com/janhq/jan/releases/latest"
target="_blank"
rel="noopener noreferrer"
>
@ -141,7 +141,7 @@ const Navbar = ({ noScroll }: { noScroll?: boolean }) => {
<FaLinkedinIn className="size-5" />
</a>
<a
href="https://github.com/menloresearch/jan"
href="https://github.com/janhq/jan"
target="_blank"
rel="noopener noreferrer"
className="rounded-lg flex items-center justify-center"
@ -156,7 +156,7 @@ const Navbar = ({ noScroll }: { noScroll?: boolean }) => {
{/* Mobile Download Button and Hamburger */}
<div className="lg:hidden flex items-center gap-3">
<a
href="https://github.com/menloresearch/jan/releases/latest"
href="https://github.com/janhq/jan/releases/latest"
target="_blank"
rel="noopener noreferrer"
>
@ -278,7 +278,7 @@ const Navbar = ({ noScroll }: { noScroll?: boolean }) => {
<FaLinkedinIn className="size-5" />
</a>
<a
href="https://github.com/menloresearch/jan"
href="https://github.com/janhq/jan"
target="_blank"
rel="noopener noreferrer"
className="text-black rounded-lg flex items-center justify-center"
@ -296,7 +296,7 @@ const Navbar = ({ noScroll }: { noScroll?: boolean }) => {
asChild
>
<a
href="https://github.com/menloresearch/jan/releases/latest"
href="https://github.com/janhq/jan/releases/latest"
target="_blank"
rel="noopener noreferrer"
>

View File

@ -120,7 +120,7 @@ export function DropdownButton({
return {
...option,
href: `https://github.com/menloresearch/jan/releases/download/${lastRelease.tag_name}/${fileName}`,
href: `https://github.com/janhq/jan/releases/download/${lastRelease.tag_name}/${fileName}`,
size: asset ? formatFileSize(asset.size) : 'N/A',
}
})

View File

@ -18,7 +18,7 @@ description: Development setup, workflow, and contribution guidelines for Jan Se
1. **Clone Repository**
```bash
git clone https://github.com/menloresearch/jan-server
git clone https://github.com/janhq/jan-server
cd jan-server
```

View File

@ -19,7 +19,7 @@ Jan Server currently supports minikube for local development. Production Kuberne
1. **Clone the repository**
```bash
git clone https://github.com/menloresearch/jan-server
git clone https://github.com/janhq/jan-server
cd jan-server
```

View File

@ -24,4 +24,4 @@ Fixes 💫
Update your product or download the latest: https://jan.ai
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.5).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.5).

View File

@ -24,4 +24,4 @@ Jan now supports Mistral's new model Codestral. Thanks [Bartowski](https://huggi
More GGUF models can run in Jan - we rebased to llama.cpp b3012. Big thanks to [ggerganov](https://github.com/ggerganov)
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.0).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.0).

View File

@ -28,4 +28,4 @@ Jan now understands LaTeX, allowing users to process and understand complex math
![Latex](https://catalog.jan.ai/docs/jan_update_latex.gif)
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.4.12).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.4.12).

View File

@ -28,4 +28,4 @@ Users can now connect to OpenAI's new model GPT-4o.
![GPT4o](https://catalog.jan.ai/docs/jan_v0_4_13_openai_gpt4o.gif)
For more details, see the [GitHub release notes.](https://github.com/menloresearch/jan/releases/tag/v0.4.13)
For more details, see the [GitHub release notes.](https://github.com/janhq/jan/releases/tag/v0.4.13)

View File

@ -16,4 +16,4 @@ More GGUF models can run in Jan - we rebased to llama.cpp b2961.
Huge shoutouts to [ggerganov](https://github.com/ggerganov) and contributors for llama.cpp, and [Bartowski](https://huggingface.co/bartowski) for GGUF models.
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.4.14).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.4.14).

View File

@ -26,4 +26,4 @@ We've updated to llama.cpp b3088 for better performance - thanks to [GG](https:/
- Reduced chat font weight (back to normal!)
- Restored the maximize button
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.1).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.1).

View File

@ -32,4 +32,4 @@ We've restored the tooltip hover functionality, which makes it easier to access
The right-click options for thread settings are now fully operational again. You can now manage your threads with this fix.
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.2).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.2).

View File

@ -23,4 +23,4 @@ We've been working on stability issues over the last few weeks. Jan is now more
- Fixed the GPU memory utilization bar
- Some UX and copy improvements
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.3).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.3).

View File

@ -32,4 +32,4 @@ Switching between threads used to reset your instruction settings. That's fixed
### Minor UI Tweaks & Bug Fixes
We've also resolved issues with the input slider on the right panel and tackled several smaller bugs to keep everything running smoothly.
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.4).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.4).

View File

@ -23,4 +23,4 @@ Fixes 💫
Update your product or download the latest: https://jan.ai
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.7).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.7).

View File

@ -22,4 +22,4 @@ Jan v0.5.9 is here: fixing what needed fixing
Update your product or download the latest: https://jan.ai
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.9).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.9).

View File

@ -22,4 +22,4 @@ and various UI/UX enhancements 💫
Update your product or download the latest: https://jan.ai
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.8).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.8).

View File

@ -19,4 +19,4 @@ Jan v0.5.10 is live: Jan is faster, smoother, and more reliable.
Update your product or download the latest: https://jan.ai
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.10).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.10).

View File

@ -23,4 +23,4 @@ Jan v0.5.11 is here - critical issues fixed, Mac installation updated.
Update your product or download the latest: https://jan.ai
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.11).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.11).

View File

@ -25,4 +25,4 @@ Jan v0.5.11 is here - critical issues fixed, Mac installation updated.
Update your product or download the latest: https://jan.ai
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.12).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.12).

View File

@ -20,4 +20,4 @@ import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
Update your product or download the latest: https://jan.ai
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.13).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.13).

View File

@ -33,4 +33,4 @@ Llama
Update your Jan or [download the latest](https://jan.ai/).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.14).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.14).

View File

@ -25,4 +25,4 @@ import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
Update your Jan or [download the latest](https://jan.ai/).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.15).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.15).

View File

@ -26,4 +26,4 @@ import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
Update your Jan or [download the latest](https://jan.ai/).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.16).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.16).

View File

@ -20,4 +20,4 @@ import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
Update your Jan or [download the latest](https://jan.ai/).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.5.17).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.17).

View File

@ -18,4 +18,4 @@ import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
Update your Jan or [download the latest](https://jan.ai/).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.6.1).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.6.1).

View File

@ -18,4 +18,4 @@ import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
Update your Jan or [download the latest](https://jan.ai/).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.6.3).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.6.3).

View File

@ -23,4 +23,4 @@ new MCP examples.
Update your Jan or [download the latest](https://jan.ai/).
For more details, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.6.5).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.6.5).

View File

@ -116,4 +116,4 @@ integrations. Stay tuned!
Update your Jan or [download the latest](https://jan.ai/).
For the complete list of changes, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.6.6).
For the complete list of changes, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.6.6).

View File

@ -89,4 +89,4 @@ We're continuing to optimize performance for large models, expand MCP integratio
Update your Jan or [download the latest](https://jan.ai/).
For the complete list of changes, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.6.7).
For the complete list of changes, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.6.7).

View File

@ -74,4 +74,4 @@ v0.6.8 focuses on stability and real workflows: major llama.cpp hardening, two n
Update your Jan or [download the latest](https://jan.ai/).
For the complete list of changes, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.6.8).
For the complete list of changes, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.6.8).

View File

@ -1,6 +1,6 @@
---
title: "Jan v0.7.0: Jan Projects"
version: v0.7.0
version: 0.7.0
description: "Jan v0.7.0 introduces Projects, model renaming, llama.cpp auto-tuning, model stats, and Azure support."
date: 2025-10-02
ogImage: "/assets/images/changelog/jan-release-v0.7.0.jpeg"

View File

@ -0,0 +1,26 @@
---
title: "Jan v0.7.1: Fixes Windows Version Revert & OpenRouter Models"
version: 0.7.1
description: "Jan v0.7.1 focuses on bug fixes, including a windows version revert and improvements to OpenRouter models."
date: 2025-10-03
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
import { Callout } from 'nextra/components'
<ChangelogHeader title="Jan v0.7.1" date="2025-10-03" />
### Bug Fixes: Windows Version Revert & OpenRouter Models
#### Quick fixes:
- Jan no longer reverts to an older version on load
- OpenRouter can now add models again
- Anthropic requests now include the headers needed to fetch models
---
Update your Jan or [download the latest version](https://jan.ai/).
For the complete list of changes, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.7.1).

View File

@ -0,0 +1,25 @@
---
title: "Jan v0.7.2: Security Update"
version: 0.7.2
description: "Jan v0.7.2 updates the happy-dom dependency to v20.0.0 to address a recently disclosed sandbox vulnerability."
date: 2025-10-16
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
import { Callout } from 'nextra/components'
<ChangelogHeader title="Jan v0.7.2" date="2025-10-16" />
## Jan v0.7.2: Security Update (happy-dom v20)
This release focuses on **security and stability improvements**.
It updates the `happy-dom` dependency to the latest version to address a recently disclosed vulnerability.
### Security Fix
- Updated `happy-dom` to **^20.0.0**, preventing untrusted JavaScript executed within Happy DOM from accessing process-level functions and executing arbitrary code outside the intended sandbox.
---
Update your Jan or [download the latest version](https://jan.ai/).
For the complete list of changes, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.7.2).

View File

@ -41,7 +41,7 @@ Jan is an open-source replacement for ChatGPT:
Jan is a full [product suite](https://en.wikipedia.org/wiki/Software_suite) that offers an alternative to Big AI:
- [Jan Desktop](/docs/desktop/quickstart): macOS, Windows, and Linux apps with offline mode
- [Jan Web](https://chat.jan.ai): Jan on browser, a direct alternative to chatgpt.com
- [Jan Web](https://chat.menlo.ai): Jan on browser, a direct alternative to chatgpt.com
- Jan Mobile: iOS and Android apps (Coming Soon)
- [Jan Server](/docs/server): deploy locally, in your cloud, or on-prem
- [Jan Models](/docs/models): Open-source models optimized for deep research, tool use, and reasoning

View File

@ -135,5 +135,5 @@ Min-p: 0.0
## 🤝 Community & Support
- **Discussions**: [HuggingFace Community](https://huggingface.co/Menlo/Jan-nano-128k/discussions)
- **Issues**: [GitHub Repository](https://github.com/menloresearch/deep-research/issues)
- **Issues**: [GitHub Repository](https://github.com/janhq/deep-research/issues)
- **Discord**: Join our research community for tips and best practices

View File

@ -9,7 +9,7 @@ Jan Server is a comprehensive self-hosted AI server platform that provides OpenA
Jan Server is a Kubernetes-native platform consisting of multiple microservices that work together to provide a complete AI infrastructure solution. It offers:
![System Architecture Diagram](https://raw.githubusercontent.com/menloresearch/jan-server/main/docs/Architect.png)
![System Architecture Diagram](https://raw.githubusercontent.com/janhq/jan-server/main/docs/Architect.png)
### Key Features
- **OpenAI-Compatible API**: Full compatibility with OpenAI's chat completion API

View File

@ -3,7 +3,7 @@ title: Development
description: Development setup, workflow, and contribution guidelines for Jan Server.
---
## Core Domain Models
![Domain Models Diagram](https://github.com/menloresearch/jan-server/raw/main/apps/jan-api-gateway/docs/System_Design.png)
![Domain Models Diagram](https://github.com/janhq/jan-server/raw/main/apps/jan-api-gateway/docs/System_Design.png)
## Development Setup
### Prerequisites
@ -42,7 +42,7 @@ description: Development setup, workflow, and contribution guidelines for Jan Se
1. **Clone Repository**
```bash
git clone https://github.com/menloresearch/jan-server
git clone https://github.com/janhq/jan-server
cd jan-server
```

View File

@ -40,7 +40,7 @@ Jan Server is a Kubernetes-native platform consisting of multiple microservices
- **Monitoring & Profiling**: Built-in performance monitoring and health checks
## System Architecture
![System Architecture Diagram](https://raw.githubusercontent.com/menloresearch/jan-server/main/docs/Architect.png)
![System Architecture Diagram](https://raw.githubusercontent.com/janhq/jan-server/main/docs/Architect.png)
## Services
### Jan API Gateway

View File

@ -19,7 +19,7 @@ keywords:
import Download from "@/components/Download"
export const getStaticProps = async() => {
const resRelease = await fetch('https://api.github.com/repos/menloresearch/jan/releases/latest')
const resRelease = await fetch('https://api.github.com/repos/janhq/jan/releases/latest')
const release = await resRelease.json()
return {

View File

@ -19,9 +19,9 @@ keywords:
import Home from "@/components/Home"
export const getStaticProps = async() => {
const resReleaseLatest = await fetch('https://api.github.com/repos/menloresearch/jan/releases/latest')
const resRelease = await fetch('https://api.github.com/repos/menloresearch/jan/releases?per_page=500')
const resRepo = await fetch('https://api.github.com/repos/menloresearch/jan')
const resReleaseLatest = await fetch('https://api.github.com/repos/janhq/jan/releases/latest')
const resRelease = await fetch('https://api.github.com/repos/janhq/jan/releases?per_page=500')
const resRepo = await fetch('https://api.github.com/repos/janhq/jan')
const repo = await resRepo.json()
const latestRelease = await resReleaseLatest.json()
const release = await resRelease.json()

View File

@ -14,12 +14,12 @@ import CTABlog from '@/components/Blog/CTA'
Jan now supports [NVIDIA TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) in addition to [llama.cpp](https://github.com/ggerganov/llama.cpp), making Jan multi-engine and ultra-fast for users with Nvidia GPUs.
We've been excited for TensorRT-LLM for a while, and [had a lot of fun implementing it](https://github.com/menloresearch/nitro-tensorrt-llm). As part of the process, we've run some benchmarks, to see how TensorRT-LLM fares on consumer hardware (e.g. [4090s](https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/), [3090s](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/)) we commonly see in the [Jan's hardware community](https://discord.com/channels/1107178041848909847/1201834752206974996).
We've been excited for TensorRT-LLM for a while, and [had a lot of fun implementing it](https://github.com/janhq/nitro-tensorrt-llm). As part of the process, we've run some benchmarks, to see how TensorRT-LLM fares on consumer hardware (e.g. [4090s](https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/), [3090s](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/)) we commonly see in the [Jan's hardware community](https://discord.com/channels/1107178041848909847/1201834752206974996).
<Callout type="info" >
**Give it a try!** Jan's TensorRT-LLM extension is available in Jan v0.4.9. We precompiled some TensorRT-LLM models for you to try: `Mistral 7b`, `TinyLlama-1.1b`, `TinyJensen-1.1b` 😂
Bugs or feedback? Let us know on [GitHub](https://github.com/menloresearch/jan) or via [Discord](https://discord.com/channels/1107178041848909847/1201832734704795688).
Bugs or feedback? Let us know on [GitHub](https://github.com/janhq/jan) or via [Discord](https://discord.com/channels/1107178041848909847/1201832734704795688).
</Callout>
<Callout type="info" >

View File

@ -70,34 +70,34 @@ brief survey of how other players approach deep research:
| Kimi | Interactive synthesis | 50-100 | 30-60+ | PDF, Interactive website | Free |
In our testing, we used the following prompt to assess the quality of the generated report by
the providers above. You can refer to the reports generated [here](https://github.com/menloresearch/prompt-experiments).
the providers above. You can refer to the reports generated [here](https://github.com/janhq/prompt-experiments).
```
Generate a comprehensive report about the state of AI in the past week. Include all
new model releases and notable architectural improvements from a variety of sources.
```
[Google's generated report](https://github.com/menloresearch/prompt-experiments/blob/main/Gemini%202.5%20Flash%20Report.pdf) was the most verbose, with a whopping 23 pages that reads
[Google's generated report](https://github.com/janhq/prompt-experiments/blob/main/Gemini%202.5%20Flash%20Report.pdf) was the most verbose, with a whopping 23 pages that reads
like a professional intelligence briefing. It opens with an executive summary,
systematically categorizes developments, and provides forward-looking strategic
insights—connecting OpenAI's open-weight release to broader democratization trends
and linking infrastructure investments to competitive positioning.
[OpenAI](https://github.com/menloresearch/prompt-experiments/blob/main/OpenAI%20Deep%20Research.pdf) produced the most citation-heavy output with 134 references throughout 10 pages
[OpenAI](https://github.com/janhq/prompt-experiments/blob/main/OpenAI%20Deep%20Research.pdf) produced the most citation-heavy output with 134 references throughout 10 pages
(albeit most of them being from the same source).
[Perplexity](https://github.com/menloresearch/prompt-experiments/blob/main/Perplexity%20Deep%20Research.pdf) delivered the most actionable 6-page report that maximizes information
[Perplexity](https://github.com/janhq/prompt-experiments/blob/main/Perplexity%20Deep%20Research.pdf) delivered the most actionable 6-page report that maximizes information
density while maintaining scannability. Despite being the shortest, it captures all
major developments with sufficient context for decision-making.
[Claude](https://github.com/menloresearch/prompt-experiments/blob/main/Claude%20Deep%20Research.pdf) produced a comprehensive analysis that interestingly ignored the time constraint,
[Claude](https://github.com/janhq/prompt-experiments/blob/main/Claude%20Deep%20Research.pdf) produced a comprehensive analysis that interestingly ignored the time constraint,
covering an 8-month period from January-August 2025 instead of the requested week (Jul 31-Aug
7th 2025). Rather than cataloging recent events, Claude traced the evolution of trends over months.
[Grok](https://github.com/menloresearch/prompt-experiments/blob/main/Grok%203%20Deep%20Research.pdf) produced a well-structured but relatively shallow 5-page academic-style report that
[Grok](https://github.com/janhq/prompt-experiments/blob/main/Grok%203%20Deep%20Research.pdf) produced a well-structured but relatively shallow 5-page academic-style report that
read more like an event catalog than strategic analysis.
[Kimi](https://github.com/menloresearch/prompt-experiments/blob/main/Kimi%20AI%20Deep%20Research.pdf) produced a comprehensive 13-page report with systematic organization covering industry developments, research breakthroughs, and policy changes, but notably lacks proper citations throughout most of the content despite claiming to use 50-100 sources.
[Kimi](https://github.com/janhq/prompt-experiments/blob/main/Kimi%20AI%20Deep%20Research.pdf) produced a comprehensive 13-page report with systematic organization covering industry developments, research breakthroughs, and policy changes, but notably lacks proper citations throughout most of the content despite claiming to use 50-100 sources.
### Understanding Search Strategies

View File

@ -13,7 +13,7 @@ import CTABlog from '@/components/Blog/CTA'
## Abstract
We present a straightforward approach to customizing small, open-source models using fine-tuning and RAG that outperforms GPT-3.5 for specialized use cases. With it, we achieved superior Q&A results on the [technical documentation](https://nitro.jan.ai/docs) of a small [codebase](https://github.com/menloresearch/nitro).
We present a straightforward approach to customizing small, open-source models using fine-tuning and RAG that outperforms GPT-3.5 for specialized use cases. With it, we achieved superior Q&A results on the [technical documentation](https://nitro.jan.ai/docs) of a small [codebase](https://github.com/janhq/nitro).
In short, (1) extending a general foundation model like [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1) with strong math and coding, and (2) training it over a high-quality, synthetic dataset generated from the intended corpus, and (3) adding RAG capabilities, can lead to significant accuracy improvements.
@ -93,11 +93,11 @@ This final model can be found [here on Huggingface](https://huggingface.co/jan-h
As an additional step, we also added [Retrieval Augmented Generation (RAG)](https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/) as an experiment parameter.
A simple RAG setup was done using **[Llamaindex](https://www.llamaindex.ai/)** and the **[bge-en-base-v1.5 embedding](https://huggingface.co/BAAI/bge-base-en-v1.5)** model for efficient documentation retrieval and question-answering. You can find the RAG implementation [here](https://github.com/menloresearch/open-foundry/blob/main/rag-is-not-enough/rag/nitro_rag.ipynb).
A simple RAG setup was done using **[Llamaindex](https://www.llamaindex.ai/)** and the **[bge-en-base-v1.5 embedding](https://huggingface.co/BAAI/bge-base-en-v1.5)** model for efficient documentation retrieval and question-answering. You can find the RAG implementation [here](https://github.com/janhq/open-foundry/blob/main/rag-is-not-enough/rag/nitro_rag.ipynb).
## Benchmarking the Results
We curated a new set of [50 multiple-choice questions](https://github.com/menloresearch/open-foundry/blob/main/rag-is-not-enough/rag/mcq_nitro.csv) (MCQ) based on the Nitro docs. The questions had varying levels of difficulty and had trick components that challenged the model's ability to discern misleading information.
We curated a new set of [50 multiple-choice questions](https://github.com/janhq/open-foundry/blob/main/rag-is-not-enough/rag/mcq_nitro.csv) (MCQ) based on the Nitro docs. The questions had varying levels of difficulty and had trick components that challenged the model's ability to discern misleading information.
![image](https://hackmd.io/_uploads/By9vaE1Ta.png)
@ -121,7 +121,7 @@ We conclude that this combination of model merging + finetuning + RAG yields pro
Anecdotally, we've had some success using this model in practice to onboard new team members to the Nitro codebase.
A full research report with more statistics can be found [here](https://github.com/menloresearch/open-foundry/blob/main/rag-is-not-enough/README.md).
A full research report with more statistics can be found [here](https://github.com/janhq/open-foundry/blob/main/rag-is-not-enough/README.md).
# References

View File

@ -203,7 +203,7 @@ When to choose ChatGPT Plus instead:
Ready to try gpt-oss?
- Download Jan: [https://jan.ai/](https://jan.ai/)
- View source code: [https://github.com/menloresearch/jan](https://github.com/menloresearch/jan)
- View source code: [https://github.com/janhq/jan](https://github.com/janhq/jan)
- Need help? Check our [local AI guide](/post/run-ai-models-locally) for beginners
<CTABlog />

View File

@ -4,7 +4,7 @@ title: Support - Jan
# Support
- Bugs & requests: file a GitHub ticket [here](https://github.com/menloresearch/jan/issues)
- Bugs & requests: file a GitHub ticket [here](https://github.com/janhq/jan/issues)
- For discussion: join our Discord [here](https://discord.gg/FTk2MvZwJH)
- For business inquiries: email hello@jan.ai
- For jobs: please email hr@jan.ai

View File

@ -31,7 +31,7 @@ const config: DocsThemeConfig = {
</div>
</span>
),
docsRepositoryBase: 'https://github.com/menloresearch/jan/tree/dev/docs',
docsRepositoryBase: 'https://github.com/janhq/jan/tree/dev/docs',
feedback: {
content: 'Question? Give us feedback →',
labels: 'feedback',

View File

@ -16,7 +16,7 @@ import {
ListConversationItemsResponse
} from './types'
declare const JAN_API_BASE: string
declare const MENLO_PLATFORM_BASE_URL: string
export class RemoteApi {
private authService: JanAuthService
@ -28,7 +28,7 @@ export class RemoteApi {
async createConversation(
data: Conversation
): Promise<ConversationResponse> {
const url = `${JAN_API_BASE}${CONVERSATION_API_ROUTES.CONVERSATIONS}`
const url = `${MENLO_PLATFORM_BASE_URL}${CONVERSATION_API_ROUTES.CONVERSATIONS}`
return this.authService.makeAuthenticatedRequest<ConversationResponse>(
url,
@ -43,12 +43,12 @@ export class RemoteApi {
conversationId: string,
data: Conversation
): Promise<ConversationResponse> {
const url = `${JAN_API_BASE}${CONVERSATION_API_ROUTES.CONVERSATION_BY_ID(conversationId)}`
const url = `${MENLO_PLATFORM_BASE_URL}${CONVERSATION_API_ROUTES.CONVERSATION_BY_ID(conversationId)}`
return this.authService.makeAuthenticatedRequest<ConversationResponse>(
url,
{
method: 'PATCH',
method: 'POST',
body: JSON.stringify(data),
}
)
@ -70,7 +70,7 @@ export class RemoteApi {
}
const queryString = queryParams.toString()
const url = `${JAN_API_BASE}${CONVERSATION_API_ROUTES.CONVERSATIONS}${queryString ? `?${queryString}` : ''}`
const url = `${MENLO_PLATFORM_BASE_URL}${CONVERSATION_API_ROUTES.CONVERSATIONS}${queryString ? `?${queryString}` : ''}`
return this.authService.makeAuthenticatedRequest<ListConversationsResponse>(
url,
@ -114,7 +114,7 @@ export class RemoteApi {
}
async deleteConversation(conversationId: string): Promise<void> {
const url = `${JAN_API_BASE}${CONVERSATION_API_ROUTES.CONVERSATION_BY_ID(conversationId)}`
const url = `${MENLO_PLATFORM_BASE_URL}${CONVERSATION_API_ROUTES.CONVERSATION_BY_ID(conversationId)}`
await this.authService.makeAuthenticatedRequest(
url,
@ -141,7 +141,7 @@ export class RemoteApi {
}
const queryString = queryParams.toString()
const url = `${JAN_API_BASE}${CONVERSATION_API_ROUTES.CONVERSATION_ITEMS(conversationId)}${queryString ? `?${queryString}` : ''}`
const url = `${MENLO_PLATFORM_BASE_URL}${CONVERSATION_API_ROUTES.CONVERSATION_ITEMS(conversationId)}${queryString ? `?${queryString}` : ''}`
return this.authService.makeAuthenticatedRequest<ListConversationItemsResponse>(
url,

View File

@ -31,7 +31,7 @@ export interface ConversationResponse {
id: string
object: 'conversation'
title?: string
created_at: number
created_at: number | string
metadata: ConversationMetadata
}
@ -50,6 +50,7 @@ export interface ConversationItemAnnotation {
}
export interface ConversationItemContent {
type?: string
file?: {
file_id?: string
mime_type?: string
@ -62,23 +63,50 @@ export interface ConversationItemContent {
file_id?: string
url?: string
}
image_file?: {
file_id?: string
mime_type?: string
}
input_text?: string
output_text?: {
annotations?: ConversationItemAnnotation[]
text?: string
}
reasoning_content?: string
text?: {
value?: string
text?: string
}
type?: string
reasoning_content?: string
tool_calls?: Array<{
id?: string
type?: string
function?: {
name?: string
arguments?: string
}
}>
tool_call_id?: string
tool_result?: {
content?: Array<{
type?: string
text?: string
output_text?: {
text?: string
}
}>
output_text?: {
text?: string
}
}
text_result?: string
}
export interface ConversationItem {
content?: ConversationItemContent[]
created_at: number
created_at: number | string
id: string
object: string
metadata?: Record<string, unknown>
role: string
status?: string
type?: string

View File

@ -1,5 +1,5 @@
import { Thread, ThreadAssistantInfo, ThreadMessage, ContentType } from '@janhq/core'
import { Conversation, ConversationResponse, ConversationItem } from './types'
import { Conversation, ConversationResponse, ConversationItem, ConversationItemContent, ConversationMetadata } from './types'
import { DEFAULT_ASSISTANT } from './const'
export class ObjectParser {
@ -7,7 +7,7 @@ export class ObjectParser {
const modelName = thread.assistants?.[0]?.model?.id || undefined
const modelProvider = thread.assistants?.[0]?.model?.engine || undefined
const isFavorite = thread.metadata?.is_favorite?.toString() || 'false'
let metadata = {}
let metadata: ConversationMetadata = {}
if (modelName && modelProvider) {
metadata = {
model_id: modelName,
@ -23,15 +23,14 @@ export class ObjectParser {
static conversationToThread(conversation: ConversationResponse): Thread {
const assistants: ThreadAssistantInfo[] = []
if (
conversation.metadata?.model_id &&
conversation.metadata?.model_provider
) {
const metadata: ConversationMetadata = conversation.metadata || {}
if (metadata.model_id && metadata.model_provider) {
assistants.push({
...DEFAULT_ASSISTANT,
model: {
id: conversation.metadata.model_id,
engine: conversation.metadata.model_provider,
id: metadata.model_id,
engine: metadata.model_provider,
},
})
} else {
@ -44,16 +43,18 @@ export class ObjectParser {
})
}
const isFavorite = conversation.metadata?.is_favorite === 'true'
const isFavorite = metadata.is_favorite === 'true'
const createdAtMs = parseTimestamp(conversation.created_at)
return {
id: conversation.id,
title: conversation.title || '',
assistants,
created: conversation.created_at,
updated: conversation.created_at,
created: createdAtMs,
updated: createdAtMs,
model: {
id: conversation.metadata.model_id,
provider: conversation.metadata.model_provider,
id: metadata.model_id,
provider: metadata.model_provider,
},
isFavorite,
metadata: { is_favorite: isFavorite },
@ -65,74 +66,70 @@ export class ObjectParser {
threadId: string
): ThreadMessage {
// Extract text content and metadata from the item
let textContent = ''
let reasoningContent = ''
const textSegments: string[] = []
const reasoningSegments: string[] = []
const imageUrls: string[] = []
let toolCalls: any[] = []
let finishReason = ''
if (item.content && item.content.length > 0) {
for (const content of item.content) {
// Handle text content
if (content.text?.value) {
textContent = content.text.value
}
// Handle output_text for assistant messages
if (content.output_text?.text) {
textContent = content.output_text.text
}
// Handle reasoning content
if (content.reasoning_content) {
reasoningContent = content.reasoning_content
}
// Handle image content
if (content.image?.url) {
imageUrls.push(content.image.url)
}
// Extract finish_reason
if (content.finish_reason) {
finishReason = content.finish_reason
}
}
}
// Handle tool calls parsing for assistant messages
if (item.role === 'assistant' && finishReason === 'tool_calls') {
try {
// Tool calls are embedded as JSON string in textContent
const toolCallMatch = textContent.match(/\[.*\]/)
if (toolCallMatch) {
const toolCallsData = JSON.parse(toolCallMatch[0])
toolCalls = toolCallsData.map((toolCall: any) => ({
tool: {
id: toolCall.id || 'unknown',
function: {
name: toolCall.function?.name || 'unknown',
arguments: toolCall.function?.arguments || '{}'
},
type: toolCall.type || 'function'
},
response: {
error: '',
content: []
},
state: 'ready'
}))
// Remove tool calls JSON from text content, keep only reasoning
textContent = ''
}
} catch (error) {
console.error('Failed to parse tool calls:', error)
extractContentByType(content, {
onText: (value) => {
if (value) {
textSegments.push(value)
}
},
onReasoning: (value) => {
if (value) {
reasoningSegments.push(value)
}
},
onImage: (url) => {
if (url) {
imageUrls.push(url)
}
},
onToolCalls: (calls) => {
toolCalls = calls.map((toolCall) => {
const callId = toolCall.id || 'unknown'
const rawArgs = toolCall.function?.arguments
const normalizedArgs =
typeof rawArgs === 'string'
? rawArgs
: JSON.stringify(rawArgs ?? {})
return {
id: callId,
tool_call_id: callId,
tool: {
id: callId,
function: {
name: toolCall.function?.name || 'unknown',
arguments: normalizedArgs,
},
type: toolCall.type || 'function',
},
response: {
error: '',
content: [],
},
state: 'pending',
}
})
},
})
}
}
// Format final content with reasoning if present
let finalTextValue = ''
if (reasoningContent) {
finalTextValue = `<think>${reasoningContent}</think>`
if (reasoningSegments.length > 0) {
finalTextValue += `<think>${reasoningSegments.join('\n')}</think>`
}
if (textContent) {
finalTextValue += textContent
if (textSegments.length > 0) {
if (finalTextValue) {
finalTextValue += '\n'
}
finalTextValue += textSegments.join('\n')
}
// Build content array for ThreadMessage
@ -157,22 +154,26 @@ export class ObjectParser {
}
// Build metadata
const metadata: any = {}
const metadata: any = { ...(item.metadata || {}) }
if (toolCalls.length > 0) {
metadata.tool_calls = toolCalls
}
const createdAtMs = parseTimestamp(item.created_at)
// Map status from server format to frontend format
const mappedStatus = item.status === 'completed' ? 'ready' : item.status || 'ready'
const role = item.role === 'user' || item.role === 'assistant' ? item.role : 'assistant'
return {
type: 'text',
id: item.id,
object: 'thread.message',
thread_id: threadId,
role: item.role as 'user' | 'assistant',
role,
content: messageContent,
created_at: item.created_at * 1000, // Convert to milliseconds
created_at: createdAtMs,
completed_at: 0,
status: mappedStatus,
metadata,
@ -201,25 +202,46 @@ export const combineConversationItemsToMessages = (
): ThreadMessage[] => {
const messages: ThreadMessage[] = []
const toolResponseMap = new Map<string, any>()
const sortedItems = [...items].sort(
(a, b) => parseTimestamp(a.created_at) - parseTimestamp(b.created_at)
)
// First pass: collect tool responses
for (const item of items) {
for (const item of sortedItems) {
if (item.role === 'tool') {
const toolContent = item.content?.[0]?.text?.value || ''
toolResponseMap.set(item.id, {
error: '',
content: [
{
type: 'text',
text: toolContent
}
]
})
for (const content of item.content ?? []) {
const toolCallId = content.tool_call_id || item.id
const toolResultText =
content.tool_result?.output_text?.text ||
(Array.isArray(content.tool_result?.content)
? content.tool_result?.content
?.map((entry) => entry.text || entry.output_text?.text)
.filter((text): text is string => Boolean(text))
.join('\n')
: undefined)
const toolContent =
content.text?.text ||
content.text?.value ||
content.output_text?.text ||
content.input_text ||
content.text_result ||
toolResultText ||
''
toolResponseMap.set(toolCallId, {
error: '',
content: [
{
type: 'text',
text: toolContent,
},
],
})
}
}
}
// Second pass: build messages and merge tool responses
for (const item of items) {
for (const item of sortedItems) {
// Skip tool messages as they will be merged into assistant messages
if (item.role === 'tool') {
continue
@ -228,14 +250,35 @@ export const combineConversationItemsToMessages = (
const message = ObjectParser.conversationItemToThreadMessage(item, threadId)
// If this is an assistant message with tool calls, merge tool responses
if (message.role === 'assistant' && message.metadata?.tool_calls && Array.isArray(message.metadata.tool_calls)) {
if (
message.role === 'assistant' &&
message.metadata?.tool_calls &&
Array.isArray(message.metadata.tool_calls)
) {
const toolCalls = message.metadata.tool_calls as any[]
let toolResponseIndex = 0
for (const [responseId, responseData] of toolResponseMap.entries()) {
if (toolResponseIndex < toolCalls.length) {
toolCalls[toolResponseIndex].response = responseData
toolResponseIndex++
for (const toolCall of toolCalls) {
const callId = toolCall.tool_call_id || toolCall.id || toolCall.tool?.id
let responseKey: string | undefined
let response: any = null
if (callId && toolResponseMap.has(callId)) {
responseKey = callId
response = toolResponseMap.get(callId)
} else {
const iterator = toolResponseMap.entries().next()
if (!iterator.done) {
responseKey = iterator.value[0]
response = iterator.value[1]
}
}
if (response) {
toolCall.response = response
toolCall.state = 'succeeded'
if (responseKey) {
toolResponseMap.delete(responseKey)
}
}
}
}
@ -245,3 +288,79 @@ export const combineConversationItemsToMessages = (
return messages
}
const parseTimestamp = (value: number | string | undefined): number => {
if (typeof value === 'number') {
// Distinguish between seconds and milliseconds
return value > 1e12 ? value : value * 1000
}
if (typeof value === 'string') {
const parsed = Date.parse(value)
return Number.isNaN(parsed) ? Date.now() : parsed
}
return Date.now()
}
const extractContentByType = (
content: ConversationItemContent,
handlers: {
onText: (value: string) => void
onReasoning: (value: string) => void
onImage: (url: string) => void
onToolCalls: (calls: NonNullable<ConversationItemContent['tool_calls']>) => void
}
) => {
const type = content.type || ''
switch (type) {
case 'input_text':
handlers.onText(content.input_text || '')
break
case 'text':
handlers.onText(content.text?.text || content.text?.value || '')
break
case 'output_text':
handlers.onText(content.output_text?.text || '')
break
case 'reasoning_content':
handlers.onReasoning(content.reasoning_content || '')
break
case 'image':
case 'image_url':
if (content.image?.url) {
handlers.onImage(content.image.url)
}
break
case 'tool_calls':
if (content.tool_calls && Array.isArray(content.tool_calls)) {
handlers.onToolCalls(content.tool_calls)
}
break
case 'tool_result':
if (content.tool_result?.output_text?.text) {
handlers.onText(content.tool_result.output_text.text)
}
break
default:
// Fallback for legacy fields without explicit type
if (content.text?.value || content.text?.text) {
handlers.onText(content.text.value || content.text.text || '')
}
if (content.text_result) {
handlers.onText(content.text_result)
}
if (content.output_text?.text) {
handlers.onText(content.output_text.text)
}
if (content.reasoning_content) {
handlers.onReasoning(content.reasoning_content)
}
if (content.image?.url) {
handlers.onImage(content.image.url)
}
if (content.tool_calls && Array.isArray(content.tool_calls)) {
handlers.onToolCalls(content.tool_calls)
}
break
}
}
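
For clarity, the seconds-versus-milliseconds heuristic in `parseTimestamp` behaves as follows; the values are illustrative, not actual call sites from the code above.

```typescript
parseTimestamp(1727913600)             // seconds since epoch (< 1e12) -> 1727913600000 ms
parseTimestamp(1727913600000)          // already milliseconds (> 1e12) -> returned unchanged
parseTimestamp('2025-10-03T00:00:00Z') // ISO string -> Date.parse(...) in ms
parseTimestamp(undefined)              // missing value -> Date.now() fallback
```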

View File

@ -4,10 +4,11 @@
*/
import { getSharedAuthService, JanAuthService } from '../shared'
import { JanModel, janProviderStore } from './store'
import { ApiError } from '../shared/types/errors'
import { JAN_API_ROUTES } from './const'
import { JanModel, janProviderStore } from './store'
// JAN_API_BASE is defined in vite.config.ts
// MENLO_PLATFORM_BASE_URL is defined in vite.config.ts
// Constants
const TEMPORARY_CHAT_ID = 'temporary-chat'
@ -19,12 +20,7 @@ const TEMPORARY_CHAT_ID = 'temporary-chat'
*/
function getChatCompletionConfig(request: JanChatCompletionRequest, stream: boolean = false) {
const isTemporaryChat = request.conversation_id === TEMPORARY_CHAT_ID
// For temporary chats, use the stateless /chat/completions endpoint
// For regular conversations, use the stateful /conv/chat/completions endpoint
const endpoint = isTemporaryChat
? `${JAN_API_BASE}/chat/completions`
: `${JAN_API_BASE}/conv/chat/completions`
const endpoint = `${MENLO_PLATFORM_BASE_URL}${JAN_API_ROUTES.CHAT_COMPLETIONS}`
const payload = {
...request,
@ -44,9 +40,30 @@ function getChatCompletionConfig(request: JanChatCompletionRequest, stream: bool
return { endpoint, payload, isTemporaryChat }
}
export interface JanModelsResponse {
interface JanModelSummary {
id: string
object: string
data: JanModel[]
owned_by: string
created?: number
}
interface JanModelsResponse {
object: string
data: JanModelSummary[]
}
interface JanModelCatalogResponse {
id: string
supported_parameters?: {
names?: string[]
default?: Record<string, unknown>
}
extras?: {
supported_parameters?: string[]
default_parameters?: Record<string, unknown>
[key: string]: unknown
}
[key: string]: unknown
}
export interface JanChatMessage {
@ -112,6 +129,8 @@ export interface JanChatCompletionChunk {
export class JanApiClient {
private static instance: JanApiClient
private authService: JanAuthService
private modelsCache: JanModel[] | null = null
private modelsFetchPromise: Promise<JanModel[]> | null = null
private constructor() {
this.authService = getSharedAuthService()
@ -124,25 +143,64 @@ export class JanApiClient {
return JanApiClient.instance
}
async getModels(): Promise<JanModel[]> {
async getModels(options?: { forceRefresh?: boolean }): Promise<JanModel[]> {
try {
const forceRefresh = options?.forceRefresh ?? false
if (forceRefresh) {
this.modelsCache = null
} else if (this.modelsCache) {
return this.modelsCache
}
if (this.modelsFetchPromise) {
return this.modelsFetchPromise
}
janProviderStore.setLoadingModels(true)
janProviderStore.clearError()
const response = await this.authService.makeAuthenticatedRequest<JanModelsResponse>(
`${JAN_API_BASE}/conv/models`
)
this.modelsFetchPromise = (async () => {
const response = await this.authService.makeAuthenticatedRequest<JanModelsResponse>(
`${MENLO_PLATFORM_BASE_URL}${JAN_API_ROUTES.MODELS}`
)
const models = response.data || []
janProviderStore.setModels(models)
return models
const summaries = response.data || []
const models: JanModel[] = await Promise.all(
summaries.map(async (summary) => {
const supportedParameters = await this.fetchSupportedParameters(summary.id)
const capabilities = this.deriveCapabilitiesFromParameters(supportedParameters)
return {
id: summary.id,
object: summary.object,
owned_by: summary.owned_by,
created: summary.created,
capabilities,
supportedParameters,
}
})
)
this.modelsCache = models
janProviderStore.setModels(models)
return models
})()
return await this.modelsFetchPromise
} catch (error) {
this.modelsCache = null
this.modelsFetchPromise = null
const errorMessage = error instanceof ApiError ? error.message :
error instanceof Error ? error.message : 'Failed to fetch models'
janProviderStore.setError(errorMessage)
janProviderStore.setLoadingModels(false)
throw error
} finally {
this.modelsFetchPromise = null
}
}
@ -254,7 +312,7 @@ export class JanApiClient {
async initialize(): Promise<void> {
try {
janProviderStore.setAuthenticated(true)
// Fetch initial models
// Fetch initial models (cached for subsequent calls)
await this.getModels()
console.log('Jan API client initialized successfully')
} catch (error) {
@ -266,6 +324,52 @@ export class JanApiClient {
janProviderStore.setInitializing(false)
}
}
private async fetchSupportedParameters(modelId: string): Promise<string[]> {
try {
const endpoint = `${MENLO_PLATFORM_BASE_URL}${JAN_API_ROUTES.MODEL_CATALOGS}/${this.encodeModelIdForCatalog(modelId)}`
const catalog = await this.authService.makeAuthenticatedRequest<JanModelCatalogResponse>(endpoint)
return this.extractSupportedParameters(catalog)
} catch (error) {
console.warn(`Failed to fetch catalog metadata for model "${modelId}":`, error)
return []
}
}
private encodeModelIdForCatalog(modelId: string): string {
return modelId
.split('/')
.map((segment) => encodeURIComponent(segment))
.join('/')
}
private extractSupportedParameters(catalog: JanModelCatalogResponse | null | undefined): string[] {
if (!catalog) {
return []
}
const primaryNames = catalog.supported_parameters?.names
if (Array.isArray(primaryNames) && primaryNames.length > 0) {
return [...new Set(primaryNames)]
}
const extraNames = catalog.extras?.supported_parameters
if (Array.isArray(extraNames) && extraNames.length > 0) {
return [...new Set(extraNames)]
}
return []
}
private deriveCapabilitiesFromParameters(parameters: string[]): string[] {
const capabilities = new Set<string>()
if (parameters.includes('tools')) {
capabilities.add('tools')
}
return Array.from(capabilities)
}
}
export const janApiClient = JanApiClient.getInstance()

View File

@ -0,0 +1,7 @@
export const JAN_API_ROUTES = {
MODELS: '/models',
CHAT_COMPLETIONS: '/chat/completions',
MODEL_CATALOGS: '/models/catalogs',
} as const
export const MODEL_PROVIDER_STORAGE_KEY = 'model-provider'

View File

@ -0,0 +1,122 @@
import type { JanModel } from './store'
import { MODEL_PROVIDER_STORAGE_KEY } from './const'
type StoredModel = {
id?: string
capabilities?: unknown
[key: string]: unknown
}
type StoredProvider = {
provider?: string
models?: StoredModel[]
[key: string]: unknown
}
type StoredState = {
state?: {
providers?: StoredProvider[]
[key: string]: unknown
}
version?: number
[key: string]: unknown
}
const normalizeCapabilities = (capabilities: unknown): string[] => {
if (!Array.isArray(capabilities)) {
return []
}
return [...new Set(capabilities.filter((item): item is string => typeof item === 'string'))].sort(
(a, b) => a.localeCompare(b)
)
}
/**
* Synchronize Jan models stored in localStorage with the latest server state.
* Returns true if the stored data was modified (including being cleared).
*/
export function syncJanModelsLocalStorage(
remoteModels: JanModel[],
storageKey: string = MODEL_PROVIDER_STORAGE_KEY
): boolean {
const rawStorage = localStorage.getItem(storageKey)
if (!rawStorage) {
return false
}
let storedState: StoredState
try {
storedState = JSON.parse(rawStorage) as StoredState
} catch (error) {
console.warn('Failed to parse Jan model storage; clearing entry.', error)
localStorage.removeItem(storageKey)
return true
}
const providers = storedState?.state?.providers
if (!Array.isArray(providers)) {
return false
}
const remoteModelMap = new Map(remoteModels.map((model) => [model.id, model]))
let storageUpdated = false
for (const provider of providers) {
if (provider.provider !== 'jan' || !Array.isArray(provider.models)) {
continue
}
const updatedModels: StoredModel[] = []
for (const model of provider.models) {
const modelId = typeof model.id === 'string' ? model.id : null
if (!modelId) {
storageUpdated = true
continue
}
const remoteModel = remoteModelMap.get(modelId)
if (!remoteModel) {
console.log(`Removing unknown Jan model from localStorage: ${modelId}`)
storageUpdated = true
continue
}
const storedCapabilities = normalizeCapabilities(model.capabilities)
const remoteCapabilities = normalizeCapabilities(remoteModel.capabilities)
const capabilitiesMatch =
storedCapabilities.length === remoteCapabilities.length &&
storedCapabilities.every((cap, index) => cap === remoteCapabilities[index])
if (!capabilitiesMatch) {
console.log(
`Updating capabilities for Jan model ${modelId}:`,
storedCapabilities,
'=>',
remoteCapabilities
)
updatedModels.push({
...model,
capabilities: remoteModel.capabilities,
})
storageUpdated = true
} else {
updatedModels.push(model)
}
}
if (updatedModels.length !== provider.models.length) {
storageUpdated = true
}
provider.models = updatedModels
}
if (storageUpdated) {
localStorage.setItem(storageKey, JSON.stringify(storedState))
}
return storageUpdated
}
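Because `syncJanModelsLocalStorage` only reports whether it rewrote the stored state, the caller decides how to react. A minimal sketch of the intended call pattern, assuming the server model list has already been fetched (this mirrors what the provider does further down in this diff):

```ts
import { janApiClient } from './api'
import { syncJanModelsLocalStorage } from './helpers'

const remoteModels = await janApiClient.getModels()
if (syncJanModelsLocalStorage(remoteModels)) {
  // Stored Jan models were pruned or their capabilities changed; reload so the UI
  // rehydrates from the corrected 'model-provider' entry in localStorage.
  window.location.reload()
}
```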

View File

@ -14,12 +14,10 @@ import {
ImportOptions,
} from '@janhq/core' // cspell: disable-line
import { janApiClient, JanChatMessage } from './api'
import { syncJanModelsLocalStorage } from './helpers'
import { janProviderStore } from './store'
import { ApiError } from '../shared/types/errors'
// Jan models support tools via MCP
const JAN_MODEL_CAPABILITIES = ['tools'] as const
export default class JanProviderWeb extends AIEngine {
readonly provider = 'jan'
private activeSessions: Map<string, SessionInfo> = new Map()
@ -28,11 +26,11 @@ export default class JanProviderWeb extends AIEngine {
console.log('Loading Jan Provider Extension...')
try {
// Check and clear invalid Jan models (capabilities mismatch)
this.validateJanModelsLocalStorage()
// Initialize authentication and fetch models
// Initialize authentication
await janApiClient.initialize()
// Check and sync stored Jan models against latest catalog data
await this.validateJanModelsLocalStorage()
console.log('Jan Provider Extension loaded successfully')
} catch (error) {
console.error('Failed to load Jan Provider Extension:', error)
@ -43,46 +41,17 @@ export default class JanProviderWeb extends AIEngine {
}
// Verify Jan models capabilities in localStorage
private validateJanModelsLocalStorage() {
private async validateJanModelsLocalStorage(): Promise<void> {
try {
console.log("Validating Jan models in localStorage...")
const storageKey = 'model-provider'
const data = localStorage.getItem(storageKey)
if (!data) return
console.log('Validating Jan models in localStorage...')
const parsed = JSON.parse(data)
if (!parsed?.state?.providers) return
const remoteModels = await janApiClient.getModels()
const storageUpdated = syncJanModelsLocalStorage(remoteModels)
// Check if any Jan model has incorrect capabilities
let hasInvalidModel = false
for (const provider of parsed.state.providers) {
if (provider.provider === 'jan' && provider.models) {
for (const model of provider.models) {
console.log(`Checking Jan model: ${model.id}`, model.capabilities)
if (JSON.stringify(model.capabilities) !== JSON.stringify(JAN_MODEL_CAPABILITIES)) {
hasInvalidModel = true
console.log(`Found invalid Jan model: ${model.id}, clearing localStorage`)
break
}
}
}
if (hasInvalidModel) break
}
// If any invalid model found, just clear the storage
if (hasInvalidModel) {
// Force clear the storage
localStorage.removeItem(storageKey)
// Verify it's actually removed
const afterRemoval = localStorage.getItem(storageKey)
// If still present, try setting to empty state
if (afterRemoval) {
// Try alternative clearing method
localStorage.setItem(storageKey, JSON.stringify({ state: { providers: [] }, version: parsed.version || 3 }))
}
console.log('Cleared model-provider from localStorage due to invalid Jan capabilities')
// Force a page reload to ensure clean state
if (storageUpdated) {
console.log(
'Synchronized Jan models in localStorage with server capabilities; reloading...'
)
window.location.reload()
}
} catch (error) {
@ -119,7 +88,7 @@ export default class JanProviderWeb extends AIEngine {
path: undefined, // Remote model, no local path
owned_by: model.owned_by,
object: model.object,
capabilities: [...JAN_MODEL_CAPABILITIES],
capabilities: [...model.capabilities],
}
: undefined
)
@ -140,7 +109,7 @@ export default class JanProviderWeb extends AIEngine {
path: undefined, // Remote model, no local path
owned_by: model.owned_by,
object: model.object,
capabilities: [...JAN_MODEL_CAPABILITIES],
capabilities: [...model.capabilities],
}))
} catch (error) {
console.error('Failed to list Jan models:', error)
@ -159,6 +128,7 @@ export default class JanProviderWeb extends AIEngine {
port: 443, // HTTPS port
model_id: modelId,
model_path: `remote:${modelId}`, // Indicate this is a remote model
is_embedding: false, // assume false here; TODO: may need further implementation
api_key: '', // API key handled by auth service
}
@ -193,8 +163,12 @@ export default class JanProviderWeb extends AIEngine {
console.error(`Failed to unload Jan session ${sessionId}:`, error)
return {
success: false,
error: error instanceof ApiError ? error.message :
error instanceof Error ? error.message : 'Unknown error',
error:
error instanceof ApiError
? error.message
: error instanceof Error
? error.message
: 'Unknown error',
}
}
}

View File

@ -9,6 +9,9 @@ export interface JanModel {
id: string
object: string
owned_by: string
created?: number
capabilities: string[]
supportedParameters?: string[]
}
export interface JanProviderState {

View File

@ -12,8 +12,8 @@ import { JanMCPOAuthProvider } from './oauth-provider'
import { WebSearchButton } from './components'
import type { ComponentType } from 'react'
// JAN_API_BASE is defined in vite.config.ts (defaults to 'https://api-dev.jan.ai/jan/v1')
declare const JAN_API_BASE: string
// MENLO_PLATFORM_BASE_URL is defined in vite.config.ts (defaults to 'https://api-dev.menlo.ai/jan/v1')
declare const MENLO_PLATFORM_BASE_URL: string
export default class MCPExtensionWeb extends MCPExtension {
private mcpEndpoint = '/mcp'
@ -77,7 +77,7 @@ export default class MCPExtensionWeb extends MCPExtension {
// Create transport with OAuth provider (handles token refresh automatically)
const transport = new StreamableHTTPClientTransport(
new URL(`${JAN_API_BASE}${this.mcpEndpoint}`),
new URL(`${MENLO_PLATFORM_BASE_URL}${this.mcpEndpoint}`),
{
authProvider: this.oauthProvider
// No sessionId needed - server will generate one automatically

View File

@ -6,13 +6,13 @@
import { AuthTokens } from './types'
import { AUTH_ENDPOINTS } from './const'
declare const JAN_API_BASE: string
declare const MENLO_PLATFORM_BASE_URL: string
/**
* Logout user on server
*/
export async function logoutUser(): Promise<void> {
const response = await fetch(`${JAN_API_BASE}${AUTH_ENDPOINTS.LOGOUT}`, {
const response = await fetch(`${MENLO_PLATFORM_BASE_URL}${AUTH_ENDPOINTS.LOGOUT}`, {
method: 'GET',
credentials: 'include',
headers: {
@ -29,7 +29,7 @@ export async function logoutUser(): Promise<void> {
* Guest login
*/
export async function guestLogin(): Promise<AuthTokens> {
const response = await fetch(`${JAN_API_BASE}${AUTH_ENDPOINTS.GUEST_LOGIN}`, {
const response = await fetch(`${MENLO_PLATFORM_BASE_URL}${AUTH_ENDPOINTS.GUEST_LOGIN}`, {
method: 'POST',
credentials: 'include',
headers: {
@ -51,7 +51,7 @@ export async function guestLogin(): Promise<AuthTokens> {
*/
export async function refreshToken(): Promise<AuthTokens> {
const response = await fetch(
`${JAN_API_BASE}${AUTH_ENDPOINTS.REFRESH_TOKEN}`,
`${MENLO_PLATFORM_BASE_URL}${AUTH_ENDPOINTS.REFRESH_TOKEN}`,
{
method: 'GET',
credentials: 'include',

View File

@ -5,10 +5,10 @@
import { AuthTokens, LoginUrlResponse } from './types'
declare const JAN_API_BASE: string
declare const MENLO_PLATFORM_BASE_URL: string
export async function getLoginUrl(endpoint: string): Promise<LoginUrlResponse> {
const response: Response = await fetch(`${JAN_API_BASE}${endpoint}`, {
const response: Response = await fetch(`${MENLO_PLATFORM_BASE_URL}${endpoint}`, {
method: 'GET',
credentials: 'include',
headers: {
@ -30,7 +30,7 @@ export async function handleOAuthCallback(
code: string,
state?: string
): Promise<AuthTokens> {
const response: Response = await fetch(`${JAN_API_BASE}${endpoint}`, {
const response: Response = await fetch(`${MENLO_PLATFORM_BASE_URL}${endpoint}`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',

View File

@ -3,9 +3,9 @@
* Handles authentication flows for any OAuth provider
*/
declare const JAN_API_BASE: string
declare const MENLO_PLATFORM_BASE_URL: string
import { User, AuthState, AuthBroadcastMessage } from './types'
import { User, AuthState, AuthBroadcastMessage, AuthTokens } from './types'
import {
AUTH_STORAGE_KEYS,
AUTH_ENDPOINTS,
@ -115,7 +115,7 @@ export class JanAuthService {
// Store tokens and set authenticated state
this.accessToken = tokens.access_token
this.tokenExpiryTime = Date.now() + tokens.expires_in * 1000
this.tokenExpiryTime = this.computeTokenExpiry(tokens)
this.setAuthProvider(providerId)
this.authBroadcast.broadcastLogin()
@ -158,7 +158,7 @@ export class JanAuthService {
const tokens = await refreshToken()
this.accessToken = tokens.access_token
this.tokenExpiryTime = Date.now() + tokens.expires_in * 1000
this.tokenExpiryTime = this.computeTokenExpiry(tokens)
} catch (error) {
console.error('Failed to refresh access token:', error)
if (error instanceof ApiError && error.isStatus(401)) {
@ -343,6 +343,23 @@ export class JanAuthService {
localStorage.removeItem(AUTH_STORAGE_KEYS.AUTH_PROVIDER)
}
private computeTokenExpiry(tokens: AuthTokens): number {
if (tokens.expires_at) {
const expiresAt = new Date(tokens.expires_at).getTime()
if (!Number.isNaN(expiresAt)) {
return expiresAt
}
console.warn('Invalid expires_at format in auth tokens:', tokens.expires_at)
}
if (typeof tokens.expires_in === 'number') {
return Date.now() + tokens.expires_in * 1000
}
console.warn('Auth tokens missing expiry information; defaulting to immediate expiry')
return Date.now()
}
/**
* Ensure guest access is available
*/
@ -352,7 +369,7 @@ export class JanAuthService {
if (!this.accessToken || Date.now() > this.tokenExpiryTime) {
const tokens = await guestLogin()
this.accessToken = tokens.access_token
this.tokenExpiryTime = Date.now() + tokens.expires_in * 1000
this.tokenExpiryTime = this.computeTokenExpiry(tokens)
}
} catch (error) {
console.error('Failed to ensure guest access:', error)
@ -387,7 +404,6 @@ export class JanAuthService {
case AUTH_EVENTS.LOGOUT:
// Another tab logged out, clear our state
this.clearAuthState()
this.ensureGuestAccess().catch(console.error)
break
}
})
@ -413,7 +429,7 @@ export class JanAuthService {
private async fetchUserProfile(): Promise<User | null> {
try {
return await this.makeAuthenticatedRequest<User>(
`${JAN_API_BASE}${AUTH_ENDPOINTS.ME}`
`${MENLO_PLATFORM_BASE_URL}${AUTH_ENDPOINTS.ME}`
)
} catch (error) {
console.error('Failed to fetch user profile:', error)

View File

@ -16,7 +16,8 @@ export type AuthType = ProviderType | 'guest'
export interface AuthTokens {
access_token: string
expires_in: number
expires_in?: number
expires_at?: string
object: string
}
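With `expires_in` now optional and `expires_at` added, the auth service has to accept both token shapes; `computeTokenExpiry` prefers the absolute timestamp when both are present. Illustrative values only:

```ts
import type { AuthTokens } from './types'

// Relative expiry, as returned by the existing guest-login and refresh flows.
const relativeExpiry: AuthTokens = { access_token: '<token>', expires_in: 3600, object: 'token' }

// Absolute expiry; computeTokenExpiry parses this timestamp and falls back to
// expires_in (and finally to "already expired") if it is missing or malformed.
const absoluteExpiry: AuthTokens = { access_token: '<token>', expires_at: '2025-11-01T00:00:00Z', object: 'token' }
```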

View File

@ -1,5 +1,5 @@
export {}
declare global {
declare const JAN_API_BASE: string
declare const MENLO_PLATFORM_BASE_URL: string
}

View File

@ -14,6 +14,6 @@ export default defineConfig({
emptyOutDir: false // Don't clean the output directory
},
define: {
JAN_API_BASE: JSON.stringify(process.env.JAN_API_BASE || 'https://api-dev.jan.ai/v1'),
MENLO_PLATFORM_BASE_URL: JSON.stringify(process.env.MENLO_PLATFORM_BASE_URL || 'https://api-dev.menlo.ai/v1'),
}
})

View File

@ -70,6 +70,6 @@ There are a few things to keep in mind when writing your extension code:
```
For more information about the Jan Extension Core module, see the
[documentation](https://github.com/menloresearch/jan/blob/main/core/README.md).
[documentation](https://github.com/janhq/jan/blob/main/core/README.md).
So, what are you waiting for? Go ahead and start customizing your extension!

View File

@ -56,7 +56,7 @@ async function fetchRemoteSupportedBackends(
supportedBackends: string[]
): Promise<{ version: string; backend: string }[]> {
// Pull the latest releases from the repo
const { releases } = await _fetchGithubReleases('menloresearch', 'llama.cpp')
const { releases } = await _fetchGithubReleases('janhq', 'llama.cpp')
releases.sort((a, b) => b.tag_name.localeCompare(a.tag_name))
releases.splice(10) // keep only the latest 10 releases
@ -98,7 +98,7 @@ export async function listSupportedBackends(): Promise<
const sysType = `${os_type}-${arch}`
let supportedBackends = []
// NOTE: menloresearch's tags for llama.cpp builds are a bit different
// NOTE: janhq's tags for llama.cpp builds are a bit different
// TODO: fetch versions from the server?
// TODO: select CUDA version based on driver version
if (sysType == 'windows-x86_64') {
@ -156,8 +156,13 @@ export async function listSupportedBackends(): Promise<
supportedBackends.push('macos-arm64')
}
// get latest backends from Github
let remoteBackendVersions = []
try {
remoteBackendVersions =
await fetchRemoteSupportedBackends(supportedBackends)
} catch (e) {
console.debug(`Unable to fetch remote backends; Jan might be offline or there may be a network problem: ${String(e)}`)
}
// Get locally installed versions
const localBackendVersions = await getLocalInstalledBackends()
@ -242,7 +247,7 @@ export async function downloadBackend(
// Build URLs per source
const backendUrl =
source === 'github'
? `https://github.com/menloresearch/llama.cpp/releases/download/${version}/llama-${version}-bin-${backend}.tar.gz`
? `https://github.com/janhq/llama.cpp/releases/download/${version}/llama-${version}-bin-${backend}.tar.gz`
: `https://catalog.jan.ai/llama.cpp/releases/${version}/llama-${version}-bin-${backend}.tar.gz`
const downloadItems = [
@ -258,7 +263,7 @@ export async function downloadBackend(
downloadItems.push({
url:
source === 'github'
? `https://github.com/menloresearch/llama.cpp/releases/download/${version}/cudart-llama-bin-${platformName}-cu11.7-x64.tar.gz`
? `https://github.com/janhq/llama.cpp/releases/download/${version}/cudart-llama-bin-${platformName}-cu11.7-x64.tar.gz`
: `https://catalog.jan.ai/llama.cpp/releases/${version}/cudart-llama-bin-${platformName}-cu11.7-x64.tar.gz`,
save_path: await joinPath([libDir, 'cuda11.tar.gz']),
proxy: proxyConfig,
@ -267,7 +272,7 @@ export async function downloadBackend(
downloadItems.push({
url:
source === 'github'
? `https://github.com/menloresearch/llama.cpp/releases/download/${version}/cudart-llama-bin-${platformName}-cu12.0-x64.tar.gz`
? `https://github.com/janhq/llama.cpp/releases/download/${version}/cudart-llama-bin-${platformName}-cu12.0-x64.tar.gz`
: `https://catalog.jan.ai/llama.cpp/releases/${version}/cudart-llama-bin-${platformName}-cu12.0-x64.tar.gz`,
save_path: await joinPath([libDir, 'cuda12.tar.gz']),
proxy: proxyConfig,

View File

@ -39,7 +39,6 @@ import { getProxyConfig } from './util'
import { basename } from '@tauri-apps/api/path'
import {
readGgufMetadata,
estimateKVCacheSize,
getModelSize,
isModelSupported,
planModelLoadInternal,
@ -58,6 +57,8 @@ type LlamacppConfig = {
chat_template: string
n_gpu_layers: number
offload_mmproj: boolean
cpu_moe: boolean
n_cpu_moe: number
override_tensor_buffer_t: string
ctx_size: number
threads: number
@ -1527,6 +1528,7 @@ export default class llamacpp_extension extends AIEngine {
if (
this.autoUnload &&
!isEmbedding &&
(loadedModels.length > 0 || otherLoadingPromises.length > 0)
) {
// Wait for OTHER loading models to finish, then unload everything
@ -1534,10 +1536,33 @@ export default class llamacpp_extension extends AIEngine {
await Promise.all(otherLoadingPromises)
}
// Now unload all loaded models
// Now unload all loaded text models, excluding embedding models
const allLoadedModels = await this.getLoadedModels()
if (allLoadedModels.length > 0) {
await Promise.all(allLoadedModels.map((model) => this.unload(model)))
const sessionInfos: (SessionInfo | null)[] = await Promise.all(
allLoadedModels.map(async (modelId) => {
try {
return await this.findSessionByModel(modelId)
} catch (e) {
logger.warn(`Unable to find session for model "${modelId}": ${e}`)
return null // treat as "not eligible for unload"
}
})
)
logger.info(JSON.stringify(sessionInfos))
const nonEmbeddingModels: string[] = sessionInfos
.filter(
(s): s is SessionInfo => s !== null && s.is_embedding === false
)
.map((s) => s.model_id)
if (nonEmbeddingModels.length > 0) {
await Promise.all(
nonEmbeddingModels.map((modelId) => this.unload(modelId))
)
}
}
}
const args: string[] = []
@ -1581,6 +1606,10 @@ export default class llamacpp_extension extends AIEngine {
])
args.push('--jinja')
args.push('-m', modelPath)
if (cfg.cpu_moe) args.push('--cpu-moe')
if (cfg.n_cpu_moe && cfg.n_cpu_moe > 0) {
args.push('--n-cpu-moe', String(cfg.n_cpu_moe))
}
// For overriding tensor buffer type, useful where
// massive MOE models can be made faster by keeping attention on the GPU
// and offloading the expert FFNs to the CPU.
@ -1631,7 +1660,7 @@ export default class llamacpp_extension extends AIEngine {
if (cfg.no_kv_offload) args.push('--no-kv-offload')
if (isEmbedding) {
args.push('--embedding')
args.push('--pooling mean')
args.push('--pooling', 'mean')
} else {
if (cfg.ctx_size > 0) args.push('--ctx-size', String(cfg.ctx_size))
if (cfg.n_predict > 0) args.push('--n-predict', String(cfg.n_predict))
@ -1670,6 +1699,7 @@ export default class llamacpp_extension extends AIEngine {
libraryPath,
args,
envs,
isEmbedding,
}
)
return sInfo
@ -2005,6 +2035,69 @@ export default class llamacpp_extension extends AIEngine {
libraryPath,
envs,
})
// On Linux with AMD GPUs, llama.cpp via Vulkan may report UMA (shared) memory as device-local.
// For clearer UX, override with dedicated VRAM from the hardware plugin when available.
try {
const sysInfo = await getSystemInfo()
if (sysInfo?.os_type === 'linux' && Array.isArray(sysInfo.gpus)) {
const usage = await getSystemUsage()
if (usage && Array.isArray(usage.gpus)) {
const uuidToUsage: Record<string, { total_memory: number; used_memory: number }> = {}
for (const u of usage.gpus as any[]) {
if (u && typeof u.uuid === 'string') {
uuidToUsage[u.uuid] = u
}
}
const indexToAmdUuid = new Map<number, string>()
for (const gpu of sysInfo.gpus as any[]) {
const vendorStr =
typeof gpu?.vendor === 'string'
? gpu.vendor
: typeof gpu?.vendor === 'object' && gpu.vendor !== null
? String(gpu.vendor)
: ''
if (
vendorStr.toUpperCase().includes('AMD') &&
gpu?.vulkan_info &&
typeof gpu.vulkan_info.index === 'number' &&
typeof gpu.uuid === 'string'
) {
indexToAmdUuid.set(gpu.vulkan_info.index, gpu.uuid)
}
}
if (indexToAmdUuid.size > 0) {
const adjusted = dList.map((dev) => {
if (dev.id?.startsWith('Vulkan')) {
const match = /^Vulkan(\d+)/.exec(dev.id)
if (match) {
const vIdx = Number(match[1])
const uuid = indexToAmdUuid.get(vIdx)
if (uuid) {
const u = uuidToUsage[uuid]
if (
u &&
typeof u.total_memory === 'number' &&
typeof u.used_memory === 'number'
) {
const total = Math.max(0, Math.floor(u.total_memory))
const free = Math.max(0, Math.floor(u.total_memory - u.used_memory))
return { ...dev, mem: total, free }
}
}
}
}
return dev
})
return adjusted
}
}
}
} catch (e) {
logger.warn('Device memory override (AMD/Linux) failed:', e)
}
return dList
} catch (error) {
logger.error('Failed to query devices:\n', error)
@ -2013,6 +2106,7 @@ export default class llamacpp_extension extends AIEngine {
}
async embed(text: string[]): Promise<EmbeddingResponse> {
// Ensure the sentence-transformer model is present
let sInfo = await this.findSessionByModel('sentence-transformer-mini')
if (!sInfo) {
const downloadedModelList = await this.list()
@ -2026,30 +2120,45 @@ export default class llamacpp_extension extends AIEngine {
'https://huggingface.co/second-state/All-MiniLM-L6-v2-Embedding-GGUF/resolve/main/all-MiniLM-L6-v2-ggml-model-f16.gguf?download=true',
})
}
sInfo = await this.load('sentence-transformer-mini')
// Load specifically in embedding mode
sInfo = await this.load('sentence-transformer-mini', undefined, true)
}
const attemptRequest = async (session: SessionInfo) => {
const baseUrl = `http://localhost:${session.port}/v1/embeddings`
const headers = {
'Content-Type': 'application/json',
'Authorization': `Bearer ${session.api_key}`,
}
const body = JSON.stringify({
input: text,
model: session.model_id,
encoding_format: 'float',
})
const response = await fetch(baseUrl, {
method: 'POST',
headers,
body,
})
return response
}
// First try with the existing session (may have been started without --embedding previously)
let response = await attemptRequest(sInfo)
// If embeddings endpoint is not available (501), reload with embedding mode and retry once
if (response.status === 501) {
try {
await this.unload('sentence-transformer-mini')
} catch {}
sInfo = await this.load('sentence-transformer-mini', undefined, true)
response = await attemptRequest(sInfo)
}
if (!response.ok) {
const errorData = await response.json().catch(() => null)
throw new Error(
`API request failed with status ${response.status}: ${JSON.stringify(errorData)}`
)
}
const responseData = await response.json()
@ -2151,7 +2260,12 @@ export default class llamacpp_extension extends AIEngine {
if (mmprojPath && !this.isAbsolutePath(mmprojPath))
mmprojPath = await joinPath([await getJanDataFolderPath(), path])
try {
const result = await planModelLoadInternal(path, this.memoryMode, mmprojPath, requestedCtx)
const result = await planModelLoadInternal(
path,
this.memoryMode,
mmprojPath,
requestedCtx
)
return result
} catch (e) {
throw new Error(String(e))
@ -2279,12 +2393,18 @@ export default class llamacpp_extension extends AIEngine {
}
// Calculate text tokens
const messages = JSON.stringify({ messages: opts.messages })
// Use chat_template_kwargs from opts if provided, otherwise default to disable enable_thinking
const tokenizeRequest = {
messages: opts.messages,
chat_template_kwargs: opts.chat_template_kwargs || {
enable_thinking: false,
},
}
let parseResponse = await fetch(`${baseUrl}/apply-template`, {
method: 'POST',
headers: headers,
body: messages,
body: JSON.stringify(tokenizeRequest),
})
if (!parseResponse.ok) {

View File

@ -0,0 +1,33 @@
{
"name": "@janhq/rag-extension",
"productName": "RAG Tools",
"version": "0.1.0",
"description": "Registers RAG tools and orchestrates retrieval across parser, embeddings, and vector DB",
"main": "dist/index.js",
"module": "dist/module.js",
"author": "Jan <service@jan.ai>",
"license": "AGPL-3.0",
"scripts": {
"build": "rolldown -c rolldown.config.mjs",
"build:publish": "rimraf *.tgz --glob || true && yarn build && npm pack && cpx *.tgz ../../pre-install"
},
"devDependencies": {
"cpx": "1.5.0",
"rimraf": "6.0.1",
"rolldown": "1.0.0-beta.1",
"typescript": "5.9.2"
},
"dependencies": {
"@janhq/core": "../../core/package.tgz",
"@janhq/tauri-plugin-rag-api": "link:../../src-tauri/plugins/tauri-plugin-rag",
"@janhq/tauri-plugin-vector-db-api": "link:../../src-tauri/plugins/tauri-plugin-vector-db"
},
"files": [
"dist/*",
"package.json"
],
"installConfig": {
"hoistingLimits": "workspaces"
},
"packageManager": "yarn@4.5.3"
}

View File

@ -0,0 +1,14 @@
import { defineConfig } from 'rolldown'
import settingJson from './settings.json' with { type: 'json' }
export default defineConfig({
input: 'src/index.ts',
output: {
format: 'esm',
file: 'dist/index.js',
},
platform: 'browser',
define: {
SETTINGS: JSON.stringify(settingJson),
},
})
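The `define` block above inlines settings.json into the bundle as a `SETTINGS` constant, the same build-time injection pattern used for `MENLO_PLATFORM_BASE_URL` elsewhere in this changeset. A hypothetical consumer in the extension entry point (not shown in this diff) might read it like this:

```ts
// SETTINGS is replaced at build time by rolldown's `define`; declare it for the compiler.
type SettingEntry = {
  key: string
  controllerType: string
  controllerProps: { value: unknown }
}
declare const SETTINGS: SettingEntry[]

// Hypothetical helper: look up a setting's default value from the injected JSON.
export function getSettingDefault<T>(key: string, fallback: T): T {
  const entry = SETTINGS.find((s) => s.key === key)
  return entry ? (entry.controllerProps.value as T) : fallback
}

// e.g. getSettingDefault('retrieval_limit', 3) or getSettingDefault('chunk_size_tokens', 512)
```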

View File

@ -0,0 +1,58 @@
[
{
"key": "enabled",
"titleKey": "settings:attachments.enable",
"descriptionKey": "settings:attachments.enableDesc",
"controllerType": "checkbox",
"controllerProps": { "value": true }
},
{
"key": "max_file_size_mb",
"titleKey": "settings:attachments.maxFile",
"descriptionKey": "settings:attachments.maxFileDesc",
"controllerType": "input",
"controllerProps": { "value": 20, "type": "number", "min": 1, "max": 200, "step": 1, "textAlign": "right" }
},
{
"key": "retrieval_limit",
"titleKey": "settings:attachments.topK",
"descriptionKey": "settings:attachments.topKDesc",
"controllerType": "input",
"controllerProps": { "value": 3, "type": "number", "min": 1, "max": 20, "step": 1, "textAlign": "right" }
},
{
"key": "retrieval_threshold",
"titleKey": "settings:attachments.threshold",
"descriptionKey": "settings:attachments.thresholdDesc",
"controllerType": "input",
"controllerProps": { "value": 0.3, "type": "number", "min": 0, "max": 1, "step": 0.01, "textAlign": "right" }
},
{
"key": "chunk_size_tokens",
"titleKey": "settings:attachments.chunkSize",
"descriptionKey": "settings:attachments.chunkSizeDesc",
"controllerType": "input",
"controllerProps": { "value": 512, "type": "number", "min": 64, "max": 8192, "step": 64, "textAlign": "right" }
},
{
"key": "overlap_tokens",
"titleKey": "settings:attachments.chunkOverlap",
"descriptionKey": "settings:attachments.chunkOverlapDesc",
"controllerType": "input",
"controllerProps": { "value": 64, "type": "number", "min": 0, "max": 1024, "step": 16, "textAlign": "right" }
},
{
"key": "search_mode",
"titleKey": "settings:attachments.searchMode",
"descriptionKey": "settings:attachments.searchModeDesc",
"controllerType": "dropdown",
"controllerProps": {
"value": "auto",
"options": [
{ "name": "Auto (recommended)", "value": "auto" },
{ "name": "ANN (sqlite-vec)", "value": "ann" },
{ "name": "Linear", "value": "linear" }
]
}
}
]
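On the interaction between `chunk_size_tokens` and `overlap_tokens`: consecutive chunks share `overlap_tokens` tokens so retrieval does not lose context at chunk boundaries. The sketch below is only an illustration with a naive whitespace "tokenizer" and the defaults from settings.json; the extension's real chunker and tokenizer are not part of this diff.

```ts
// Split text into overlapping chunks; each step advances by (size - overlap) tokens.
export function chunkText(
  text: string,
  chunkSizeTokens = 512,
  overlapTokens = 64
): string[] {
  const tokens = text.split(/\s+/).filter(Boolean)
  const step = Math.max(1, chunkSizeTokens - overlapTokens)
  const chunks: string[] = []
  for (let start = 0; start < tokens.length; start += step) {
    chunks.push(tokens.slice(start, start + chunkSizeTokens).join(' '))
    if (start + chunkSizeTokens >= tokens.length) break
  }
  return chunks
}
```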

Some files were not shown because too many files have changed in this diff.