6089 Commits

Author SHA1 Message Date
Ramon Perez
2a5cffd64c Merge branch 'dev' into rp/v2-docs-improvements 2025-08-25 21:55:16 +10:00
Ramon Perez
31364dd9f8 main pages revamped and new docs updated 2025-08-25 21:53:04 +10:00
Faisal Amir
4137821e53 fix: system monitor window permission 2025-08-25 17:38:30 +07:00
Faisal Amir
e376314315 chore: update filter hub while searching 2025-08-25 16:51:30 +07:00
Faisal Amir
e73a710c06 fix/update-ui-info 2025-08-25 16:45:59 +07:00
Faisal Amir
2472cc949a
Merge pull request #6281 from menloresearch/fix/handle-vision-remote-model
fix: handle manual toggle vision for remote model
2025-08-25 13:36:27 +07:00
Faisal Amir
62eb422934 chore: show model setting only for local provider 2025-08-25 11:26:56 +07:00
Faisal Amir
9ff46757bb
Merge pull request #6282 from menloresearch/fix/blog-og-image
fix: jan research blog `ogimage`
2025-08-25 11:04:50 +07:00
Faisal Amir
91eb37c240 fix/blog-og-image 2025-08-25 10:59:49 +07:00
Faisal Amir
8d06c3addf chore: add tooltip visions 2025-08-25 10:47:18 +07:00
Faisal Amir
45ba949d96 fix: toggle vision for remote model 2025-08-25 10:28:18 +07:00
lugnicca
1a6a37c003 fix: escape key was closing modal instead of only combobox and remove arrow left/right closing combobox 2025-08-24 00:40:02 +02:00
lugnicca
6c0e6dce06 fix: remove unused keyRepeatTimeoutRef 2025-08-23 18:32:12 +02:00
lugnicca
639bd5fb27 fix: set Escape in keyboard navigation 2025-08-23 18:08:29 +02:00
lugnicca
aa568e6290 fix: remove ModelProvider type 2025-08-23 15:07:42 +02:00
lugnicca
1bf5802a68 refactor: update MockModelProvider type to use ModelProvider and clean up test setup 2025-08-23 02:37:15 +02:00
lugnicca
4e8dd9281f refactor: simplify event handling and fix test setup in ModelCombobox 2025-08-23 02:37:14 +02:00
lugnicca
9a68631d39 refactor: more modular error handling in fetchModelsFromProvider function 2025-08-23 02:37:14 +02:00
lugnicca
f35e6cdae8 refactor: clean model selector and add more tests 2025-08-23 02:37:14 +02:00
lugnicca
3339629747 test: add unit tests for ModelCombobox, useProviderModels and providers 2025-08-23 02:37:14 +02:00
lugnicca
5d9c3ab462 feat: add model selector with fetching from /v1/models endpoints when adding models 2025-08-23 02:36:38 +02:00
Emre Can Kartal
8548e0fb12
Merge pull request #6274 from menloresearch/fix/og-image-date-update
docs: fix OG image URL and update publication date
2025-08-22 15:52:25 +03:00
Emre Can Kartal
82eb18bc00
Merge branch 'dev' into fix/og-image-date-update 2025-08-22 15:41:01 +03:00
eckartal
e331e8fb76 docs: fix OG image URL and update publication date
- Fix OG image URL to full https://jan.ai/post/_assets/jan-research.jpeg for Twitter preview
- Update publication date to 2025-08-22
- Ensure social media platforms can properly display the image
2025-08-22 15:33:58 +03:00
Emre Can Kartal
61ae8eb88b
Merge pull request #6197 from menloresearch/blog/jan-v1-research
docs: add blog for jan v1 research
2025-08-22 15:00:07 +03:00
eckartal
37110ea262 docs: update Jan v1 research blog with professional styling and OG image
- Updated title to 'Jan v1 for Deep Research'
- Added professional cookbook-style formatting inspired by OpenAI guide
- Added performance summary with benchmark results (91.1% vs 83.2%)
- Added new OG image (jan-research.jpeg)
- Improved content structure and readability
2025-08-22 14:35:17 +03:00
Faisal Amir
63acb3a275
Merge pull request #6272 from menloresearch/fix/copy-mmproj-setting
fix: update copy offload_mmproj setting desc
2025-08-22 17:06:09 +07:00
Ramon Perez
51086f39ca removed orphan pages and polished wording of main page 2025-08-22 18:59:22 +10:00
Faisal Amir
7801f9c330 fix: update copy mmproj setting desc 2025-08-22 15:27:07 +07:00
Faisal Amir
f7e2c49154
Merge pull request #6271 from menloresearch/fix/compatibility-imported-model
fix: compatibility imported model
2025-08-22 14:06:05 +07:00
Faisal Amir
f6e4d55f5e fix: compatibility imported model 2025-08-22 13:20:57 +07:00
Akarshan Biswas
39e8d3b80c
fix: update linux build script to be consistent with CI (#6269)
The local build script for Linux was failing due to a bundling error. This commit updates the `build:tauri:linux` script in `package.json` to be consistent with the CI build pipeline, which resolves the issue.

The updated script now includes:
- **`NO_STRIP=1`**: This environment variable prevents the `linuxdeploy` utility from stripping debugging symbols, which was a potential cause of the bundling failure.
- **`--verbose`**: This flag provides more detailed output during the build, which can be useful for debugging similar issues in the future.
2025-08-22 09:22:22 +05:30
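Based on the commit message above, the `build:tauri:linux` entry in `package.json` would look roughly like this (a sketch; the exact tauri invocation is an assumption, only `NO_STRIP=1` and `--verbose` are confirmed by the commit):

```json
{
  "scripts": {
    "build:tauri:linux": "NO_STRIP=1 yarn tauri build --verbose"
  }
}
```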
new5558
9d15453b66 docs: update jan v1 research blog title and keywords 3 2025-08-22 09:39:34 +07:00
Akarshan Biswas
64a608039b
fix: check for env value before setting (#6266)
* fix: check for env value before setting

* Use empty instead of none
2025-08-21 22:55:49 +05:30
Nguyen Ngoc Minh
2cf6464780
Merge pull request #6264 from menloresearch/ci/add-trigger-condition-assign-milestone
ci: add job condition for auto assign milestone
2025-08-21 14:35:47 +00:00
Minh141120
2000fc31b5 ci: add job condition for auto assign milestone 2025-08-21 21:25:08 +07:00
new5558
1180f3e42b docs: update jan v1 research blog title and keywords 2 2025-08-21 18:25:02 +07:00
new5558
404695f1d8 docs: update jan v1 research blog title and keywords 2025-08-21 18:19:25 +07:00
Piotr Orzechowski
ef90f07db8
fix: add missing Polish translations (#6262) 2025-08-21 17:46:48 +07:00
Akarshan Biswas
510c70bdf7
feat: Add model compatibility check and memory estimation (#6243)
* feat: Add model compatibility check and memory estimation

This commit introduces a new feature to check if a given model is supported based on available device memory.

The change includes:
- A new `estimateKVCache` method that calculates the required memory for the model's KV cache. It uses GGUF metadata such as `block_count`, `head_count`, `key_length`, and `value_length` to perform the calculation.
- An `isModelSupported` method that combines the model file size and the estimated KV cache size to determine the total memory required. It then checks if any available device has sufficient free memory to load the model.
- An updated error message for the `version_backend` check to be more user-friendly, suggesting a stable internet connection as a potential solution for backend setup failures.

This functionality helps prevent the application from attempting to load models that would exceed the device's memory capacity, leading to more stable and predictable behavior.

fixes: #5505

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Extend this to available system RAM if GGML device is not available

* fix: Improve model metadata and memory checks

This commit refactors the logic for checking if a model is supported by a system's available memory.

**Key changes:**
- **Remote model support**: The `read_gguf_metadata` function can now fetch metadata from a remote URL by reading the file in chunks.
- **Improved KV cache size calculation**: The KV cache size is now estimated more accurately by using `attention.key_length` and `attention.value_length` from the GGUF metadata, with a fallback to `embedding_length`.
- **Granular memory check statuses**: The `isModelSupported` function now returns a more specific status (`'RED'`, `'YELLOW'`, `'GREEN'`) to indicate whether the model weights or the KV cache are too large for the available memory.
- **Consolidated logic**: The logic for checking local and remote models has been consolidated into a single `isModelSupported` function, improving code clarity and maintainability.

These changes provide more robust and informative model compatibility checks, especially for models hosted on remote servers.

* Update extensions/llamacpp-extension/src/index.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Make ctx_size optional and use sum free memory across ggml devices

* feat: hub and dropdown model selection handle model compatibility

* feat: update badge model info color

* chore: enable detail page to get compatibility model

* chore: update copy

* chore: update shrink indicator UI

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-08-21 16:13:50 +05:30
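The KV-cache estimate described in #6243 can be sketched in TypeScript as follows. The GGUF field names (`block_count`, `head_count`, `key_length`, `value_length`) follow the commit message; the f16 cache assumption and the function shape are illustrative, not the extension's actual code:

```typescript
// Hypothetical subset of GGUF metadata used for the estimate
interface GgufKvMeta {
  block_count: number   // number of transformer layers
  head_count_kv: number // number of KV heads (GQA-aware)
  key_length: number    // per-head key dimension
  value_length: number  // per-head value dimension
}

// Sketch: bytes required for the KV cache at a given context size,
// assuming an f16 cache (2 bytes per element).
function estimateKVCache(meta: GgufKvMeta, ctxSize: number): number {
  const bytesPerElement = 2 // f16
  // Elements stored per token, per layer: one key and one value per KV head
  const perTokenPerLayer = meta.head_count_kv * (meta.key_length + meta.value_length)
  return meta.block_count * ctxSize * perTokenPerLayer * bytesPerElement
}
```

An `isModelSupported`-style check would then compare model file size plus this estimate against free device memory, as the commit describes.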
Akarshan Biswas
9c25480c7b
fix: Update placeholder text and error message (#6263)
This commit improves the clarity of the llama.cpp extension.

- Corrected a placeholder example from `GGML_VK_VISIBLE_DEVICES='0,1'` to `GGML_VK_VISIBLE_DEVICES=0,1` for better accuracy.
- Changed an ambiguous error message from `"Failed to load llama-server: ${error}"` to the more specific `"Failed to load llamacpp backend"`.
2025-08-21 16:01:31 +05:30
Akarshan Biswas
5c3a6fec32
feat: Add support for custom environmental variables to llama.cpp (#6256)
This commit adds a new setting `llamacpp_env` to the llama.cpp extension, allowing users to specify custom environment variables. These variables are passed to the backend process when it starts.

A new function `parseEnvFromString` is introduced to handle the parsing of the semicolon-separated key-value pairs from the user input. The environment variables are then used in the `load` function and when listing available devices. This enables more flexible configuration of the llama.cpp backend, such as specifying visible GPUs for Vulkan.

This change also updates the Tauri command `get_devices` to accept environment variables, ensuring that device discovery respects the user's settings.
2025-08-21 15:50:37 +05:30
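The `parseEnvFromString` helper from #6256 parses semicolon-separated key-value pairs; a minimal sketch (the skip-malformed-entries behavior is an assumption, only the input format is stated in the commit):

```typescript
// Sketch: parse "KEY1=v1;KEY2=v2" into a plain record of env vars.
function parseEnvFromString(input: string): Record<string, string> {
  const env: Record<string, string> = {}
  for (const pair of input.split(';')) {
    const entry = pair.trim()
    if (!entry) continue
    const eq = entry.indexOf('=')
    if (eq <= 0) continue // skip entries with no key or no '='
    const key = entry.slice(0, eq).trim()
    const value = entry.slice(eq + 1).trim()
    env[key] = value
  }
  return env
}
```

For example, `GGML_VK_VISIBLE_DEVICES=0,1;GGML_VK_DISABLE=0` would yield two entries, matching the Vulkan GPU-selection use case the commit mentions.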
Louis
5c4deff215
Merge pull request #6260 from menloresearch/fix/bring-back-manual-model-capability-edit
fix: bring back manual model capability edit modal
2025-08-21 16:31:17 +07:00
Dinh Long Nguyen
32a2ca95b6
feat: gguf file size + hash validation (#5266) (#6259)
* feat: gguf file size + hash validation

* fix tests fe

* update cargo tests

* handle asyn download for both models and mmproj

* move progress tracker to models

* handle file download cancelled

* add cancellation mid hash run
2025-08-21 16:17:58 +07:00
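The hash validation with mid-run cancellation from #6259 can be sketched with Node's incremental hashing API (the function name and cancellation-flag shape are assumptions; only "hash validation" and "cancellation mid hash run" come from the commit):

```typescript
import { createHash } from 'node:crypto'

// Sketch: hash downloaded chunks incrementally so verification can be
// aborted partway through, e.g. when the user cancels the download.
// Returns the hex SHA-256 digest, or null if cancelled mid-run.
function hashChunks(
  chunks: Uint8Array[],
  isCancelled: () => boolean
): string | null {
  const hash = createHash('sha256')
  for (const chunk of chunks) {
    if (isCancelled()) return null // abort between chunks
    hash.update(chunk)
  }
  return hash.digest('hex')
}
```

The computed digest would then be compared against the expected hash for the GGUF file, alongside the file-size check.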
Nguyen Ngoc Minh
41b4cc3bb3
Merge pull request #6261 from menloresearch/ci/add-autoqa-migration-workflow
ci: add autoqa migration workflow
2025-08-21 09:15:42 +00:00
Nguyen Ngoc Minh
e096c3114f
Create autoqa-migration.yml 2025-08-21 16:14:13 +07:00
Louis
8d3fcf1680
Merge pull request #6257 from menloresearch/fix/enable-back-app-language-setting
fix: enable back app language setting
2025-08-21 13:15:31 +07:00
Louis
9bc243c3f7
Merge branch 'dev' into fix/enable-back-app-language-setting 2025-08-21 12:53:21 +07:00
Louis
8e7378b70f
Merge pull request #6255 from menloresearch/fix/remove-experimental-toggle
fix: remove experimental toggle
2025-08-21 12:51:25 +07:00
Faisal Amir
7b9e752301
Merge pull request #6250 from menloresearch/feat/local-api-server
feat: run on startup setting for local api server
2025-08-21 12:43:13 +07:00