5583 Commits

Emre Can Kartal
c1cdc434a8
Add gpt-oss local installation blog post (#6075)
- Complete beginner guide for running OpenAI's gpt-oss locally
- Step-by-step instructions using Jan AI
- Alternative installation methods (llama.cpp, Ollama, LM Studio)
- Performance benchmarks and troubleshooting guide
- SEO-optimized with FAQ section and comparison tables
- 4 supporting screenshots showing the installation process
2025-08-07 09:48:05 +07:00
Nguyen Ngoc Minh
06941b932d
Merge pull request #6078 from menloresearch/ci/deprecate-jan-docs-new-release
ci: deprecate jan docs new release workflow in favor of jan-docs
2025-08-07 00:22:22 +07:00
Minh141120
c3cca93850 ci: deprecate jan docs new release workflow in favor of jan-docs 2025-08-07 00:04:21 +07:00
Nguyen Ngoc Minh
d55a5e695f
Merge pull request #6073 from menloresearch/chore/update-workflow-name
chore: update workflow name
2025-08-06 23:46:25 +07:00
Nguyen Ngoc Minh
397f71db6e
chore: update workflow name 2025-08-06 17:36:03 +07:00
Louis
b0785e9db0
Merge pull request #6072 from menloresearch/fix/should-not-include-reasoning-content-in-completion-request
fix: should not include reasoning text in the chat completion request
2025-08-06 17:34:16 +07:00
Louis
0b1b84dbf4
test: add tests for new change 2025-08-06 17:13:22 +07:00
Louis
fc815dc98e
fix: should not include reasoning text in the chat completion request 2025-08-06 17:07:32 +07:00
Faisal Amir
ffdb6829e1
fix: gpt-oss thinking block (#6071) 2025-08-06 16:10:24 +07:00
Ramon Perez
1739958664
Added new model provider and updated main repo readme 2025-08-06 13:14:28 +10:00
Ramon Perez
683fb34709 fixed components in troubleshooting tab 2025-08-06 12:49:01 +10:00
Ramon Perez
2306da0e84 added troubleshooting server instructions to config 2025-08-06 12:38:55 +10:00
Akarshan Biswas
fec4cce560 fix: Add conditional Vulkan support check for better GPU compatibility (#6066)
Changes:
- Introduce conditional Vulkan support check for discrete GPUs with 6GB+ VRAM

fixes: #6009
2025-08-06 12:24:21 +10:00
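The Vulkan gate described in #6066 can be sketched as a simple predicate over the detected GPUs. This is a hypothetical Python illustration of the stated rule (discrete GPU with 6GB+ VRAM); the device-record fields are assumptions, not Jan's actual hardware-detection API:

```python
MIN_VULKAN_VRAM_MB = 6 * 1024  # 6 GB threshold from #6066

def vulkan_supported(gpus):
    """Return True if any discrete GPU has at least 6 GB of VRAM.

    'discrete' and 'vram_mb' are hypothetical field names for the
    detected device records.
    """
    return any(
        gpu.get("discrete", False) and gpu.get("vram_mb", 0) >= MIN_VULKAN_VRAM_MB
        for gpu in gpus
    )

print(vulkan_supported([{"discrete": True, "vram_mb": 8192}]))   # True
print(vulkan_supported([{"discrete": False, "vram_mb": 16384}])) # integrated only: False
```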
Louis
e74601443f chore: add deep_link register_all 2025-08-06 12:24:21 +10:00
Louis
f41a04b1a2 fix: test env 2025-08-06 12:24:21 +10:00
Louis
3bdd5f00b6 chore: able to disable updater via env flag 2025-08-06 12:24:21 +10:00
Louis
de146f363a test: add tests 2025-08-06 12:24:21 +10:00
Louis
83527a7533 fix: Jan hub repo detail and deep link 2025-08-06 12:24:21 +10:00
Faisal Amir
026b21f779 feat: jinja template customization per model instead of provider level (#6053) 2025-08-06 12:24:21 +10:00
Akarshan Biswas
dcffa4fa0a Fix: Improve Llama.cpp model path handling and error handling (#6045)
* Improve Llama.cpp model path handling and validation

This commit refactors the load_llama_model function to improve how it handles and validates the model path.

Previously, the function extracted the model path but did not perform any validation. This change adds the following improvements:

- It now checks for the presence of the -m flag.
- It verifies that a path is provided after the -m flag.
- It validates that the specified model path actually exists on the filesystem.
- It ensures that the SessionInfo struct stores the canonical display path of the model, which is a more robust approach.

These changes make the model loading process more reliable and provide better error handling for invalid or missing model paths.

* Exp: Use short path on Windows

* Fix: Remove error channel and handling in llama.cpp server loading

The previous implementation used a channel to receive error messages from the llama.cpp server's stdout. However, this proved unreliable, as path names can contain the very error strings we check for, even during normal operation. This commit removes the error channel and the associated error-handling logic.
The server readiness is still determined by checking for the "server is listening" message in stdout. Errors are now handled by relying on the process exit code and capturing the full stderr output if the process fails to start or exits unexpectedly. This approach provides a more robust and accurate error detection mechanism.

* Add else block in Windows path handling

* Add some path related tests

* Fix windows tests
2025-08-06 12:24:21 +10:00
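The validation steps listed in #6045 (check for -m, require a path after it, require that the path exists, store the canonical path) can be sketched as follows. The real function, load_llama_model, is Rust; this is a hypothetical Python rendering of the same checks:

```python
import os

def extract_model_path(args):
    """Validate and return the model path passed after the -m flag.

    Python sketch of the checks described in #6045: the -m flag must be
    present, must be followed by a path, and that path must exist on
    disk. The canonical (resolved absolute) path is returned, mirroring
    how the commit stores the canonical path in SessionInfo.
    """
    if "-m" not in args:
        raise ValueError("missing -m flag")
    i = args.index("-m")
    if i + 1 >= len(args):
        raise ValueError("no model path provided after -m flag")
    path = args[i + 1]
    if not os.path.exists(path):
        raise FileNotFoundError(f"model path does not exist: {path}")
    return os.path.realpath(path)
```

Validating up front like this turns a confusing late failure inside the server process into an immediate, specific error at load time.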
Faisal Amir
318f6f504f feat: recommended label llamacpp setting (#6052)
* feat: recommended label llamacpp

* chore: remove log
2025-08-06 12:24:21 +10:00
Minh141120
8e4c696583 ci: disable autoqa on nightly build 2025-08-06 12:24:21 +10:00
Louis
7e52512d0e fix: should check for invalid backend to cover previous missing backend case 2025-08-06 12:24:21 +10:00
Louis
eb13189d07 fix: run dev should reinstall extensions 2025-08-06 12:24:21 +10:00
Louis
026383e92d test: add tests for new changes 2025-08-06 12:24:21 +10:00
Louis
4b6269a4f0 fix: add missing cuda backend support 2025-08-06 12:24:21 +10:00
Minh141120
3ffb30b544 chore: skip nightly build workflow for external contributor 2025-08-06 12:24:21 +10:00
Sherzod Mutalov
5f06a35f4e fix: use attributes to check the feature existence 2025-08-06 12:24:21 +10:00
Sherzod Mutalov
280ea1aa9f chore: extracted the macOS AVX2 check into a utility function 2025-08-06 12:23:18 +10:00
Sherzod Mutalov
ad9c4854a9 chore: added comments 2025-08-06 12:20:30 +10:00
Sherzod Mutalov
49c8334e40 chore: replaced with macros call to remove warning 2025-08-06 12:20:30 +10:00
Sherzod Mutalov
f1dd42de9e fix: use system npx on old Macs 2025-08-06 12:20:30 +10:00
Chaiyapruek Muangsiri
4e31e1d3a8 remove unnecessary try catch block 2025-08-06 12:20:30 +10:00
Chaiyapruek Muangsiri
00f686a733 fix: connected server status not syncing when editing MCP JSON 2025-08-06 12:20:30 +10:00
Ramon Perez
890a917dec removed nextra component in astro site 2025-08-06 12:20:30 +10:00
Akarshan Biswas
8d147c1774
fix: Add conditional Vulkan support check for better GPU compatibility (#6066)
Changes:
- Introduce conditional Vulkan support check for discrete GPUs with 6GB+ VRAM

fixes: #6009
2025-08-06 07:20:44 +05:30
Louis
c642076ec3
Merge pull request #6024 from menloresearch/fix/jan-hub-repo-data-and-deeplink
fix: Jan hub model detail and deep link
2025-08-06 08:46:07 +07:00
Louis
3b349a60f1 chore: add deep_link register_all 2025-08-05 22:32:27 +07:00
Ramon Perez
4ee6873ca5
Update docs/src/pages/docs/remote-models/huggingface.mdx
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-08-06 00:24:18 +10:00
Ramon Perez
fc4ecd3412
Update README.md
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-08-06 00:24:10 +10:00
Faisal Amir
5d001dfd5a
feat: jinja template customization per model instead of provider level (#6053) 2025-08-05 21:21:41 +07:00
Ramon Perez
f95c6c4d3d updated readme 2025-08-05 23:11:05 +10:00
Ramon Perez
4c66b1f65b added huggingface page and updated readme 2025-08-05 22:57:49 +10:00
Akarshan Biswas
088b9d7f25
Fix: Improve Llama.cpp model path handling and error handling (#6045)
* Improve Llama.cpp model path handling and validation

This commit refactors the load_llama_model function to improve how it handles and validates the model path.

Previously, the function extracted the model path but did not perform any validation. This change adds the following improvements:

- It now checks for the presence of the -m flag.
- It verifies that a path is provided after the -m flag.
- It validates that the specified model path actually exists on the filesystem.
- It ensures that the SessionInfo struct stores the canonical display path of the model, which is a more robust approach.

These changes make the model loading process more reliable and provide better error handling for invalid or missing model paths.

* Exp: Use short path on Windows

* Fix: Remove error channel and handling in llama.cpp server loading

The previous implementation used a channel to receive error messages from the llama.cpp server's stdout. However, this proved unreliable, as path names can contain the very error strings we check for, even during normal operation. This commit removes the error channel and the associated error-handling logic.
The server readiness is still determined by checking for the "server is listening" message in stdout. Errors are now handled by relying on the process exit code and capturing the full stderr output if the process fails to start or exits unexpectedly. This approach provides a more robust and accurate error detection mechanism.

* Add else block in Windows path handling

* Add some path related tests

* Fix windows tests
2025-08-05 14:17:19 +05:30
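The readiness/error strategy this commit settles on (wait for one well-known stdout line; report failures via exit code plus captured stderr, instead of grepping stdout for error strings that path names can false-positive on) can be sketched like this. A hypothetical Python illustration; the real loader is Rust, and only the "server is listening" marker comes from the commit:

```python
import subprocess

def wait_until_ready(cmd, ready_marker="server is listening"):
    """Start a server process and wait until it prints the readiness marker.

    Readiness is detected by scanning stdout for a single well-known
    line. If stdout closes without the marker, the process died during
    startup, and the error is reported from the exit code and captured
    stderr rather than by pattern-matching stdout.
    """
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
    )
    for line in proc.stdout:
        if ready_marker in line:
            return proc  # server is up; caller owns the process
    # stdout closed without the marker: startup failed
    proc.wait()
    raise RuntimeError(
        f"server exited with code {proc.returncode}: {proc.stderr.read()}"
    )
```

Keying success on one exact marker and failure on the exit code avoids the false positives the commit describes, where a model path containing an "error"-like substring tripped the old stdout scanner.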
Faisal Amir
99567a1102
feat: recommended label llamacpp setting (#6052)
* feat: recommended label llamacpp

* chore: remove log
2025-08-05 13:55:33 +07:00
Louis
065a850a94 fix: test env 2025-08-05 13:44:40 +07:00
Louis
b8070f1871 chore: able to disable updater via env flag 2025-08-05 13:44:40 +07:00
Louis
90e46a2696 test: add tests 2025-08-05 13:44:40 +07:00
Louis
7f0c605651 fix: Jan hub repo detail and deep link 2025-08-05 13:44:40 +07:00
Nguyen Ngoc Minh
339a1957c8
Merge pull request #6051 from menloresearch/ci/disable-autoqa-on-nightly-build
ci: disable autoqa on nightly build
2025-08-05 12:47:31 +07:00