10 Commits

Author SHA1 Message Date
Akarshan Biswas
92703bceb2
refactor: move thinking toggle to runtime settings for dynamic control (#5800)
* refactor: move thinking toggle to runtime settings for per-message control

Replaces the static `reasoning_budget` config with a dynamic `enable_thinking` flag under `chat_template_kwargs`, allowing models like Jan-nano and Qwen3 to enable/disable thinking behavior at runtime, even mid-conversation.
Requires a UI update.

* remove engine argument
2025-07-17 20:18:24 +05:30
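A minimal sketch of how the per-message toggle described in the commit above could be passed through `chat_template_kwargs` in an OpenAI-compatible chat-completion request to a llama.cpp server. The `enable_thinking` key comes from the commit; the endpoint path, function name, and response handling are illustrative assumptions, not the extension's actual code.

```typescript
// Sketch: forward a per-request thinking toggle to a llama.cpp server.
// Endpoint URL and `enableThinking` parameter are illustrative.
async function chatCompletion(
  baseUrl: string,
  messages: { role: string; content: string }[],
  enableThinking: boolean
): Promise<string> {
  const response = await fetch(`${baseUrl}/v1/chat/completions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      messages,
      // Forwarded to the Jinja chat template; models such as Jan-nano and
      // Qwen3 read `enable_thinking` to switch reasoning on or off per request.
      chat_template_kwargs: { enable_thinking: enableThinking },
    }),
  })
  const data = await response.json()
  return data.choices[0].message.content
}
```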
Akarshan Biswas
96ba42e411
feat: Add missing ctx-shift toggle (#5765)
* feat: Add missing ctx_shift

* fix typo

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* refine description

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-07-14 11:51:34 +05:30
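A sketch of how a boolean `ctx_shift` setting might be mapped onto llama-server launch arguments. The exact flag name has varied across llama.cpp releases, so the `--no-context-shift` flag used here is an assumption for illustration.

```typescript
// Sketch: translate the ctx_shift toggle into CLI arguments for llama-server.
// Only emits a flag when the user explicitly disables context shifting.
function contextShiftArgs(ctxShift: boolean | undefined): string[] {
  return ctxShift === false ? ['--no-context-shift'] : []
}
```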
Akarshan
ffef7b9cab enhancement: Add custom Jinja chat template option
Adds a new configuration option `chat_template` to the Llama.cpp extension, allowing users to define a custom Jinja chat template for the model.

The template can be provided via a new input field in the settings, and if set, it will be passed to the Llama.cpp backend using the `--chat-template` argument. This enhances flexibility for users who require specific chat formatting beyond the GGUF default.

The `chat_template` is added to the `LlamacppConfig` type and conditionally pushed to the command arguments if it's provided. The placeholder text provides an example of a Jinja template structure.
2025-07-03 23:38:16 +07:00
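A minimal sketch of the conditional described in the commit above: push `--chat-template` onto the command arguments only when the user has supplied a template. The `LlamacppConfig` name is taken from the commit message; the field shape and helper function are illustrative.

```typescript
// Sketch: pass a user-supplied Jinja template to llama-server via
// `--chat-template`, skipping the flag when no template is configured.
interface LlamacppConfig {
  chat_template?: string
}

function chatTemplateArgs(cfg: LlamacppConfig): string[] {
  if (cfg.chat_template && cfg.chat_template.trim().length > 0) {
    return ['--chat-template', cfg.chat_template]
  }
  return []
}
```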
Akarshan
40f1fd4ffd
feat: Auto-update backend implementation 2025-07-03 19:32:12 +05:30
Akarshan
0cbf35dc77
Add auto unload setting to llamacpp-extension 2025-07-02 12:28:25 +07:00
Thien Tran
1ae7c0b59a
update version/backend format. fix bugs around load() 2025-07-02 12:27:15 +07:00
Thien Tran
40cd7e962a
feat: download backend for llama.cpp extension (#5123)
* wip

* update

* add download logic

* add decompress. support delete file

* download backend upon selecting setting

* add some logging and notes

* add note on race condition

* remove then/catch

* default to none backend. only download if it's not installed

* merge version and backend. fetch version from GH

* restrict scope of output_dir

* add note on unpack
2025-07-02 12:27:13 +07:00
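A sketch of the download flow outlined in the commit above: fetch the latest llama.cpp release tag from the GitHub API, skip the download if that version/backend pair is already present, and otherwise fetch the platform-specific archive for later unpacking. The repository URL pattern, asset naming, and destination layout are assumptions for illustration, not the extension's actual implementation.

```typescript
// Sketch: resolve the latest llama.cpp release and download one backend asset.
// Asset names and paths are illustrative; the caller unpacks the archive.
import { existsSync } from 'node:fs'
import { mkdir, writeFile } from 'node:fs/promises'
import path from 'node:path'

async function fetchLatestTag(): Promise<string> {
  const res = await fetch(
    'https://api.github.com/repos/ggml-org/llama.cpp/releases/latest'
  )
  const release = await res.json()
  return release.tag_name // e.g. "b5192"
}

async function downloadBackend(
  tag: string,
  asset: string,
  destDir: string
): Promise<string> {
  const destPath = path.join(destDir, asset)
  // Only download if this version/backend pair is not already installed.
  if (existsSync(destPath)) return destPath
  const url = `https://github.com/ggml-org/llama.cpp/releases/download/${tag}/${asset}`
  const res = await fetch(url)
  if (!res.ok) throw new Error(`backend download failed: ${res.status}`)
  await mkdir(destDir, { recursive: true })
  await writeFile(destPath, Buffer.from(await res.arrayBuffer()))
  return destPath
}
```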
Akarshan Biswas
742e731e96
Add --reasoning_budget option 2025-07-02 12:27:10 +07:00
Akarshan Biswas
19274f7e69
update settings 2025-07-02 12:26:39 +07:00
Thien Tran
3f082372fd
add llamacpp-extension. can list some models 2025-07-02 12:26:39 +07:00