enhancement: Add custom Jinja chat template option

Adds a new configuration option `chat_template` to the Llama.cpp extension, allowing users to define a custom Jinja chat template for the model.

The template can be provided via a new input field in the settings, and if set, it will be passed to the Llama.cpp backend using the `--chat-template` argument. This enhances flexibility for users who require specific chat formatting beyond the GGUF default.

The `chat_template` field is added to the `LlamacppConfig` type and pushed to the command arguments only when it is non-empty. The placeholder text shows an example of a Jinja template structure.
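The flag-passing logic described above can be sketched as follows. This is a minimal, hypothetical reconstruction, not the extension's actual code: `LlamacppConfig` is trimmed to the two fields needed here, and `buildArgs` is an illustrative stand-in for the argument-building section of the real `llamacpp_extension` class.

```typescript
// Trimmed stand-in for the real LlamacppConfig type (illustration only).
type LlamacppConfig = {
  chat_template: string
  n_gpu_layers: number
}

// Build the CLI argument list the way the patch does: --chat-template is
// appended only when the user actually set a template.
function buildArgs(cfg: LlamacppConfig): string[] {
  const args: string[] = []
  if (cfg.chat_template) args.push('--chat-template', cfg.chat_template)
  args.push('-ngl', String(cfg.n_gpu_layers > 0 ? cfg.n_gpu_layers : 100))
  return args
}

// An empty string (the settings default) leaves the flag out entirely, so
// llama.cpp falls back to the chat template embedded in the GGUF metadata.
console.log(buildArgs({ chat_template: '', n_gpu_layers: 0 }))
```

Because the check is a plain truthiness test, users who never touch the new setting see no change in behavior.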
Akarshan 2025-07-02 19:56:07 +05:30 committed by Faisal Amir
parent 3a197d56c0
commit ffef7b9cab
2 changed files with 14 additions and 0 deletions


@@ -23,6 +23,18 @@
"controllerType": "checkbox",
"controllerProps": { "value": true }
},
{
"key": "chat_template",
"title": "Custom Jinja Chat template",
"description": "Custom Jinja chat_template to be used for the model",
"controllerType": "input",
"controllerProps": {
"value": "",
"placeholder": "e.g., {% for message in messages %}...{% endfor %} (default is read from GGUF)",
"type": "text",
"textAlign": "right"
}
},
{
"key": "threads",
"title": "Threads",


@@ -33,6 +33,7 @@ type LlamacppConfig = {
version_backend: string
auto_update_engine: boolean
auto_unload: boolean
chat_template: string
n_gpu_layers: number
ctx_size: number
threads: number
@@ -793,6 +794,7 @@ export default class llamacpp_extension extends AIEngine {
}
// Add remaining options from the interface
if (cfg.chat_template) args.push('--chat-template', cfg.chat_template)
args.push('-ngl', String(cfg.n_gpu_layers > 0 ? cfg.n_gpu_layers : 100))
if (cfg.threads > 0) args.push('--threads', String(cfg.threads))
if (cfg.threads_batch > 0)
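For readers unfamiliar with the setting being added, the sketch below shows what a value for the new `chat_template` field might look like and how such a template expands. This is a hedged illustration using the third-party `jinja2` package: the ChatML-style markers (`<|im_start|>`, `<|im_end|>`) are one common convention, not a requirement of llama.cpp, and llama.cpp renders the template itself; this merely previews the result.

```python
# Hypothetical example of a custom Jinja chat template (ChatML-style markers
# chosen for illustration). Requires the third-party jinja2 package.
from jinja2 import Template

CHATML_TEMPLATE = (
    "{% for message in messages %}"
    "<|im_start|>{{ message.role }}\n{{ message.content }}<|im_end|>\n"
    "{% endfor %}"
    "{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}"
)

def render_prompt(messages, add_generation_prompt=True):
    """Render a list of {role, content} messages into one prompt string."""
    return Template(CHATML_TEMPLATE).render(
        messages=messages, add_generation_prompt=add_generation_prompt
    )

print(render_prompt([{"role": "user", "content": "Hello"}]))
# <|im_start|>user
# Hello<|im_end|>
# <|im_start|>assistant
```

A string like `CHATML_TEMPLATE` is what a user would paste into the new settings input; models whose GGUF metadata already carries a suitable template can simply leave the field blank.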