Akarshan Biswas 706dad2687
feat: Add support for llamacpp MoE offloading setting (#6748)
* feat: Add support for llamacpp MoE offloading setting

Introduces the n_cpu_moe configuration setting for the llamacpp provider. This allows users to specify the number of layers whose Mixture of Experts (MoE) expert weights should be offloaded to the CPU via the --n-cpu-moe flag in llama.cpp.

This is useful for running large MoE models that do not fit entirely in VRAM, for example by keeping the attention weights on the GPU while offloading the expert FFN weights to the CPU.
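A minimal sketch of how the setting could be translated into a launch argument for the llama.cpp server; the names (LlamacppModelSettings, buildLaunchArgs) are illustrative and not the extension's actual API:

```typescript
// Illustrative sketch only: LlamacppModelSettings and buildLaunchArgs are
// hypothetical names, not the extension's real API.
interface LlamacppModelSettings {
  ngl?: number       // layers offloaded to the GPU (--n-gpu-layers)
  n_cpu_moe?: number // layers whose MoE expert weights stay on the CPU
}

function buildLaunchArgs(modelPath: string, s: LlamacppModelSettings): string[] {
  const args = ['--model', modelPath]
  if (s.ngl !== undefined) {
    args.push('--n-gpu-layers', String(s.ngl))
  }
  // Only pass --n-cpu-moe for a positive value so existing configs are unaffected.
  if (s.n_cpu_moe !== undefined && s.n_cpu_moe > 0) {
    args.push('--n-cpu-moe', String(s.n_cpu_moe))
  }
  return args
}

// Example: keep attention and dense layers on the GPU, but leave the expert
// weights of the first 20 layers in system RAM.
// buildLaunchArgs('/models/some-moe-model.gguf', { ngl: 99, n_cpu_moe: 20 })
```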

The changes include:

 - Updating the llamacpp-extension to accept and pass the --n-cpu-moe argument.

 - Adding the input field to the Model Settings UI (ModelSetting.tsx).

 - Including model setting migration logic and bumping the store version to 4 (a migration sketch follows this list).
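
The version 4 migration could look roughly like the following; the real store layout and the name of the key being replaced are not spelled out here, so SettingsStore and OLD_KEY are placeholders:

```typescript
// Hypothetical sketch of the settings-store migration, assuming a simple
// versioned key/value store; SettingsStore and OLD_KEY are placeholders.
const OLD_KEY = 'old_moe_setting' // placeholder for the key being removed
const NEW_KEY = 'n_cpu_moe'

interface SettingsStore {
  version: number
  modelSettings: Record<string, Record<string, unknown>>
}

function migrateToV4(store: SettingsStore): SettingsStore {
  if (store.version >= 4) return store

  for (const settings of Object.values(store.modelSettings)) {
    // Delete the obsolete key and seed the new setting with a neutral default
    // (0 = do not force any expert weights onto the CPU).
    if (OLD_KEY in settings) delete settings[OLD_KEY]
    if (settings[NEW_KEY] === undefined) settings[NEW_KEY] = 0
  }

  return { ...store, version: 4 }
}
```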

* remove unused import

* feat: add cpu-moe boolean flag
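
The boolean flag maps to llama.cpp's --cpu-moe, which keeps the expert weights of every layer on the CPU. A sketch of how it could sit next to the numeric setting (again with hypothetical names):

```typescript
// Hypothetical helper: prefer the all-layers --cpu-moe switch when enabled,
// otherwise fall back to the per-layer-count --n-cpu-moe value.
function appendMoeOffloadArgs(args: string[], cpu_moe?: boolean, n_cpu_moe?: number): string[] {
  if (cpu_moe) {
    args.push('--cpu-moe')                       // all expert weights stay in system RAM
  } else if (n_cpu_moe !== undefined && n_cpu_moe > 0) {
    args.push('--n-cpu-moe', String(n_cpu_moe))  // only the first N layers
  }
  return args
}
```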

* chore: remove unused migration cont_batching

* chore: fix migration delete old key and add new one

* chore: fix migration

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-10-07 19:37:58 +05:30