Louis bc4fe52f8d
fix: llama.cpp integration model load and chat experience (#5823)
* fix: stopping generation should not stop running models

* fix: ensure backend ready before loading model

* fix: backend setting should not block onLoad
2025-07-21 09:29:26 +07:00