hiento09
1b74772d07
feat: download llamacpp backend fall back to CDN (#6361)
* feat: download llamacpp backend falls back to the CDN in case the GitHub API encounters errors (sketched below)
2025-09-04 09:39:16 +07:00
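A minimal sketch of the fallback this commit describes, assuming the GitHub releases API shape; both URLs below are illustrative placeholders, not the extension's real endpoints:

```ts
// Sketch: fetch the latest backend release from the GitHub API, falling
// back to a CDN mirror when the API errors or rate-limits.
const GITHUB_API =
  'https://api.github.com/repos/menloresearch/llama.cpp/releases/latest'; // assumed repo path
const CDN_FALLBACK = 'https://cdn.example.com/llamacpp/latest.json'; // hypothetical mirror

interface ReleaseInfo {
  tag_name: string;
  assets: { name: string; browser_download_url: string }[];
}

async function fetchLatestRelease(): Promise<ReleaseInfo> {
  try {
    const res = await fetch(GITHUB_API);
    // GitHub answers 403/429 when rate-limited; treat any non-OK status as failure.
    if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);
    return (await res.json()) as ReleaseInfo;
  } catch (err) {
    console.warn('GitHub API failed, falling back to CDN:', err);
    const res = await fetch(CDN_FALLBACK);
    if (!res.ok) throw new Error(`CDN fallback also failed: ${res.status}`);
    return (await res.json()) as ReleaseInfo;
  }
}
```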
Louis
3a36353b02
fix: backend variant selection
2025-08-21 10:54:35 +07:00
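The commit carries no body, but variant selection generally means ranking published builds against detected host capabilities. A sketch under assumed variant naming; real asset names follow llama.cpp's release scheme and may differ:

```ts
// Sketch: pick the best available backend variant for the host,
// preferring GPU builds, then the best CPU ISA, then plain CPU.
interface HostCaps { cuda?: 'cu11' | 'cu12'; vulkan: boolean; avx2: boolean; }

function selectVariant(available: string[], caps: HostCaps): string | undefined {
  const preference = [
    caps.cuda === 'cu12' ? 'cuda-cu12' : undefined,
    caps.cuda === 'cu11' ? 'cuda-cu11' : undefined,
    caps.vulkan ? 'vulkan' : undefined,
    caps.avx2 ? 'avx2' : undefined,
    'noavx',
  ].filter((v): v is string => v !== undefined);

  for (const want of preference) {
    const hit = available.find((name) => name.includes(want));
    if (hit) return hit;
  }
  return undefined; // no compatible variant published for this platform
}
```

For example, `selectVariant(['win-vulkan-x64', 'win-avx2-x64'], { vulkan: true, avx2: true })` would return the Vulkan build.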
Akarshan Biswas
5ad3d282af
fix: re-enable Vulkan backend on integrated GPUs with enough memory (#6215)
2025-08-18 17:31:01 +05:30
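Together with #6066 below, this yields a two-branch eligibility rule: discrete GPUs need 6 GB+ VRAM, and integrated GPUs are allowed again when their shared-memory budget is large enough. A sketch; the integrated-GPU cutoff is an assumption, not the extension's exact number:

```ts
// Sketch of the combined Vulkan eligibility check. The 6 GB discrete
// minimum comes from #6066; the integrated threshold is illustrative.
interface GpuInfo { integrated: boolean; memoryMiB: number; }

const DISCRETE_MIN_MIB = 6 * 1024;
const INTEGRATED_MIN_MIB = 6 * 1024; // assumption

function vulkanEligible(gpu: GpuInfo): boolean {
  return gpu.integrated
    ? gpu.memoryMiB >= INTEGRATED_MIN_MIB
    : gpu.memoryMiB >= DISCRETE_MIN_MIB;
}
```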
Dinh Long Nguyen
e1c8d98bf2
Backend Architecture Refactoring (#6094) (#6162)
* add llamacpp plugin
* Refactor llamacpp plugin
* add utils plugin
* remove utils folder
* add hardware implementation
* add utils folder + move utils function
* organize cargo files
* refactor utils src
* refactor util
* apply fmt
* fmt
* Update gguf + reformat
* add permission for gguf commands
* fix cargo test windows
* revert yarn lock
* remove cargo.lock for hardware plugin
* ignore cargo.lock file
* Fix hardware invoke + refactor hardware + refactor tests, constants
* use api wrapper in extension to invoke hardware call + api wrapper build integration (see the sketch after this list)
* add newline at EOF (per Akarshan)
* add vi mock for getSystemInfo
2025-08-15 08:59:01 +07:00
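For the API-wrapper bullet: the extension reaches the Rust hardware plugin through a thin wrapper over Tauri's invoke. A sketch assuming a Tauri v2 plugin command named `get_system_info` (suggested by the `getSystemInfo` mock above); the exact command name and payload shape are assumptions:

```ts
// Sketch: call the hardware plugin from the extension via Tauri v2's
// invoke, using the "plugin:<name>|<command>" convention.
import { invoke } from '@tauri-apps/api/core';

export interface SystemInfo {
  os: string;
  total_memory: number; // assumed fields
  gpus: { name: string; vram: number }[];
}

export async function getSystemInfo(): Promise<SystemInfo> {
  return invoke<SystemInfo>('plugin:hardware|get_system_info');
}
```

Wrapping the raw invoke behind `getSystemInfo` is also what makes the vi mock mentioned above straightforward to stub in tests.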
Akarshan Biswas
8d147c1774
fix: Add conditional Vulkan support check for better GPU compatibility (#6066)
Changes:
- Introduce conditional Vulkan support check for discrete GPUs with 6GB+ VRAM
Fixes: #6009
2025-08-06 07:20:44 +05:30
Louis
bf9315dbbe
fix: add missing cuda backend support
2025-08-04 15:54:21 +07:00
Akarshan Biswas
1eaec5e4f6
Fix: engine unable to find DLLs when running on Windows (#5863)
* Fix: Windows llamacpp not picking up DLLs from lib repo
* Fix lib path on Windows
* Add debug info about lib_path
* Normalize lib_path for Windows (see the sketch after this entry)
* fix Windows lib path normalization
* fix: missing cuda dll files on windows
* throw backend setup errors to UI
* Fix format
* Update extensions/llamacpp-extension/src/index.ts
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* feat: add logger to llamacpp-extension
* fix: platform check
---------
Co-authored-by: Louis <louis@jan.ai>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-07-22 20:05:24 +05:30
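The fix boils down to normalizing lib_path and making sure the spawned server process can resolve its DLLs. A sketch in Node-style terms; the extension's real sidecar launch goes through Tauri, and the paths are illustrative:

```ts
// Sketch: normalize the backend lib directory on Windows and prepend it
// to PATH so the spawned llama.cpp server can resolve its DLLs
// (CUDA runtime, ggml, etc.).
import path from 'node:path';
import { spawn } from 'node:child_process';

function spawnWithLibPath(exe: string, libPath: string, args: string[]) {
  // Windows tolerates "/" in paths, but native loaders prefer "\".
  const normalized = path.win32.normalize(libPath);
  const env = {
    ...process.env,
    // The Windows DLL search includes directories listed in PATH.
    PATH: `${normalized}${path.delimiter}${process.env.PATH ?? ''}`,
  };
  return spawn(exe, args, { env, cwd: normalized });
}
```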
Louis
19cb1c96e0
fix: llama.cpp backend download on Windows (#5813)
* fix: llama.cpp backend download on Windows
* test: add missing cases
* clean: linter
* fix: build
2025-07-20 16:58:09 +07:00
Louis
8ca507c01c
feat: proxy support for the new downloader (#5795)
* feat: proxy support for the new downloader
* test: remove outdated test
* ci: clean up
2025-07-17 23:10:21 +07:00
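A sketch of proxy-aware downloading, using undici's ProxyAgent as a stand-in for the app's own downloader; the settings shape is an assumption:

```ts
// Sketch: route downloads through a user-configured proxy when one is set.
import { fetch, ProxyAgent } from 'undici';

interface ProxySettings { url?: string; ignoreSSL?: boolean; } // assumed shape

async function download(url: string, proxy: ProxySettings): Promise<ArrayBuffer> {
  const dispatcher = proxy.url
    ? new ProxyAgent({
        uri: proxy.url,
        // Optionally skip TLS verification for intercepting corporate proxies.
        requestTls: proxy.ignoreSSL ? { rejectUnauthorized: false } : undefined,
      })
    : undefined;
  const res = await fetch(url, { dispatcher });
  if (!res.ok) throw new Error(`download failed: ${res.status}`);
  return res.arrayBuffer();
}
```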
Akarshan
37151ba926
Feat: Auto load and download default backend during first launch
2025-07-03 09:13:32 +05:30
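A sketch of the first-launch bootstrap; every helper here is a hypothetical stand-in for the extension's real functions:

```ts
// Sketch: on first launch, pick and fetch a default backend if none is
// installed yet. All three helpers below are illustrative stubs.
async function listInstalledBackends(): Promise<string[]> {
  return []; // stub: would scan the backends directory on disk
}
async function pickDefaultVariant(): Promise<string> {
  return 'avx2'; // stub: would consult detected hardware capabilities
}
async function downloadBackend(variant: string): Promise<void> {
  console.log(`downloading backend variant: ${variant}`); // stub
}

export async function ensureDefaultBackend(): Promise<void> {
  const installed = await listInstalledBackends();
  if (installed.length > 0) return; // nothing to do after first launch
  const variant = await pickDefaultVariant();
  await downloadBackend(variant);
}
```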
Thien Tran
525cc93d4a
fix system cudart detection on linux
2025-07-02 12:27:34 +07:00
Thien Tran
65d6f34878
check for system libraries
2025-07-02 12:27:17 +07:00
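For the two commits above: on Linux, a system-wide CUDA runtime can be detected by querying the dynamic linker cache. A sketch using standard ldconfig behavior; the check the extension actually performs may differ:

```ts
// Sketch: ask the dynamic linker cache whether libcudart is installed
// system-wide before deciding to bundle CUDA libs.
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const execFileAsync = promisify(execFile);

async function hasSystemCudart(): Promise<boolean> {
  try {
    // "ldconfig -p" lists every shared library known to the loader.
    const { stdout } = await execFileAsync('ldconfig', ['-p']);
    return stdout.split('\n').some((line) => line.includes('libcudart.so'));
  } catch {
    return false; // ldconfig missing, or not running on Linux
  }
}
```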
Thien Tran
622f4118c0
add placeholder for Windows and Linux ARM
2025-07-02 12:27:17 +07:00
Thien Tran
f7bcf43334
update folder structure. small refactoring
2025-07-02 12:27:16 +07:00
Thien Tran
494a47aaa5
fix download condition
2025-07-02 12:27:14 +07:00
Thien Tran
f32ae402d5
fix CUDA version URL
2025-07-02 12:27:14 +07:00
Thien Tran
27146eb5cc
fix feature parsing
2025-07-02 12:27:14 +07:00
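Feature parsing here means deriving capability tags from a release asset name. A sketch under an assumed naming scheme modeled on llama.cpp's release assets:

```ts
// Sketch: extract feature tags from an asset name such as
// "llama-b4937-bin-win-cuda-cu12.0-x64.zip". Both the tag list and the
// naming scheme are assumptions.
function parseFeatures(asset: string): Set<string> {
  const known = ['avx', 'avx2', 'avx512', 'vulkan', 'cuda', 'cu11', 'cu12'];
  return new Set(known.filter((f) => asset.includes(f)));
}
```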
Thien Tran
a75d13f42f
fix version compare
2025-07-02 12:27:14 +07:00
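A sketch of the numeric-aware version comparison this fix implies; the tag formats (llama.cpp's b-prefixed build tags, dotted semvers) are assumptions:

```ts
// Sketch: compare version tags like "b4937" or "0.1.2" numerically,
// segment by segment. Returns -1, 0, or 1.
function compareVersions(a: string, b: string): number {
  const parse = (v: string) => v.replace(/^[bv]/, '').split('.').map(Number);
  const [pa, pb] = [parse(a), parse(b)];
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const [x, y] = [pa[i] ?? 0, pb[i] ?? 0];
    if (x !== y) return x < y ? -1 : 1;
  }
  return 0;
}
```

Plain string comparison gets this wrong ("b999" > "b4937" lexically), which is the usual source of such bugs.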
Thien Tran
3490299f66
refactor get supported features. check driver version for cu11 and cu12
2025-07-02 12:27:13 +07:00
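The driver gate: cu12 builds require a newer NVIDIA driver than cu11. A self-contained sketch; the Linux minimums below follow NVIDIA's published compatibility table, but treat the exact cutoffs as illustrative:

```ts
// Sketch: map the installed NVIDIA driver version to the newest CUDA
// runtime variant it can host (Linux minimums shown).
const MIN_DRIVER = { cu11: [450, 80, 2], cu12: [525, 60, 13] } as const;

function atLeast(driver: string, min: readonly number[]): boolean {
  const parts = driver.split('.').map(Number);
  for (let i = 0; i < min.length; i++) {
    const d = parts[i] ?? 0;
    if (d !== min[i]) return d > min[i];
  }
  return true;
}

function supportedCudaVariant(driver: string): 'cu12' | 'cu11' | undefined {
  if (atLeast(driver, MIN_DRIVER.cu12)) return 'cu12';
  if (atLeast(driver, MIN_DRIVER.cu11)) return 'cu11';
  return undefined; // driver predates both bundled CUDA runtimes
}
```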
Thien Tran
fbfaaf43c5
download CUDA libs if needed
2025-07-02 12:27:13 +07:00
Thien Tran
40cd7e962a
feat: download backend for llama.cpp extension (#5123)
* wip
* update
* add download logic (see the sketch below)
* add decompress. support delete file
* download backend upon selecting setting
* add some logging and notes
* add note on race condition
* remove then catch
* default to none backend. only download if it's not installed
* merge version and backend. fetch version from GH
* restrict scope of output_dir
* add note on unpack
2025-07-02 12:27:13 +07:00
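A sketch of the flow this PR describes: skip if the backend is already installed, download the archive, decompress into a scoped output_dir, then delete the archive. `decompress` mirrors the npm package of that name; the directory layout is an assumption:

```ts
// Sketch: install a backend only if missing, per the "only download if
// it's not installed" bullet above (note the PR's own caveat about a
// possible race between the existence check and the write).
import fs from 'node:fs/promises';
import path from 'node:path';
import decompress from 'decompress';

async function installBackend(url: string, backendsDir: string, name: string) {
  const outputDir = path.join(backendsDir, name); // scope output_dir per backend
  try {
    await fs.access(outputDir);
    return; // already installed
  } catch { /* not installed yet; fall through to download */ }

  const res = await fetch(url);
  if (!res.ok) throw new Error(`download failed: ${res.status}`);
  const archive = path.join(backendsDir, `${name}.zip`);
  await fs.mkdir(backendsDir, { recursive: true });
  await fs.writeFile(archive, Buffer.from(await res.arrayBuffer()));

  await decompress(archive, outputDir); // unpack, then delete the archive
  await fs.rm(archive);
}
```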