* chore: add react developer tools to electron
* feat: add small convert modal
* feat: separate modals and add hugging face extension
* feat: fully implement hugging face converter
* fix: forgot to uncomment this...
* fix: typo
* feat: try hf-to-gguf script first and then use convert.py

  HF-to-GGUF supports some unusual models. Maybe using convert.py first would be better, but we can change the usage order later.

* fix: pre-install directory changed
* fix: sometimes exit code is undefined
* chore: download additional files for qwen
* fix: event handling changed
* chore: add one more necessary package
* feat: download gguf-py from llama.cpp
* fix: cannot interpret wildcards on GNU tar

---------

Co-authored-by: hiento09 <136591877+hiento09@users.noreply.github.com>
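Taken together, the converter commits describe a fallback order: run llama.cpp's convert-hf-to-gguf.py first and fall back to convert.py if it fails, while guarding against a child-process exit code that is sometimes undefined. A minimal TypeScript sketch of that logic, assuming the scripts live under scripts/ and are launched with a system python; the helper names and argument layout are illustrative, not the extension's actual code:

import { spawn } from "node:child_process";

// Run a Python script and resolve with its exit code.
function runPython(script: string, args: string[]): Promise<number> {
  return new Promise((resolve, reject) => {
    const child = spawn("python", [script, ...args], { stdio: "inherit" });
    child.on("error", reject);
    // The exit code can be null/undefined (e.g. when the process is killed
    // by a signal), so coerce that to a failure rather than a success.
    child.on("close", (code) => resolve(code ?? 1));
  });
}

// Hypothetical entry point: try the HF-specific converter first, since it
// supports some unusual models, then fall back to the generic convert.py.
async function convertToGguf(modelDir: string, outFile: string): Promise<void> {
  const args = [modelDir, "--outfile", outFile];
  if ((await runPython("scripts/convert-hf-to-gguf.py", args)) === 0) return;
  if ((await runPython("scripts/convert.py", args)) !== 0) {
    throw new Error(`GGUF conversion failed for ${modelDir}`);
  }
}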
@echo off

:: Read the pinned llama.cpp release tag from scripts/version.txt.
set /p LLAMA_CPP_VERSION=<./scripts/version.txt

:: Download the llama.cpp source tarball, extract only the two converter
:: scripts and the gguf-py package (listed as exact member paths, since
:: GNU tar does not interpret wildcards here), copy them into scripts/,
:: then remove the tarball and the unpacked source directory.
.\node_modules\.bin\download https://github.com/ggerganov/llama.cpp/archive/refs/tags/%LLAMA_CPP_VERSION%.tar.gz -o . --filename ./scripts/llama.cpp.tar.gz && tar -xzf .\scripts\llama.cpp.tar.gz "llama.cpp-%LLAMA_CPP_VERSION%/convert.py" "llama.cpp-%LLAMA_CPP_VERSION%/convert-hf-to-gguf.py" "llama.cpp-%LLAMA_CPP_VERSION%/gguf-py" && cpx "./llama.cpp-%LLAMA_CPP_VERSION%/**" "scripts" && rimraf "./scripts/llama.cpp.tar.gz" && rimraf "./llama.cpp-%LLAMA_CPP_VERSION%"
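Presumably this script runs as a pre-install step for the extension (the "fix: pre-install directory changed" commit suggests as much), leaving convert.py, convert-hf-to-gguf.py, and gguf-py under scripts/ for the converter to invoke at runtime; download, cpx, and rimraf are npm CLIs resolved from the project's node_modules.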