Merge pull request #167 from janhq/fix_local_model_doc

Fix local model doc
namvuong 2023-09-12 18:11:17 +07:00 committed by GitHub
commit cc39664ce4
2 changed files with 2 additions and 3 deletions


@@ -136,14 +136,13 @@ pip install 'llama-cpp-python[server]'
 We recommend that Llama2-7B (4-bit quantized) as a basic model to get started.
-You will need to download the models to the `models` folder.
+You will need to download the models to the `models` folder at root level.
 ```shell
-mkdir -p models
 # Downloads model (~4gb)
 # Download time depends on your internet connection and HuggingFace's bandwidth
 # In this part, please head over to any source contains `.gguf` format model - https://huggingface.co/models?search=gguf
-wget LLM_MODEL_URL=https://huggingface.co/TheBloke/CodeLlama-13B-GGUF/resolve/main/codellama-13b.Q3_K_L.gguf -P models
+wget https://huggingface.co/TheBloke/CodeLlama-13B-GGUF/resolve/main/codellama-13b.Q3_K_L.gguf -P models
 ```
 - Run the model in host machine
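For context on the "Run the model in host machine" step that follows this hunk: the `llama-cpp-python[server]` package installed above can serve a `.gguf` file over an OpenAI-compatible HTTP API. A minimal sketch, assuming the CodeLlama filename from the wget command above and a default host/port (adjust both to your setup):

```shell
# Serve the downloaded model with llama-cpp-python's built-in server.
# The model path assumes the wget command above; change it if you
# downloaded a different .gguf file.
python3 -m llama_cpp.server \
  --model models/codellama-13b.Q3_K_L.gguf \
  --host 0.0.0.0 \
  --port 8000

# Quick check that the server is up and lists the loaded model.
curl http://localhost:8000/v1/models
```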

models/.gitkeep Normal file