Merge pull request #167 from janhq/fix_local_model_doc
Fix local model doc
commit cc39664ce4
@@ -136,14 +136,13 @@ pip install 'llama-cpp-python[server]'
 We recommend Llama2-7B (4-bit quantized) as a basic model to get started.
 
-You will need to download the models to the `models` folder.
+You will need to download the models to the `models` folder at root level.
 
 ```shell
 mkdir -p models
 
 # Downloads model (~4gb)
 # Download time depends on your internet connection and HuggingFace's bandwidth
 # Head over to any source that hosts `.gguf`-format models - https://huggingface.co/models?search=gguf
-wget LLM_MODEL_URL=https://huggingface.co/TheBloke/CodeLlama-13B-GGUF/resolve/main/codellama-13b.Q3_K_L.gguf -P models
+wget https://huggingface.co/TheBloke/CodeLlama-13B-GGUF/resolve/main/codellama-13b.Q3_K_L.gguf -P models
 ```
 
 - Run the model on the host machine
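The download step in the diff above can be sanity-checked before serving the model: GGUF files begin with the 4-byte ASCII magic `GGUF`, so a truncated or wrong download is easy to spot. A minimal sketch (the path in the example is a placeholder for whatever file you downloaded, not part of the original doc):

```python
def is_gguf(path: str) -> bool:
    """Return True if the file at `path` starts with the GGUF magic bytes."""
    # GGUF model files begin with the 4-byte ASCII magic b"GGUF".
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# Example (placeholder path, matching the wget destination above):
# is_gguf("models/codellama-13b.Q3_K_L.gguf")
```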
models/.gitkeep (new file, 0 lines)