fix: Remove troubleshooting for no model

vuonghoainam 2023-08-31 15:33:43 +07:00
parent 465b45fb81
commit 1320668172
2 changed files with 1 addition and 8 deletions

README.md

@@ -82,6 +82,7 @@ docker compose up -d
| app-backend (hasura) | http://localhost:8080 | Admin credentials are set via the environment variable `HASURA_GRAPHQL_ADMIN_SECRET` in the file `conf/sample.env_app-backend` |
| web-client | http://localhost:3000 | Users sign up through Keycloak; the default user is defined in `conf/keycloak_conf/example-realm.json` with username `username` and password `password` |
| llm service | http://localhost:8000 | |
| sd service | http://localhost:8001 | |
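
After `docker compose up -d`, a quick sanity check along these lines confirms each service is listening (a sketch: the ports come from the table above, and `/healthz` plus the `x-hasura-admin-secret` header are standard Hasura conventions rather than anything specific to this repo):

```sh
# Reachability checks against the endpoints listed above.
curl -s -o /dev/null -w "hasura:     %{http_code}\n" http://localhost:8080/healthz
curl -s -o /dev/null -w "web-client: %{http_code}\n" http://localhost:3000
curl -s -o /dev/null -w "llm:        %{http_code}\n" http://localhost:8000
curl -s -o /dev/null -w "sd:         %{http_code}\n" http://localhost:8001

# Authenticated GraphQL call to Hasura, using the admin secret
# configured in conf/sample.env_app-backend.
curl -s http://localhost:8080/v1/graphql \
  -H "Content-Type: application/json" \
  -H "x-hasura-admin-secret: $HASURA_GRAPHQL_ADMIN_SECRET" \
  -d '{"query":"{ __typename }"}'
```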
## Usage
To get started with Jan, follow these steps:
@@ -122,14 +123,6 @@ You can access the live demo at https://cloud.jan.ai.
## Common Issues and Troubleshooting
**Error in the `jan-inference` service** ![download model error](images/download-model-error.png)
- Error: the model download is incomplete
- Solution (see the sketch after this list):
  - Manually download the LLM model from the URL given by the `MODEL_URL` environment variable in the `.env` file, typically https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/resolve/main/llama-2-7b-chat.ggmlv3.q4_1.bin
  - Copy the downloaded file `llama-2-7b-chat.ggmlv3.q4_1.bin` into the folder `jan-inference/llm/models`
  - Run `docker compose down`, then `docker compose up -d`, to restart the services
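
Taken together, those recovery steps amount to roughly this shell sequence (a sketch, assuming the default `MODEL_URL` and that it is run from the repository root):

```sh
# Fetch the model manually (the default MODEL_URL from the .env file).
wget https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/resolve/main/llama-2-7b-chat.ggmlv3.q4_1.bin

# Place it where the inference service expects models.
cp llama-2-7b-chat.ggmlv3.q4_1.bin jan-inference/llm/models/

# Restart the stack so the model is picked up.
docker compose down
docker compose up -d
```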
## Contributing
Contributions are welcome! Please read the [CONTRIBUTING.md](CONTRIBUTING.md) file for guidelines on how to contribute to this project.

images/download-model-error.png: deleted binary image (59 KiB, not shown).