fix: Remove troubleshooting for no model
parent 465b45fb81
commit 1320668172
@@ -82,6 +82,7 @@ docker compose up -d
| app-backend (hasura) | http://localhost:8080 | Admin credentials are set via the environment variable `HASURA_GRAPHQL_ADMIN_SECRET` in the file `conf/sample.env_app-backend` |
| web-client | http://localhost:3000 | Users sign up through Keycloak; the default user is created via `conf/keycloak_conf/example-realm.json` with username `username` and password `password` |
| llm service | http://localhost:8000 | |
| sd service | http://localhost:8001 | |
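
Once the stack is up, a quick way to confirm each service is reachable is to probe the ports in the table above. This is an illustrative check only, not part of the repository: `/healthz` is Hasura's built-in health endpoint, while the other services are simply probed at their root paths, which may need adjusting for your setup.

```bash
# Smoke test: print an HTTP status code for each service in the table above.
# Assumes the default ports; adjust if your compose configuration differs.
curl -s -o /dev/null -w "app-backend (hasura): %{http_code}\n" http://localhost:8080/healthz
curl -s -o /dev/null -w "web-client: %{http_code}\n" http://localhost:3000
curl -s -o /dev/null -w "llm service: %{http_code}\n" http://localhost:8000
curl -s -o /dev/null -w "sd service: %{http_code}\n" http://localhost:8001
```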
## Usage
To get started with Jan, follow these steps:
@@ -122,14 +123,6 @@ You can access the live demo at https://cloud.jan.ai.
## Common Issues and Troubleshooting
**Error in `jan-inference` service** 
- Error: download model incomplete
- Solution (the same steps are scripted in the sketch below):
  - Manually download the LLM model using the URL specified in the environment variable `MODEL_URL` within the `.env` file. The URL is typically https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/resolve/main/llama-2-7b-chat.ggmlv3.q4_1.bin
  - Copy the downloaded file `llama-2-7b-chat.ggmlv3.q4_1.bin` to the folder `jan-inference/llm/models`
  - Run `docker compose down` followed by `docker compose up -d` to restart the services.
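
A minimal shell sketch of the recovery steps above, assuming the `.env` file lives at the repository root and uses plain `KEY=VALUE`, shell-compatible syntax; the filename and target folder mirror the list, everything else is illustrative:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Load MODEL_URL from the repo-root .env file (assumed shell-compatible).
source .env

# Re-download the model; -c resumes a previously incomplete download.
wget -c "$MODEL_URL" -O llama-2-7b-chat.ggmlv3.q4_1.bin

# Place the model where the jan-inference service looks for it.
cp llama-2-7b-chat.ggmlv3.q4_1.bin jan-inference/llm/models/

# Restart the services so the model is picked up.
docker compose down
docker compose up -d
```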
## Contributing
Contributions are welcome! Please read the [CONTRIBUTING.md](CONTRIBUTING.md) file for guidelines on how to contribute to this project.