Update docs (#15)

* fix: not every llm stream chunked by each json data

* Docs: deploy docusaurus github page and update README.md (#14)

* add github action deploy docusaurus to github page

* README: update installation instruction

* Add sonarqube scanner github actions pipeline

---------

Co-authored-by: Hien To <>

---------

Co-authored-by: Louis <louis@jan.ai>
Commit 90aa721e7d by hiento09, 2023-08-30 11:19:25 +07:00, committed by GitHub (parent 1d016d5a9b)
8 changed files with 158 additions and 39 deletions

.github/workflows/deploy.yml (new file, +43)

@@ -0,0 +1,43 @@
name: Deploy to GitHub Pages
on:
push:
branches:
- main
# Review gh actions docs if you want to further define triggers, paths, etc
# https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#on
jobs:
deploy:
name: Deploy to GitHub Pages
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-node@v3
with:
node-version: 20
cache: 'npm'
cache-dependency-path: './docs/package-lock.json'
- name: Install dependencies
run: yarn install
working-directory: docs
- name: Build website
run: yarn build
working-directory: docs
# Popular action to deploy to GitHub Pages:
# Docs: https://github.com/peaceiris/actions-gh-pages#%EF%B8%8F-docusaurus
- name: Deploy to GitHub Pages
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
# Build output to publish to the `gh-pages` branch:
publish_dir: ./docs/build
# The following lines assign commit authorship to the official
# GH-Actions bot for deploys to `gh-pages` branch:
# https://github.com/actions/checkout/issues/13#issuecomment-724415212
# The GH actions bot is used by default if you didn't specify the two fields.
# You can swap them out with your own user credentials.
user_name: github-actions[bot]
user_email: 41898282+github-actions[bot]@users.noreply.github.com

.github/workflows/quality-gate.yml (new file, +41)

@@ -0,0 +1,41 @@
name: Linter & Sonarqube scanner
on:
push:
branches:
- dev
- main
pull_request:
branches:
- dev
- main
jobs:
test-lint:
runs-on: ubuntu-latest
steps:
- name: Getting the repo
uses: actions/checkout@v2
- name: create sonar properties file
run: |
echo "Branch Name ${GITHUB_REF#refs/heads/}"
echo -e "sonar.sources = ." > sonar-project.properties
echo -e "sonar.projectKey = ${{ secrets.PROJECT_KEY }}" >> sonar-project.properties
if [[ "${{ github.event_name }}" == "push" ]]; then
echo -e "sonar.branch.name = ${GITHUB_REF#refs/heads/}" >> sonar-project.properties
fi
- name: SonarQube Scan
uses: sonarsource/sonarqube-scan-action@master
env:
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
# Check the Quality Gate status.
- name: SonarQube Quality Gate check
id: sonarqube-quality-gate-check
uses: sonarsource/sonarqube-quality-gate-action@master
# Force to fail step after specific time.
timeout-minutes: 5
env:
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }} #OPTIONAL
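The `create sonar properties file` step above can be reproduced locally to inspect the file it generates. A minimal sketch, assuming placeholder values for `GITHUB_REF` and the project key (in CI these come from the runner environment and repository secrets):

```shell
# Simulate the workflow's property-file generation for a push to main.
# GITHUB_REF and PROJECT_KEY are placeholder values, not real secrets.
GITHUB_REF="refs/heads/main"
PROJECT_KEY="example-project-key"
echo "Branch Name ${GITHUB_REF#refs/heads/}"
echo "sonar.sources = ." > sonar-project.properties
echo "sonar.projectKey = ${PROJECT_KEY}" >> sonar-project.properties
# The branch name is only appended on push events, as in the workflow:
echo "sonar.branch.name = ${GITHUB_REF#refs/heads/}" >> sonar-project.properties
cat sonar-project.properties
```

The `${GITHUB_REF#refs/heads/}` expansion strips the `refs/heads/` prefix, so a push to `main` yields `sonar.branch.name = main`.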

.gitmodules (6 lines changed)

@@ -1,12 +1,12 @@
 [submodule "web-client"]
 	path = web-client
-	url = git@github.com:janhq/jan-web.git
+	url = ../jan-web.git
 [submodule "app-backend"]
 	path = app-backend
-	url = git@github.com:janhq/app-backend.git
+	url = ../app-backend.git
 [submodule "mobile-client"]
 	path = mobile-client
-	url = git@github.com:janhq/jan-react-native.git
+	url = ../jan-react-native.git
 [submodule "jan-inference/sd/sd_cpp"]
 	path = jan-inference/sd/sd_cpp
 	url = https://github.com/leejet/stable-diffusion.cpp
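Relative submodule URLs such as `../jan-web.git` resolve against the superproject's own remote, so they work for both HTTPS and SSH clones. A rough sketch of the resolution, using an example origin URL:

```shell
# Sketch: how git resolves a relative submodule URL against the
# superproject remote. The origin value below is an example.
origin="https://github.com/janhq/jan.git"
sub="../jan-web.git"           # relative URL from .gitmodules
base="${origin%/*}"            # drop the last path component -> .../janhq
echo "${base}/${sub#../}"      # resolved submodule URL
```

Someone who cloned over SSH (`git@github.com:janhq/jan.git`) would get the SSH form of each submodule URL instead, which is the point of the change in this hunk.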

README.md

@@ -1,44 +1,79 @@
# Jan
Jan is a free, source-available and [fair code licensed](https://faircode.io/) AI Inference Platform. We help enterprises, small businesses and hobbyists to self-host AI on their own infrastructure efficiently, to protect their data, lower costs, and put powerful AI capabilities in the hands of users.
## Features
- Web, Mobile and APIs
- LLMs and Generative Art models
- AI Catalog
- Model Installer
- User Management
-- Support for Nvidia, Apple Silicon, CPU architectures
+- Support for Apple Silicon, CPU architectures
## Installation
### Pre-Requisites
- Nvidia GPUs
- Apple Silicon
- CPU architectures (not recommended)
- **Supported Operating Systems**: This setup is tested and supported only on Linux, macOS with Docker Desktop (on Apple Silicon M1/M2, set the Docker platform with `export DOCKER_DEFAULT_PLATFORM=linux/amd64`), or Windows Subsystem for Linux (WSL) with Docker.
- **Docker**: Make sure you have Docker installed on your machine. You can install Docker by following the instructions [here](https://docs.docker.com/get-docker/).
- **Docker Compose**: Make sure you also have Docker Compose installed. If not, follow the instructions [here](https://docs.docker.com/compose/install/).
- **Clone the Repository**: Make sure to clone the repository containing the `docker-compose.yml` and pull the latest git submodules.
```bash
git clone https://github.com/janhq/jan.git
cd jan
# Pull latest submodule
git submodule update --init
```
- **Environment Variables**: You will need to set up several environment variables for services such as Keycloak and Postgres. You can place them in `.env` files in the respective folders as shown in the `docker-compose.yml`.
```bash
cp sample.env .env
```
| Service (Docker) | env file |
| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------- |
| Global env | `.env`, just run `cp sample.env .env` |
| Keycloak               | global `.env`; realm initialized from `conf/keycloak_conf/example-realm.json`                                                   |
| Keycloak PostgresDB    | global `.env`                                                                                                                   |
| jan-inference          | global `.env`                                                                                                                   |
| app-backend (hasura) | `conf/sample.env_app-backend` refer from [here](https://hasura.io/docs/latest/deployment/graphql-engine-flags/config-examples/) |
| app-backend PostgresDB | `conf/sample.env_app-backend-postgres` |
| web-client | `conf/sample.env_web-client` |
### Docker Compose
Jan offers a [Docker Compose](https://docs.docker.com/compose/) deployment that automates the setup process.

```shell
# Verify the Nvidia driver and container runtime are available
nvidia-smi
```

Run the following command to start all the services defined in the `docker-compose.yml`:

```shell
# Docker Compose up
docker compose up
```
| Service (Docker) | URL |
| ----------------- | -------------------------- |
| Jan Web | localhost:1337 |
| Jan API | localhost:1337/api |
| Jan API (Swagger) | localhost:1337/api/swagger |
| Jan Docs | localhost:1337/docs |
| Keycloak Admin | localhost:1337/users |
| Grafana Dashboard | localhost:1337/grafana |
To run in detached mode:
```shell
# Docker Compose up detached mode
docker compose up -d
```
| Service (Docker) | URL | Credential |
| -------------------- | --------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Keycloak | http://localhost:8088 | Admin credentials are set via the environment variables `KEYCLOAK_ADMIN` and `KEYCLOAK_ADMIN_PASSWORD` |
| app-backend (hasura) | http://localhost:8080 | Admin credentials are set via the environment variables `HASURA_GRAPHQL_ADMIN_SECRET` in file `conf/sample.env_app-backend` |
| web-client | http://localhost:3000 | Users are signed up to keycloak, default created user is set via `conf/keycloak_conf/example-realm.json` on keycloak with username: `username`, password: `password` |
| llm service | http://localhost:8000 | |
After all services are up and running, access the `web-client` at `http://localhost:3000`, log in with the default user (username: `username`, password: `password`), and test the LLM model in a `chatgpt` session.
## Developers
@@ -48,8 +83,8 @@ docker compose up
### Dependencies
-* [Keycloak Community](https://github.com/keycloak/keycloak) (Apache-2.0)
-* [KrakenD Community Edition](https://github.com/krakend/krakend-ce) (Apache-2.0)
+- [Keycloak Community](https://github.com/keycloak/keycloak) (Apache-2.0)
+- [Hasura Community Edition](https://github.com/hasura/graphql-engine) (Apache-2.0)
### Repo Structure
@@ -65,3 +100,16 @@ Jan is a monorepo that pulls in the following submodules
├── adrs # Architecture Decision Records
```
## Live Demo
You can access the live demo at https://cloud.jan.ai.
## Common Issues and Troubleshooting
**Error in `jan-inference` service** ![](images/download-model-error.png)
- Error: the model download is incomplete
- Solution:
  - Manually download the LLM model using the URL specified in the `MODEL_URL` environment variable within the `.env` file. The URL is typically https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/resolve/main/llama-2-7b-chat.ggmlv3.q4_1.bin
  - Copy the downloaded file `llama-2-7b-chat.ggmlv3.q4_1.bin` to the folder `jan-inference/llm/models`
  - Run `docker compose down` followed by `docker compose up -d` to restart the services.
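This workaround can be sketched as a small script. The URL mirrors the typical `MODEL_URL` value from `.env`; since the model file is several gigabytes, the sketch prints the download command rather than running it:

```shell
# Sketch of the manual model-download workaround. MODEL_URL mirrors the
# typical value from .env. The wget command is printed, not executed,
# because the model file is several gigabytes.
MODEL_URL="https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/resolve/main/llama-2-7b-chat.ggmlv3.q4_1.bin"
dest="jan-inference/llm/models/${MODEL_URL##*/}"   # keep the original filename
mkdir -p "$(dirname "$dest")"
echo "wget -O $dest $MODEL_URL"
```

`${MODEL_URL##*/}` keeps the filename component of the URL, so the file lands exactly where the `llm` service's `MODEL` path expects it.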

@@ -1 +1 @@
-Subproject commit 11d66335f19c6379566524742cf588959c49676f
+Subproject commit e305fb558dd2c4a5a3afc5cb709e132cae594f71

docker-compose.yml

@@ -1,10 +1,6 @@
# docker version
version: "3"
-# volumes:
-#   keycloak_postgres_data:
-#   db_data:
services:
keycloak:
image: quay.io/keycloak/keycloak:${KEYCLOAK_VERSION-22.0.0}
@@ -42,8 +38,7 @@ services:
PGPORT: ${POSTGRES_PORT:-5432}
healthcheck:
test: "exit 0"
-# volumes:
-#   - keycloak_postgres_data:/data/postgres
ports:
- ${POSTGRES_PORT:-5432}:${POSTGRES_PORT:-5432}
networks:
@@ -53,8 +48,7 @@ services:
postgres:
image: postgres:15
restart: always
-# volumes:
-#   - db_data:/var/lib/postgresql/data
env_file:
- conf/sample.env_app-backend-postgres
networks:
@@ -200,13 +194,6 @@ services:
# Specify the path to the model for the web application.
MODEL: /models/llama-2-7b-chat.ggmlv3.q4_1.bin
PYTHONUNBUFFERED: 1
-# Health check configuration
-# healthcheck:
-#   test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:8000"]
-#   interval: 30s
-#   timeout: 10s
-#   retries: 3
-#   start_period: 30s
# Restart policy configuration
restart: on-failure
# Specifies that this service should start only after wait-for-downloader has completed successfully.

docusaurus.config.js

@@ -21,7 +21,7 @@ const config = {
organizationName: 'janhq', // Usually your GitHub org/user name.
projectName: 'jan', // Usually your repo name.
-  onBrokenLinks: 'throw',
+  onBrokenLinks: 'ignore',
onBrokenMarkdownLinks: 'warn',
// Even if you don't use internalization, you can use this field to set useful

Binary file not shown (new file, 59 KiB)