# Jan - Self-Hosted AI Platform
Getting Started - Docs - Changelog - Bug reports - Discord
> ⚠️ **Jan is currently in Development**: Expect breaking changes and bugs!

Jan is a self-hosted AI platform. We help you run AI on your own hardware, giving you full control and protecting your enterprise's data and IP. Jan is free, source-available, and [fair-code](https://faircode.io/) licensed.

## Demo

👋 https://cloud.jan.ai

## Features

**Multiple AI Engines**

- [x] Self-hosted Llama2 and LLMs
- [x] Self-hosted StableDiffusion and ControlNet
- [ ] Connect to ChatGPT, Claude via API key (coming soon)
- [ ] 1-click installs for models (coming soon)

**Cross-Platform**

- [x] Web app
- [ ] Jan Mobile support for custom Jan server (in progress)
- [ ] Cloud deployments (coming soon)

**Organization Tools**

- [x] Multi-user support
- [ ] Audit and usage logs (coming soon)
- [ ] Compliance and audit (coming soon)
- [ ] PII and sensitive-data policy engine for 3rd-party AIs (coming soon)

**Hardware Support**

- [ ] Nvidia GPUs
- [ ] Apple Silicon (in progress)
- [ ] CPU support via llama.cpp (in progress)

## Documentation

👋 https://docs.jan.ai (work in progress)

## Installation

### Step 1: Install Docker

Jan is currently packaged as a Docker Compose application.

- Docker ([installation instructions](https://docs.docker.com/get-docker/))
- Docker Compose ([installation instructions](https://docs.docker.com/compose/install/))

### Step 2: Clone the Repo

```bash
git clone https://github.com/janhq/jan.git
cd jan

# Pull the latest submodules
git submodule update --init --recursive
```

### Step 3: Configure `.env`

We provide a sample `.env` file that you can use to get started:

```shell
cp sample.env .env
```

You will need to set the following `.env` variables:

```shell
# TODO: Document .env variables
```

### Step 4: Install Models

> Note: This step will change soon, with [Nitro](https://github.com/janhq/nitro) becoming its own library.

We recommend Llama2-7B (4-bit quantized) as a basic model to get started.
You will need to download the models to the `jan-inference/llms/models` folder.

```shell
cd jan-inference/llms/models

# Downloads the model (~4 GB)
# Download time depends on your internet connection and HuggingFace's bandwidth
wget https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/resolve/main/llama-2-7b-chat.ggmlv3.q4_1.bin
```

### Step 5: `docker compose up`

Jan uses Docker Compose to run all services:

```shell
docker compose up
docker compose up -d  # Detached mode
```

- (Backend)
- [Keycloak](https://www.keycloak.org/) (Identity)

The table below summarizes the services and their respective URLs and credentials.

| Service | Container Name | URL and Port          | Credentials                                    |
| ------- | -------------- | --------------------- | ---------------------------------------------- |
| Jan Web | jan-web-*      | http://localhost:3000 | Set in `conf/keycloak_conf/example-realm.json` |
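Once the stack is up, a quick way to confirm it is healthy is to list the running containers and probe the web app. This is a sketch, not part of the official setup; the port matches the table above, and the success message is just an illustrative label:

```shell
# Show each service's container and its current state
docker compose ps

# Probe the Jan Web UI; -f makes curl exit non-zero on HTTP errors,
# -sS suppresses the progress bar but still prints real errors
curl -fsS http://localhost:3000 > /dev/null && echo "Jan Web is up"
```

If the probe fails, `docker compose logs <service>` is the usual next step for finding out which container did not start cleanly.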