feat: enhance docs structure and add comprehensive Products section

- Added rich Products section with detailed platform coverage
- Enhanced all documentation sections with improved formatting
- Added new images and visual content throughout
- Reorganized Local Server docs into main docs flow
- Removed .vscode settings and added api-server-ui.png asset
This commit is contained in:
Ramon Perez 2025-07-29 21:01:27 +10:00
parent 1836863066
commit bd7022fb58
5 changed files with 81 additions and 127 deletions

View File

@@ -1,4 +0,0 @@
{
"recommendations": ["astro-build.astro-vscode"],
"unwantedRecommendations": []
}

View File

@@ -1,11 +0,0 @@
{
"version": "0.2.0",
"configurations": [
{
"command": "./node_modules/.bin/astro dev",
"name": "Development server",
"request": "launch",
"type": "node-terminal"
}
]
}

Binary file not shown.

After  |  Size: 562 KiB

View File

@@ -1,136 +1,107 @@
---
title: Server Setup
description: Learn how to run Jan's local API server.
title: Local API Server
description: Run Jan's OpenAI-compatible API server on your local machine for private, offline AI development.
keywords:
[
Jan,
Customizable Intelligence, LLM,
local AI,
privacy focus,
free and open source,
private and offline,
conversational AI,
no-subscription fee,
large language models,
Jan Extensions,
Extensions,
local AI server,
OpenAI API,
local API,
self-hosted AI,
offline AI,
privacy-focused AI,
API integration,
local LLM server,
llama.cpp,
CORS,
API key
]
---
import { Aside, Steps } from '@astrojs/starlight/components';
import { Aside } from '@astrojs/starlight/components';
import { Steps } from '@astrojs/starlight/components';
import { Tabs, TabItem } from '@astrojs/starlight/components';
Jan provides a built-in, OpenAI-compatible API server that runs entirely on your computer,
powered by `llama.cpp`. Use it as a drop-in replacement for cloud APIs to build private,
offline-capable AI applications.
![Jan's Local API Server Settings UI](../../../assets/api-server-ui.png)
Configure and start Jan's built-in API server.
## Quick Start
## Prerequisites
### 1. Start the Server
- Jan installed and running
- At least one AI model downloaded or configured (see [Model Management](/docs/manage-models))
1. Navigate to **Settings** > **Local API Server**.
2. Enter a custom **API Key** (e.g., `secret-key-123`). This is required for all requests.
3. Click **Start Server**.
For an overview of Jan Local Server, see the [Local Server introduction](./index).
The server is ready when the logs show `JAN API listening at http://127.0.0.1:1337`.
<br/>
![Local API Server](../../../assets/api-server.png)
<br/>
### 2. Load a model with cURL
## Start Server
```sh
curl http://127.0.0.1:1337/v1/models/start \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer secret-key-123" \
  -d '{"model": "gemma3:12b"}'
```
1. Navigate to **Local API Server**
2. Add an API Key (can be anything)
3. Configure settings (see [Server Configuration](#server-configuration) below)
4. Click **Start Server**
5. Wait for confirmation: `JAN API listening at: http://127.0.0.1:1337`
### 3. Test with cURL
Open a terminal and make a request. Replace `YOUR_MODEL_ID` with the ID of an available model in Jan.
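If you are not sure which IDs are available, you can ask the server directly. This assumes the server follows the usual OpenAI convention of exposing a models listing, and reuses the example key from above:

```bash
# List the models Jan exposes; the "id" values are what you pass as "model" in requests
curl http://127.0.0.1:1337/v1/models \
  -H "Authorization: Bearer secret-key-123"
```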
![Local API Server](../../../assets/api-server2.png)
## Test Server
1. Click **API Playground**
2. Select a model
3. Send a test request
## API Usage
```bash
curl http://127.0.0.1:1337/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer testing-something" \
-H "Authorization: Bearer secret-key-123" \
-d '{
"model": "jan-nano-gguf",
"messages": [
{
"role": "user",
"content": "Write a one-sentence bedtime story about a unicorn."
}
]
"model": "YOUR_MODEL_ID",
"messages": [{"role": "user", "content": "Tell me a joke."}]
}'
```
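Because the endpoint follows the OpenAI chat completions format, streaming should also work by adding `"stream": true` to the request body. This sketch reuses the placeholder model ID and example key from above:

```bash
# Stream tokens back as server-sent events instead of waiting for the full reply
curl http://127.0.0.1:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer secret-key-123" \
  -d '{
    "model": "YOUR_MODEL_ID",
    "stream": true,
    "messages": [{"role": "user", "content": "Tell me a joke."}]
  }'
```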
Include your API key in the `Authorization` header for all requests.
## Server Configuration
### Host Address Options
- **127.0.0.1 (Recommended)**:
- Only accessible from your computer
- Most secure option for personal use
- **0.0.0.0**:
- Makes server accessible from other devices on your network
- Use with caution and only when necessary
These settings control the network accessibility and basic behavior of your local server.
### Port Number
- Default: `1337`
- Can be any number between 1-65535
- Avoid common ports (80, 443, 3000, 8080) that might be used by other applications
### Server Host
The network address the server listens on.
- **`127.0.0.1`** (Default): The server is only accessible from your own computer. This is the most secure option for personal use.
- **`0.0.0.0`**: The server is accessible from other devices on your local network (e.g., your phone or another computer). Use this with caution.
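For example, with the host set to `0.0.0.0`, another device on the same network can reach the server at your machine's LAN IP. The address below is a placeholder; substitute your own IP, port, and key:

```bash
# Run this from another device on the same network; 192.168.1.50 stands in for your machine's LAN IP
curl http://192.168.1.50:1337/v1/models \
  -H "Authorization: Bearer secret-key-123"
```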
### Server Port
The port number for the API server.
- **`1337`** (Default): The port Jan listens on out of the box.
- You can change this to any available port number (e.g., `8000`).
### API Prefix
- Default: `/v1`
- Defines the base path for all API endpoints
- Example: http://127.0.0.1:1337/v1/chat/completions
The base path for all API endpoints.
- **`/v1`** (Default): Follows OpenAI's convention. The chat completions endpoint would be `http://127.0.0.1:1337/v1/chat/completions`.
- You can change this or leave it empty if desired.
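Whatever prefix you choose is simply prepended to every endpoint path. As an illustration, with a hypothetical prefix of `/api` instead of `/v1`, the same chat request would look like this:

```bash
# Identical request, but sent to a custom "/api" prefix rather than the default "/v1"
curl http://127.0.0.1:1337/api/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer secret-key-123" \
  -d '{"model": "YOUR_MODEL_ID", "messages": [{"role": "user", "content": "Hello"}]}'
```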
### API Key
A mandatory secret key to authenticate requests.
- You must set a key. It can be any string (e.g., `a-secure-password`).
- All API requests must include this key in the `Authorization: Bearer YOUR_API_KEY` header.
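A quick way to confirm that authentication is enforced is to send a request without the header; you should get an authentication error (typically a 401) instead of a normal response:

```bash
# -i prints the status line and headers so you can see the error code the server returns
curl -i http://127.0.0.1:1337/v1/models
```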
### Trusted Hosts
A comma-separated list of hostnames allowed to access the server. This provides an additional layer of security when the server is exposed on your network.
## Advanced Settings
### Cross-Origin Resource Sharing (CORS)
CORS controls which websites can access your API, which is important for web applications running in browsers.
**When to enable:**
- If you're building a web application that needs to access the API
- If you're using browser extensions
**When to leave disabled:**
- If you're only using the API from your local applications
- If you're concerned about security
- **(Enabled by default)** Allows web applications (like a custom web UI you are building) running on different domains to make requests to the API server.
- **Disable this** if your API will only be accessed by non-browser-based applications (e.g., scripts, command-line tools) for slightly improved security.
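If you want to see what a browser would run into, you can send a preflight request by hand. This is a rough check that assumes standard CORS preflight handling; the origin below is a placeholder for wherever your web app is served:

```bash
# Browsers send an OPTIONS preflight like this before a cross-origin POST;
# with CORS enabled, the response should include Access-Control-Allow-* headers
curl -i -X OPTIONS http://127.0.0.1:1337/v1/chat/completions \
  -H "Origin: http://localhost:3000" \
  -H "Access-Control-Request-Method: POST" \
  -H "Access-Control-Request-Headers: authorization, content-type"
```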
### Verbose Server Logs
Enable to show:
- Detailed information about each API request
- Error messages and debugging information
- Server status updates
- **(Enabled by default)** Provides detailed, real-time logs of all incoming requests, responses, and server activity.
- This is extremely useful for debugging application behavior and understanding exactly what is being sent to the models.
## Troubleshooting
<Aside type="note">
Enable **Verbose Server Logs** for detailed error messages.
Ensure **Verbose Server Logs** are enabled to get detailed error messages in the "Server Logs" view.
</Aside>
### Common Server Issues
- Server not running
- Model not loaded in Jan
- Port already in use
- Check admin/sudo rights
- API endpoint doesn't match server settings
- Model name in request doesn't match Jan model name
- Invalid JSON format
- Firewall blocking connection
- Missing API key in request headers
### CORS Errors
- Enable CORS in server settings
- Check request origin
- Verify URL matches server address
- Check browser console
### Performance Issues
- Monitor CPU, RAM, GPU usage
- Reduce context length or GPU layers
- Close other resource-intensive applications
- **Connection Refused:** The server is not running, or your application is pointing to the wrong host or port.
- **401 Unauthorized:** Your API Key is missing from the `Authorization` header or is incorrect.
- **404 Not Found:**
- The `model` ID in your request body does not match an available model in Jan.
- Your request URL is incorrect (check the API Prefix).
- **CORS Error (in a web browser):** Ensure the CORS toggle is enabled in Jan's settings.
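When you hit `Connection Refused` or suspect the port is already taken, it helps to check whether anything is listening on the configured port. This uses standard macOS/Linux tooling; adjust the port number if you changed it:

```bash
# Lists any process bound to TCP port 1337; no output means nothing is listening there
lsof -nP -iTCP:1337 -sTCP:LISTEN
```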

View File

@@ -8,19 +8,17 @@ import { Aside, Card, CardGrid } from '@astrojs/starlight/components';
**Jan's Goal is**
> to build a superintelligence that you can self-host and use locally on your own devices.
> **to build a superintelligence that you can self-host and use locally on your own devices.**
**We know it's hard**
We know it's hard, but we believe this will be possible in the next decade through a combination of
models, applications, and tools. For this we are...
> but we believe this will be possible in the next decade.
Jan is moving from a local AI application to a complete full-stack AI solution that you can self-host. This includes models, applications, and tools that delight users and help them solve their problems.
> **building a solution that ties all of these together seamlessly, so that users, regardless of their technical
background, can use Jan the same way they use other applications while still owning them.**
## What We're Building
**Jan Factory (or Agent)** = Jan Models + Jan Application + Jan Tools
**Jan Ecosystem** = Jan Models + Jan Application + Jan Tools
Unlike other AI assistants that handle specific tasks with a single model, or offer many models spread across a myriad of solutions, Jan provides:
- Its own specialised models that are optimised for specific tasks like web search, creative writing, and translation