Updates Guide Using the Local Server

This commit is contained in:
SamPatt 2024-02-04 23:20:21 -05:00
parent 480a1d9cc1
commit 20e7d3071a
9 changed files with 173 additions and 33 deletions


@@ -1,33 +0,0 @@
---
title: Connect to Server
description: Connect to Jan's built-in API server.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
]
---
:::warning
This page is under construction.
:::
Jan ships with a built-in API server that can be used as a drop-in, local replacement for OpenAI's API.
Jan runs on port `1337` by default, but this can (soon) be changed in Settings.
1. Go to Settings > Advanced > Enable API Server
2. Go to http://localhost:1337 for the API docs.
3. In terminal, simply CURL...
Note: Some UI states may be broken when in Server Mode.


@@ -0,0 +1,70 @@
---
title: Start Local Server
description: How to run Jan's built-in API server.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
]
---
Jan ships with a built-in API server that can be used as a drop-in, local replacement for OpenAI's API. You can start the server by following the steps below.
## Open Local API Server View
Navigate by clicking the `Local API Server` icon on the left side of your screen, as shown in the image below.
<br></br>
![local-api-view](./assets/local-api-view.png)
## Choose your model
On the top right of your screen under `Model Settings`, set the LLM that your local server will be running. You can choose from any of the models already installed, or pick a new model by clicking `Explore the Hub`.
<br></br>
![choose-model](./assets/choose-model.png)
## Set your Server Options
On the left side of your screen you can set custom server options.
<br></br>
![server-settings](./assets/server-settings.png)
### Local Server Address
By default, Jan will be accessible only on localhost `127.0.0.1`. This means a local server can only be accessed on the same machine where the server is being run.
You can make the local server more accessible by clicking on the address and choosing `0.0.0.0` instead, which allows the server to be accessed from other devices on the local network. This is less secure than choosing localhost, and should be done with caution.
### Port
Jan runs on port `1337` by default, but this can be changed.
### CORS
Cross-Origin Resource Sharing (CORS) manages resource access on the local server from external domains. Enabled for security by default, it can be disabled if needed.
### Verbose Server Logs
The center of the screen displays the server logs while the local server runs. Enabling this option includes extensive detail about server activity in those logs.
## Start Server
Click the `Start Server` button on the top left of your screen. You will see the server log display a message such as `Server listening at http://127.0.0.1:1337`, and the `Start Server` button will change to a red `Stop Server` button.
<br></br>
![running-server](./assets/running-server.png)
Your server is now running. Next, learn how to use your local API server.
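Once the server is listening, any OpenAI-compatible client can talk to it. As a minimal sketch using only Python's standard library, the helper below assembles a chat completion request and posts it to the default address; the model name `tinyllama-1.1b` is just the example model used later in this guide, so substitute whichever model you loaded.

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:1337"  # Jan's default address and port


def build_chat_request(prompt: str, model: str = "tinyllama-1.1b") -> dict:
    """Assemble an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "stream": False,
    }


def send_chat_request(prompt: str) -> dict:
    """POST the request to the local server (requires the server to be running)."""
    body = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json", "accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

If you changed the port or bound the server to `0.0.0.0` in the settings above, adjust `BASE_URL` accordingly.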


@@ -0,0 +1,103 @@
---
title: Using Local Server
description: How to use Jan's built-in API server.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
]
---
Jan's built-in API server is compatible with [OpenAI's API](https://platform.openai.com/docs/api-reference) and can be used as a drop-in, local replacement. Follow these steps to use the API server.
## Open the API Reference
Jan contains a comprehensive API reference. This reference displays all the available API endpoints, gives you example requests and responses, and allows you to execute them in the browser.
On the top left of your screen, below the red `Stop Server` button, is the blue `API Reference` link. Clicking it opens the reference in your browser.
<br></br>
![api-reference](./assets/api-reference.png)
Scroll through the various available endpoints to learn what options are available.
### Chat
In the Chat section of the API reference, you will see an example JSON request body.
<br></br>
![chat-example](./assets/chat-example.png)
With your local server running, you can click the `Try it out` button on the top left, then the blue `Execute` button below the JSON. The browser will send the example request to your server and display the response body below.
Use the API endpoints and the example request and response bodies as models for your own application.
### Curl request example
Here's an example curl request with a local server running `tinyllama-1.1b`:
<br></br>
```bash
curl -X 'POST' \
'http://localhost:1337/v1/chat/completions' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"messages": [
{
"content": "You are a helpful assistant.",
"role": "system"
},
{
"content": "Hello!",
"role": "user"
}
],
"model": "tinyllama-1.1b",
"stream": true,
"max_tokens": 2048,
"stop": [
"hello"
],
"frequency_penalty": 0,
"presence_penalty": 0,
"temperature": 0.7,
"top_p": 0.95
}'
```
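Because the request above sets `"stream": true`, the server answers with a stream of `data: {...}` lines rather than a single JSON object, following OpenAI's streaming convention. The sketch below shows one way to collect the content tokens from such lines; the sample chunks are illustrative, not captured from a real server.

```python
import json


def extract_stream_content(sse_lines):
    """Collect content deltas from OpenAI-style 'data: {...}' stream lines."""
    parts = []
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines
        data = line[len("data:"):].strip()
        if data == "[DONE]":  # sentinel marking the end of the stream
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)


# Illustrative chunks in the OpenAI streaming format:
sample = [
    'data: {"choices": [{"delta": {"content": "Hello"}, "index": 0}]}',
    'data: {"choices": [{"delta": {"content": " there!"}, "index": 0}]}',
    "data: [DONE]",
]
```

Set `"stream": false` in the request if you would rather receive the whole response as one JSON object, as shown in the next section.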
### Response body example
```json
{
"choices": [
{
"finish_reason": null,
"index": 0,
"message": {
"content": "Hello user. What can I help you with?",
"role": "assistant"
}
}
],
"created": 1700193928,
"id": "ebwd2niJvJB1Q2Whyvkz",
"model": "_",
"object": "chat.completion",
"system_fingerprint": "_",
"usage": {
"completion_tokens": 500,
"prompt_tokens": 33,
"total_tokens": 533
}
}
```
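For a non-streaming request, the whole reply arrives as one JSON object like the example above. A short Python sketch of pulling out the assistant's message and the token usage from that exact response body:

```python
import json

# The example response body from above, as returned by the server
response = json.loads("""
{
  "choices": [
    {
      "finish_reason": null,
      "index": 0,
      "message": {
        "content": "Hello user. What can I help you with?",
        "role": "assistant"
      }
    }
  ],
  "created": 1700193928,
  "id": "ebwd2niJvJB1Q2Whyvkz",
  "model": "_",
  "object": "chat.completion",
  "system_fingerprint": "_",
  "usage": {
    "completion_tokens": 500,
    "prompt_tokens": 33,
    "total_tokens": 533
  }
}
""")

reply = response["choices"][0]["message"]["content"]
total_tokens = response["usage"]["total_tokens"]
```

The same field paths work for any OpenAI-compatible chat completion response, so code written against Jan's local server should also work against OpenAI's hosted API.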
