diff --git a/docs/docs/guides/05-using-server/01-server.md b/docs/docs/guides/05-using-server/01-server.md deleted file mode 100644 index 952b7399f..000000000 --- a/docs/docs/guides/05-using-server/01-server.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -title: Connect to Server -description: Connect to Jan's built-in API server. -keywords: - [ - Jan AI, - Jan, - ChatGPT alternative, - local AI, - private AI, - conversational AI, - no-subscription fee, - large language model, - ] ---- - -:::warning - -This page is under construction. - -::: - -Jan ships with a built-in API server, that can be used as a drop-in, local replacement for OpenAI's API. - -Jan runs on port `1337` by default, but this can (soon) be changed in Settings. - -1. Go to Settings > Advanced > Enable API Server - -2. Go to http://localhost:1337 for the API docs. - -3. In terminal, simply CURL... - -Note: Some UI states may be broken when in Server Mode. diff --git a/docs/docs/guides/05-using-server/01-start-server.md b/docs/docs/guides/05-using-server/01-start-server.md new file mode 100644 index 000000000..4b94b53a2 --- /dev/null +++ b/docs/docs/guides/05-using-server/01-start-server.md @@ -0,0 +1,70 @@ +--- +title: Start Local Server +description: How to run Jan's built-in API server. +keywords: + [ + Jan AI, + Jan, + ChatGPT alternative, + local AI, + private AI, + conversational AI, + no-subscription fee, + large language model, + ] +--- + +Jan ships with a built-in API server that can be used as a drop-in, local replacement for OpenAI's API. You can run your server by following these simple steps. + +## Open Local API Server View + +Navigate by clicking the `Local API Server` icon on the left side of your screen, as shown in the image below. + +

+ +![local-api-view](./assets/local-api-view.png) + +## Choose your model + +On the top right of your screen under `Model Settings`, set the LLM that your local server will be running. You can choose from any of the models already installed, or pick a new model by clicking `Explore the Hub`. + +

+ +![choose-model](./assets/choose-model.png) + +## Set your Server Options + +On the left side of your screen you can set custom server options. + +

+ +![server-settings](./assets/server-settings.png) + +### Local Server Address + +By default, Jan will be accessible only on localhost `127.0.0.1`. This means a local server can only be accessed on the same machine where the server is being run. + +You can make the local server more accessible by clicking on the address and choosing `0.0.0.0` instead, which allows the server to be accessed from other devices on the local network. This is less secure than choosing localhost, and should be done with caution. + +### Port + +Jan runs on port `1337` by default, but this can be changed. + +### CORS + +Cross-Origin Resource Sharing (CORS) manages resource access on the local server from external domains. Enabled for security by default, it can be disabled if needed. + +### Verbose Server Logs + +The center of the screen displays the server logs as the local server runs. This option provides extensive details about server activities. + +## Start Server + +Click the `Start Server` button on the top left of your screen. You will see the server log display a message such as `Server listening at http://127.0.0.1:1337`, and the `Start Server` button will change to a red `Stop Server` button. + +

+ +![running-server](./assets/running-server.png) + +Your server is now running. Next, learn how to use your local API server. + diff --git a/docs/docs/guides/05-using-server/02-using-server.md b/docs/docs/guides/05-using-server/02-using-server.md new file mode 100644 index 000000000..2ac0480fa --- /dev/null +++ b/docs/docs/guides/05-using-server/02-using-server.md @@ -0,0 +1,103 @@ +--- +title: Using Local Server +description: How to use Jan's built-in API server. +keywords: + [ + Jan AI, + Jan, + ChatGPT alternative, + local AI, + private AI, + conversational AI, + no-subscription fee, + large language model, + ] +--- + +Jan's built-in API server is compatible with [OpenAI's API](https://platform.openai.com/docs/api-reference) and can be used as a drop-in, local replacement. Follow these steps to use the API server. + +## Open the API Reference + +Jan contains a comprehensive API reference. This reference displays all the available API endpoints, gives you example requests and responses, and allows you to execute them in your browser. + +On the top left of your screen, below the red `Stop Server` button, is the blue `API Reference` button. Clicking it will open the reference in your browser. + +

+ +![api-reference](./assets/api-reference.png) + +Scroll through the various endpoints to learn what options are available. + +### Chat + +In the Chat section of the API reference, you will see an example JSON request body. + +

+ +![chat-example](./assets/chat-example.png) + +With your local server running, you can click the `Try it out` button on the top left, then the blue `Execute` button below the JSON. The browser will send the example request to your server and display the response body below. + +Use these API endpoints and the example request and response bodies as templates for your own application; the Python sketch at the end of this guide shows the same request made from code. + +### Curl request example + +Here's an example curl request with a local server running `tinyllama-1.1b`: + +

+ +```bash +curl -X 'POST' \ + 'http://localhost:1337/v1/chat/completions' \ + -H 'accept: application/json' \ + -H 'Content-Type: application/json' \ + -d '{ + "messages": [ + { + "content": "You are a helpful assistant.", + "role": "system" + }, + { + "content": "Hello!", + "role": "user" + } + ], + "model": "tinyllama-1.1b", + "stream": true, + "max_tokens": 2048, + "stop": [ + "hello" + ], + "frequency_penalty": 0, + "presence_penalty": 0, + "temperature": 0.7, + "top_p": 0.95 +}' +``` + +### Response body example + +```json +{ + "choices": [ + { + "finish_reason": null, + "index": 0, + "message": { + "content": "Hello user. What can I help you with?", + "role": "assistant" + } + } + ], + "created": 1700193928, + "id": "ebwd2niJvJB1Q2Whyvkz", + "model": "_", + "object": "chat.completion", + "system_fingerprint": "_", + "usage": { + "completion_tokens": 500, + "prompt_tokens": 33, + "total_tokens": 533 + } +} +``` \ No newline at end of file diff --git a/docs/docs/guides/05-using-server/assets/api-reference.png b/docs/docs/guides/05-using-server/assets/api-reference.png new file mode 100644 index 000000000..8cfef431c Binary files /dev/null and b/docs/docs/guides/05-using-server/assets/api-reference.png differ diff --git a/docs/docs/guides/05-using-server/assets/chat-example.png b/docs/docs/guides/05-using-server/assets/chat-example.png new file mode 100644 index 000000000..4227fdbf6 Binary files /dev/null and b/docs/docs/guides/05-using-server/assets/chat-example.png differ diff --git a/docs/docs/guides/05-using-server/assets/choose-model.png b/docs/docs/guides/05-using-server/assets/choose-model.png new file mode 100644 index 000000000..08f468264 Binary files /dev/null and b/docs/docs/guides/05-using-server/assets/choose-model.png differ diff --git a/docs/docs/guides/05-using-server/assets/local-api-view.png b/docs/docs/guides/05-using-server/assets/local-api-view.png new file mode 100644 index 000000000..6d5b13e6f Binary files /dev/null and b/docs/docs/guides/05-using-server/assets/local-api-view.png differ diff --git a/docs/docs/guides/05-using-server/assets/running-server.png b/docs/docs/guides/05-using-server/assets/running-server.png new file mode 100644 index 000000000..078806d88 Binary files /dev/null and b/docs/docs/guides/05-using-server/assets/running-server.png differ diff --git a/docs/docs/guides/05-using-server/assets/server-settings.png b/docs/docs/guides/05-using-server/assets/server-settings.png new file mode 100644 index 000000000..bfb41332b Binary files /dev/null and b/docs/docs/guides/05-using-server/assets/server-settings.png differ
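### Python request example

The curl and response body examples above can be adapted to any language or HTTP client. The sketch below shows the same request made from application code with the official `openai` Python package (v1 or later) pointed at the local server. The `base_url`, the placeholder `api_key` (assumed to be ignored by the local server), and the `tinyllama-1.1b` model name are taken from this guide's defaults and are not required values; adjust them for your own setup.

```python
# Minimal sketch: the same chat completion request as the curl example,
# sent from Python through the `openai` client (v1+).
# Assumptions: the local server is running on the default http://localhost:1337,
# the tinyllama-1.1b model is loaded, and the api_key value is not checked
# by the local server (it is only a placeholder here).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1337/v1",  # Jan's local, OpenAI-compatible server
    api_key="not-needed",                 # placeholder; assumed to be ignored locally
)

response = client.chat.completions.create(
    model="tinyllama-1.1b",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    max_tokens=2048,
    temperature=0.7,
    top_p=0.95,
)

# Print the assistant's reply from the first choice
print(response.choices[0].message.content)
```

Because the server follows the OpenAI API shape, other OpenAI-compatible client libraries should work the same way; the curl example above shows that the server also accepts `"stream": true` if you prefer streamed responses.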