Merge pull request #599 from janhq/documentation

docs: new docs
This commit is contained in:
Daniel 2023-11-13 12:12:36 +08:00 committed by GitHub
commit 14f93c5bf3
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
55 changed files with 2317 additions and 3182 deletions

View File

@ -0,0 +1,3 @@
---
title: Thread
---

3
docs/docs/api/chat.md Normal file
View File

@ -0,0 +1,3 @@
---
title: Chat
---

3
docs/docs/api/files.md Normal file
View File

@ -0,0 +1,3 @@
---
title: File
---

3
docs/docs/api/message.md Normal file
View File

@ -0,0 +1,3 @@
---
title: Message
---

3
docs/docs/api/model.md Normal file
View File

@ -0,0 +1,3 @@
---
title: Model
---

View File

@ -0,0 +1,4 @@
---
title: Overview
position: 1
---

3
docs/docs/api/thread.md Normal file
View File

@ -0,0 +1,3 @@
---
title: Thread
---

View File

@ -1,202 +0,0 @@
---
title: Anatomy of 👋Jan
---
This page explains the architecture of [Jan](https://jan.ai/).
## Synchronous architecture
![Synchronous architecture](../img/arch-async.drawio.png)
### Overview
The architecture of the Jan application is designed to provide a seamless experience for the users while also being modular and extensible.
### BackEnd and FrontEnd
**BackEnd:**
- The BackEnd serves as the brain of the application. It processes the information, performs computations, and manages the main logic of the system.
:::info
This is like an [OS (Operating System)](https://en.wikipedia.org/wiki/Operating_system) in the computer.
:::
**FrontEnd:**
- The FrontEnd is the interface that users interact with. It takes user inputs, displays results, and communicates with the BackEnd through Inter-process communication bi-directionally.
:::info
This is like [VSCode](https://code.visualstudio.com/) application
:::
**Inter-process communication:**
- A mechanism that allows the BackEnd and FrontEnd to communicate in real time. It ensures that data flows smoothly between the two, facilitating rapid response and dynamic updates.
### Plugins and Apps
**Plugins:**
In Jan, Plugins contain all the core features. They can be Core Plugins or [Nitro](https://github.com/janhq/nitro) plugins.
- **Load:** This denotes the initialization and activation of a plugin when the application starts or when a user activates it.
- **Implement:** This is where the main functionality of the plugin resides. Developers code the desired features and functionalities here. This is a "call to action" feature.
- **Dispose:** After the plugin's task is completed or deactivated, this function ensures that it releases any resources it uses, providing optimal performance and preventing memory leaks.
:::info
This is like [Extensions](https://marketplace.visualstudio.com/VSCode) in VSCode.
:::
**Apps:**
Apps are similar to Plugins; however, Apps can be built by users for their own purposes.
> For example, users can build a `Personal Document RAG App` to chat with specific documents or articles.
With **Plugins and Apps**, users can build a broader ecosystem surrounding Jan.
## Asynchronous architecture
![Asynchronous architecture](../img/arch-async.drawio.png)
### Overview
The asynchronous architecture allows Jan to handle multiple operations simultaneously without waiting for one to complete before starting another. This results in a more efficient and responsive user experience. The provided diagram breaks down the primary components and their interactions.
### Components
#### Results
After processing certain tasks or upon specific triggers, the backend can broadcast the results. This could be a processed data set, a calculated result, or any other output that needs to be shared.
#### Events
Similar to broadcasting results but oriented explicitly towards events. This could include user actions, system events, or notifications that other components should be aware of.
- **Notify:**
Upon the conclusion of specific tasks or when particular triggers are activated, the system uses the Notify action to send out notifications from the **Results**. The Notify action is the conduit through which results are broadcasted asynchronously, whether they concern task completions, errors, updates, or any processed data set.
- **Listen:**
Here, the BackEnd actively waits for incoming data or events. It is geared towards capturing inputs from users or updates from plugins.
#### Plugins
These are modular components or extensions designed to enhance the application's functionalities. Each plugin possesses a "Listen" action, enabling it to stand by for requests emanating from user inputs.
### Flow
1. Input is provided by the user or an external source.
2. This input is broadcasted as an event into the **Broadcast event**.
3. The **BackEnd** processes the event. Depending on the event, it might interact with one or several Plugins.
4. Once processed, **Broadcast result** can be sent out asynchronously through multiple notifications via Notify action.
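The four-step flow above can be illustrated with a simple event bus. This is a self-contained sketch of the broadcast pattern, not Jan's actual implementation:

```typescript
import { EventEmitter } from "node:events";

// Illustrative sketch of the broadcast flow: input -> event -> backend -> result.
const bus = new EventEmitter();
const results: string[] = [];

// A plugin listens for broadcast events (step 3).
bus.on("user-input", (text: string) => {
  const processed = text.toUpperCase(); // stand-in for real processing
  bus.emit("result", processed); // step 4: broadcast the result
});

// A frontend component is notified of results asynchronously.
bus.on("result", (r: string) => results.push(r));

// Steps 1-2: user input is broadcast as an event.
bus.emit("user-input", "hello jan");
```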
## Jan workflow
![Workflow](../img/arch-flow.drawio.png)
### Overview
The architecture of the Jan desktop application is structured into four primary modules: "Prompt Template," "Language Model," "Output Parser," and "Apps." Let's break down each module and understand its components.
### Prompt Template
This is where predefined templates are stored. It sets the format and structure for user interactions. It contains:
- **Character's definition:**
Definitions of various characters or entities that may be interacted with or invoked during user requests (e.g., name, personality, and communication style).
- **Model's definition:**
Definitions related to the different language models (e.g., objectives, capabilities, and constraints)
- **Examples:**
Sample inputs and outputs for guidance. If given good examples, LLM could enable basic reasoning or planning skills.
- **Input:**
The actual user query or request that is being processed.
### Large Language Model
This processes the input provided.
- **Local models:**
These are the downloaded models that reside within the system. They can process requests without the need to connect to external resources.
- **OpenAI:**
This connects the application to the OpenAI API, allowing it to utilize powerful models like GPT-3.5 and GPT-4.
:::info
To use OpenAI models, you must have an OpenAI account and a secret key. You can get your [OpenAI API key](https://platform.openai.com/account/api-keys) here.
:::
- **Custom Agents:**
These are user-defined or third-party models that can be integrated into the Jan system for specific tasks.
### Output Parser
Language models produce plain text, but applications often need more structured data. Output parsers achieve this.
- **Parser:**
This component ensures that the output conforms to the desired structure and format, removing unwanted information or errors.
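As an illustration of the idea, the sketch below extracts a JSON object from raw model text and fails clearly when the output does not conform; the behavior of Jan's actual parser is not specified here:

```typescript
// Hypothetical output parser: pull the first JSON object out of raw model
// text, discarding surrounding chatter, and fail if none is found.
function parseModelOutput(raw: string): Record<string, unknown> {
  const match = raw.match(/\{[\s\S]*\}/); // outermost {...} span
  if (!match) {
    throw new Error("model output contained no JSON object");
  }
  return JSON.parse(match[0]);
}
```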
### Apps
This represents applications or extensions that can be integrated with Jan.
- **Characters:** Characters or entities that can be utilized within the applications.
- **Models:** Different Large Language Models, Large Multimodal Models, and Stable Diffusion models that the apps might use.
- **RAG:** Represents a "Retrieval Augmented Generation" functionality, which helps in fetching relevant data and generating responses based on it.
## Jan Platform
![Platform](../img/arch-connection.drawio.png)
### Overview
The architecture of Jan can be thought of as a layered system comprising the FrontEnd, Middleware, and BackEnd. Each layer has distinct components and responsibilities, ensuring a modular and scalable approach.
#### FrontEnd
The **FrontEnd** is the visible part of Jan that interacts directly with the user.
- **Controller:** This is the main control unit of the FrontEnd. It processes the user's inputs, handles UI events, and communicates with other layers to fetch or send data.
- **Apps:** This represents applications or extensions that can be integrated with Jan.
- **Execute Requests** act as the initial triggers to initiate processes within the application.
#### Middleware
It's a bridge between the FrontEnd and BackEnd. It's responsible for translating requests and ensuring smooth data flow.
- **SDK:** Stands for Software Development Kit. It provides a set of tools, libraries, and guidelines for developers to build and integrate custom applications or features with Jan.
- **Core:** It's tasked with managing the connections between the FrontEnd and BackEnd. It ensures data is routed correctly and efficiently between the two.
- **Local Native:** Refers to the connectors that enable communication with local services or applications. This uses your own hardware to deploy models.
- **Cloud Native:** As the name suggests, these connectors are tailored for cloud-based interactions, allowing Jan to leverage cloud services or interact with other cloud-based applications.
:::info
The Middleware communicates with the BackEnd primarily through **IPC** for Local and **HTTP** for Cloud.
:::
#### BackEnd
It is responsible for data processing, storage, and other core functionalities.
- **Plugins:** Extendable modules that can be added to the Jan system to provide additional functionalities or integrations with third-party applications.
- **Nitro:** This is a high-performance engine or a set of services that power specific functionalities within Jan. Given its placement in the architecture, it's reasonable to assume that Nitro provides acceleration or optimization capabilities for tasks.

View File

@ -1,158 +0,0 @@
---
title: Build an app
---
# Build and publish an app
You can build a custom AI application on top of Jan.
In this tutorial, you'll build a sample app and load it into Jan Desktop.
## What you'll learn
After you've completed this tutorial, you'll be able to:
- Configure an environment for developing Jan apps.
- Compile an app from source code.
- Reload an app after making changes to it.
## Prerequisites
To complete this tutorial, you'll need:
- [Git](https://git-scm.com/) installed on your local machine.
- A local development environment for [Node.js](https://nodejs.org/en/about/).
- A code editor, such as [Visual Studio Code](https://code.visualstudio.com/).
> When developing apps, one mistake can lead to unintended changes to your app. Please backup your data.
## Development
### Step 1: Download the sample app
- Go to [Jan sample app](https://github.com/janhq/jan-sample-app)
- Select `Use this template button` at the top of the repository
- Select `Create a new repository`
- Select an owner and name for your new repository
- Click `Create repository`
- Git clone your new repository
### Step 2: Installation
> [!NOTE]
>
> You'll need to have a reasonably modern version of
> [Node.js](https://nodejs.org) handy. If you are using a version manager like
> [`nodenv`](https://github.com/nodenv/nodenv) or
> [`nvm`](https://github.com/nvm-sh/nvm), you can run `nodenv install` in the
> root of your repository to install the version specified in
> [`package.json`](./package.json). Otherwise, 20.x or later should work!
1. :hammer_and_wrench: Install the dependencies
```bash
npm install
```
1. :building_construction: Package the TypeScript for distribution
```bash
npm run bundle
```
1. :white_check_mark: Check your artifact
There will now be a `.tgz` file in your `src` directory.
### Step 3: Update the App Manifest
The [`package.json`](package.json) file lets you define your app's metadata, e.g.
app name, main entry, description, and version.
### Step 4: Implementation
The [`src/`](./src/) directory is the heart of your app! You can replace the contents of this directory with your own code.
- `index.ts` is your app's main entry point. You can access the Web runtime and define UI in this file.
- `module.ts` is your Node runtime in which functions get executed. You should define core logic and compute-intensive workloads in this file.
- `index.ts` and `module.ts` interact with each other via RPC (see [Information flow](./app-anatomy.md#information-flow)) using [`invokePluginFunc`](../../reference/01_init.md#invokepluginfunc).
Import the Jan SDK
```typescript
import { core } from "@janhq/core";
```
#### index.ts
Think of this as your "app frontend". You register events, custom functions here.
Note: Most Jan app functions are processed asynchronously. In `index.ts`, you will see that the extension function will return a `Promise<any>`.
```typescript
import { core } from "@janhq/core";

function onStart(): Promise<any> {
  return core.invokePluginFunc(MODULE_PATH, "run", 0);
}
```
Define custom functions and register your implementation.
```typescript
/**
 * The entrypoint for the app.
 */
import { PluginService, RegisterExtensionPoint, core } from "@janhq/core";

/**
 * Invokes the `run` function from the `module.js` file using the `invokePluginFunc` method.
 * "run" is the name of the function to invoke.
 * @returns {Promise<any>} A promise that resolves with the result of the `run` function.
 */
function onStart(): Promise<any> {
  return core.invokePluginFunc(MODULE_PATH, "run", 0);
}

/**
 * Initializes the plugin by registering the extension functions with the given register function.
 * @param {Function} options.register - The function to use for registering the extension functions
 */
export function init({ register }: { register: RegisterExtensionPoint }) {
  register(PluginService.OnStart, PLUGIN_NAME, onStart);
}
```
#### module.ts
Think of this as your "app backend". Your core logic implementation goes here.
```typescript
const path = require("path");
const { app } = require("electron");

function run(param: number): any[] {
  console.log(`execute runner ${param} in main process`);
  // Your code here
  return [];
}

module.exports = {
  run,
};
```
## App installation
![Manual installation](../img/build-app-1.png)
- `Select` the built `*.tar.gz` file
- Jan will reload after new apps get installed
## App uninstallation
To be updated
## App update
To be updated

View File

@ -1,11 +0,0 @@
---
title: Publishing an app
---
After you have completed local app development and want to publish to the `Jan marketplace` for others to reuse, follow these steps:
- Step 1: Update your local `package.json` and configure `npm login` correctly.
- Step 2: Run `npm publish` as a public NPM package (so that others can install it). Please refer to our example [NPM retrieval plugin](https://www.npmjs.com/package/retrieval-plugin).
- Step 3: Go to the [`Jan plugin catalog`](https://github.com/janhq/plugin-catalog) and create a `Pull request` for your new `App artifact` (which is a renamed version of your App's `package.json`). Please refer to the example [retrieval-plugin](https://github.com/janhq/plugin-catalog/blob/main/retrieval-plugin.json).
- Step 4: The Jan team will review and merge it to `main`.
- Step 5: Once your new app is on `main`, you and other Jan users can find it in the `Jan marketplace`.

View File

@ -1,16 +0,0 @@
---
title: Overview
---
Jan's mission is to power next-gen apps with limitless extensibility by providing users with the following:
- A unified API and helpers so that they only need to care about what matters.
- A wide range of optimized, state-of-the-art models that can give your App thinking, hearing, and seeing capabilities, powered by our [Nitro](https://github.com/janhq/nitro).
- Strong support for the App marketplace and Model marketplace, streamlining value from end customers to builders at all layers.
- Most importantly: users of Jan can use Apps via the UI and via the API for integration.
At Jan, we strongly believe in `Portable AI` and `Personal AI`: created once, run anywhere.
## Downloads
[Jan.ai](https://jan.ai/) - Desktop app
[Jan GitHub](https://github.com/janhq/jan) - Open-source library for developers

Binary file not shown.

Before

Width:  |  Height:  |  Size: 11 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 17 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 15 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 24 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 12 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 36 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 72 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 75 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 55 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 47 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 54 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 778 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 232 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 158 KiB

View File

@ -1,5 +0,0 @@
---
title: "Azure OpenAI Plugin"
---
NPM Package: [@janhq/azure-openai-plugin](https://www.npmjs.com/package/@janhq/azure-openai-plugin)

View File

@ -1,5 +0,0 @@
---
title: "Data Plugin"
---
NPM Package: [@janhq/data-plugin](https://www.npmjs.com/package/@janhq/data-plugin)

View File

@ -1,5 +0,0 @@
---
title: "Inference Plugin"
---
NPM Package: [@janhq/inference-plugin](https://www.npmjs.com/package/@janhq/inference-plugin)

View File

@ -1,5 +0,0 @@
---
title: "Model Management Plugin"
---
NPM Package: [@janhq/model-management-plugin](https://www.npmjs.com/package/@janhq/model-management-plugin)

View File

@ -1,5 +0,0 @@
---
title: "Monitoring Plugin"
---
NPM Package: [@janhq/monitoring-plugin](https://www.npmjs.com/package/@janhq/monitoring-plugin)

View File

@ -1,5 +0,0 @@
---
title: "RAG Plugin"
---
Coming soon.

View File

@ -0,0 +1,103 @@
---
title: Introduction
---
Jan can be used to build a variety of AI use cases, at every level of the stack:
- An OpenAI compatible API, with feature parity for `models`, `assistants`, `files` and more
- A standard data format on top of the user's local filesystem, allowing for transparency and composability
- Automatic packaging and distribution to Mac, Windows, and Linux (cloud coming soon)
- A UI kit to customize user interactions with `assistants` and more
- A standalone inference engine for low level use cases
## Resources
<!-- (@Rex: to add some quickstart tutorials) -->
- Create an AI assistant
- Run an OpenAI compatible API endpoint
- Build a VSCode plugin with a local model
- Build a Jan platform module
## Key Concepts
### Modules
Jan comprises system-level modules that mirror OpenAI's, exposing similar APIs and objects
- Modules are modular, atomic implementations of a single OpenAI-compatible endpoint
- Modules can be swapped out for alternate implementations
- The default `messages` module persists messages in thread-specific `.json` files
- `messages-postgresql` uses Postgres for production-grade cloud-native environments
| Jan Module | Description | API Docs |
| ---------- | ------------- | ---------------------------- |
| Chat | Inference | [/chat](/api/chat) |
| Models | Models | [/model](/api/model) |
| Assistants | Apps | [/assistant](/api/assistant) |
| Threads | Conversations | [/thread](/api/thread) |
| Messages | Messages | [/message](/api/message) |
### Local Filesystem
Jan uses the local filesystem for data persistence, similar to VSCode. This allows for composability and tinkerability.
```sh
/janroot # Jan's root folder (e.g. ~/jan)
/models # For raw AI models
/threads # For conversation history
/assistants # For AI assistants' configs, knowledge, etc.
```
```sh
/models
/modelA
model.json # Default model settings
llama-7b-q4.gguf # Model binaries
llama-7b-q5.gguf # Include different quantizations
/threads
/jan-unixstamp-salt
model.json # Overrides assistant/model-level model settings
thread.json # thread metadata (e.g. subject)
messages.json # messages
content.json # What is this?
files/ # Future for RAG
/assistants
/jan
assistant.json # Assistant configs (see below)
# For any custom code
package.json # Import npm modules
# e.g. Langchain, Llamaindex
/src # Supporting files (needs better name)
index.js # Entrypoint
process.js # For electron IPC processes (needs better name)
# `/threads` at root level
# `/models` at root level
/shakespeare
assistant.json
model.json # Creator chooses model and settings
package.json
/src
index.js
process.js
/threads # Assistants remember conversations in the future
/models # Users can upload custom models
/finetuned-model
```
### Jan: a "global" assistant
Jan ships with a default assistant "Jan" that lets users chat with any open source model out-of-the-box.
This assistant is defined in `/jan`. It is a generic assistant that illustrates the power of Jan. In the future, it will support additional features, e.g. multi-assistant conversations.
- Your Assistant "Jan" lets you pick any model that is in the root /models folder
- Right panel: pick LLM model and set model parameters
- Jan's threads will be at root level
- `model.json` will reflect model chosen for that session
- Be able to “add” other assistants in the future
- Jan's files will be at thread level
- Jan is not a persistent memory assistant

View File

@ -0,0 +1,3 @@
---
title: Quickstart
---

Binary file not shown.

After

Width:  |  Height:  |  Size: 128 KiB

View File

@ -0,0 +1,29 @@
---
title: "Chats"
---
Chats are essentially inference requests to a model.
> OpenAI Equivalent: https://platform.openai.com/docs/api-reference/chat
## Chat Object
- Equivalent to: https://platform.openai.com/docs/api-reference/chat/object
## Chat API
See [/chat](/api/chat)
- Equivalent to: https://platform.openai.com/docs/api-reference/chat
```sh
POST https://localhost:1337/v1/chat/completions
TODO:
# Figure out how to incorporate tools
```
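Assuming the endpoint follows the OpenAI chat/completions format as stated, a request could be assembled like this. The helper and its field set are an illustrative sketch, not an official client:

```typescript
// Illustrative request builder for Jan's OpenAI-compatible chat endpoint.
// The message shape mirrors OpenAI's chat/completions; exact support is assumed.
function buildChatRequest(model: string, userMessage: string) {
  return {
    url: "https://localhost:1337/v1/chat/completions",
    method: "POST" as const,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: userMessage }],
    }),
  };
}

// Usage (requires a running Jan server):
// const req = buildChatRequest("llama2", "Hello!");
// const res = await fetch(req.url, req);
```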
## Chat Filesystem
- Chats will be persisted to `messages` within `threads`
- There is no data structure specific to Chats

View File

@ -0,0 +1,81 @@
---
title: "Models"
---
Models are AI models like Llama and Mistral.
> OpenAI Equivalent: https://platform.openai.com/docs/api-reference/models
## Model Object
- `model.json`
> Equivalent to: https://platform.openai.com/docs/api-reference/models/object
```json
{
// OpenAI model compatibility
// https://platform.openai.com/docs/api-reference/models)
"id": "llama-2-uuid",
"object": "model",
"created": 1686935002,
"owned_by": "you"
// Model settings (benchmark: Ollama)
// https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#template
"model_name": "llama2",
"model_path": "ROOT/models/...",
"parameters": {
"temperature": "..",
"token-limit": "..",
"top-k": "..",
"top-p": ".."
},
"template": "This is a full prompt template",
"system": "This is a system prompt",
// Model metadata (benchmark: HuggingFace)
"version": "...",
"author": "...",
"tags": "...",
...
}
```
## Model API
See [/model](/api/model)
- Equivalent to: https://platform.openai.com/docs/api-reference/models
```sh
GET https://localhost:1337/v1/models # List models
GET https://localhost:1337/v1/models/{model} # Get model object
DELETE https://localhost:1337/v1/models/{model} # Delete model
TODO:
# Start model
# Stop model
```
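The operations above can be mapped onto HTTP calls with a small helper. This is a sketch that follows the listing, not an official client:

```typescript
// Illustrative mapping of the model operations onto HTTP verbs and paths.
const BASE = "https://localhost:1337/v1";

function modelRequest(op: "list" | "get" | "delete", model?: string) {
  switch (op) {
    case "list":
      return { method: "GET", url: `${BASE}/models` };
    case "get":
      return { method: "GET", url: `${BASE}/models/${model}` };
    case "delete":
      return { method: "DELETE", url: `${BASE}/models/${model}` };
    default:
      throw new Error(`unknown operation: ${op}`);
  }
}
```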
## Model Filesystem
How `models` map onto your local filesystem
```sh
/janroot
/models
/modelA
model.json # Default model params
modelA.gguf
modelA.bin
/modelB/*
model.json
modelB.gguf
/assistants
model.json # Defines model, default: looks in `/models`
/models # Optional /models folder that overrides root
/modelA
model.json
modelA.bin
```
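The override rule above (an assistant's optional `/models` folder takes precedence over the root `/models`) can be sketched as a small resolver. The function name and signature are illustrative assumptions:

```typescript
import * as path from "node:path";

// Hypothetical resolver for the lookup order described above: prefer the
// assistant's own /models folder when present, else fall back to the root.
function resolveModelDir(
  janRoot: string,
  assistant: string | null,
  assistantHasModels: boolean
): string {
  if (assistant && assistantHasModels) {
    return path.join(janRoot, "assistants", assistant, "models");
  }
  return path.join(janRoot, "models");
}
```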

View File

@ -0,0 +1,80 @@
---
title: "Assistants"
---
Assistants can use models and tools.
- Jan's `Assistants` are even more powerful than OpenAI's due to customizable code in `index.js`
> OpenAI Equivalent: https://platform.openai.com/docs/api-reference/assistants
## Assistant Object
- `assistant.json`
- Equivalent to: https://platform.openai.com/docs/api-reference/assistants/object
```json
{
// Jan specific properties
"avatar": "https://lala.png"
"thread_location": "ROOT/threads" // Default to root (optional field)
// TODO: add moar
// OpenAI compatible properties: https://platform.openai.com/docs/api-reference/assistants
"id": "asst_abc123",
"object": "assistant",
"created_at": 1698984975,
"name": "Math Tutor",
"description": null,
"model": reference model.json,
"instructions": reference model.json,
"tools": [
{
"type": "rag"
}
],
"file_ids": [],
"metadata": {}
}
```
## Assistants API
- _TODO_: What would modifying Assistant do? (doesn't mutate `index.js`?)
```sh
GET https://api.openai.com/v1/assistants # List
POST https://api.openai.com/v1/assistants # C
GET https://api.openai.com/v1/assistants/{assistant_id} # R
POST https://api.openai.com/v1/assistants/{assistant_id} # U
DELETE https://api.openai.com/v1/assistants/{assistant_id} # D
```
## Assistants Filesystem
```sh
/assistants
/jan
assistant.json # Assistant configs (see below)
# For any custom code
package.json # Import npm modules
# e.g. Langchain, Llamaindex
/src # Supporting files (needs better name)
index.js # Entrypoint
process.js # For electron IPC processes (needs better name)
# `/threads` at root level
# `/models` at root level
/shakespeare
assistant.json
model.json # Creator chooses model and settings
package.json
/src
index.js
process.js
/threads # Assistants remember conversations in the future
/models # Users can upload custom models
/finetuned-model
```

View File

@ -0,0 +1,53 @@
---
title: "Threads"
---
Threads contain `messages` history with assistants. Messages in a thread share context.
- Note: For now, threads "lock the model" after a `message` is sent
- When a new `thread` is created with Jan, users can choose the models
- Users can still edit model parameters/system prompts
- Note: future Assistants may customize this behavior
- Note: Assistants will be able to specify default thread location in the future
- Jan uses root-level threads, to allow for future multi-assistant threads
- Assistant Y may store threads in its own folder, to allow for [long-term assistant memory](https://github.com/janhq/jan/issues/344)
> OpenAI Equivalent: https://platform.openai.com/docs/api-reference/threads
## Thread Object
- `thread.json`
- Equivalent to: https://platform.openai.com/docs/api-reference/threads/object
```json
{
// Jan specific properties:
"summary": "HCMC restaurant recommendations",
"messages": {see below}
// OpenAI compatible properties: https://platform.openai.com/docs/api-reference/threads)
"id": "thread_abc123",
"object": "thread",
"created_at": 1698107661,
"metadata": {}
}
```
## Threads API
- Equivalent to: https://platform.openai.com/docs/api-reference/threads
```sh
POST https://localhost:1337/v1/threads/{thread_id} # Create thread
GET https://localhost:1337/v1/threads/{thread_id} # Get thread
DELETE https://localhost:1337/v1/threads/{thread_id} # Delete thread
```
## Threads Filesystem
```sh
/assistants
/homework-helper
/threads # context is "permanently remembered" by assistant in future conversations
/threads # context is only retained within a single thread
```

View File

@ -0,0 +1,53 @@
---
title: "Messages"
---
Messages are within `threads` and capture additional metadata.
- Equivalent to: https://platform.openai.com/docs/api-reference/messages
## Message Object
- Equivalent to: https://platform.openai.com/docs/api-reference/messages/object
```json
{
// Jan specific properties
"updatedAt": "..." // that's it I think
// OpenAI compatible properties: https://platform.openai.com/docs/api-reference/messages)
"id": "msg_dKYDWyQvtjDBi3tudL1yWKDa",
"object": "thread.message",
"created_at": 1698983503,
"thread_id": "thread_RGUhOuO9b2nrktrmsQ2uSR6I",
"role": "assistant",
"content": [
{
"type": "text",
"text": {
"value": "Hi! How can I help you today?",
"annotations": []
}
}
],
"file_ids": [],
"assistant_id": "asst_ToSF7Gb04YMj8AMMm50ZLLtY",
"run_id": "run_BjylUJgDqYK9bOhy4yjAiMrn",
"metadata": {}
}
```
## Messages API
- Equivalent to: https://platform.openai.com/docs/api-reference/messages
```sh
POST https://api.openai.com/v1/threads/{thread_id}/messages # create msg
GET https://api.openai.com/v1/threads/{thread_id}/messages # list messages
GET https://api.openai.com/v1/threads/{thread_id}/messages/{message_id}
# Get message file
GET https://api.openai.com/v1/threads/{thread_id}/messages/{message_id}/files/{file_id}
# List message files
GET https://api.openai.com/v1/threads/{thread_id}/messages/{message_id}/files
```

View File

@ -0,0 +1,43 @@
---
title: "Files"
---
Files can be used by `threads`, `assistants`, and `fine-tuning`.
> Equivalent to: https://platform.openai.com/docs/api-reference/files
## Files Object
- Equivalent to: https://platform.openai.com/docs/api-reference/files
- Note: OAI's struct doesn't seem very well designed
- `files.json`
```json
{
// Public properties (OpenAI Compatible: https://platform.openai.com/docs/api-reference/files/object)
"id": "file-BK7bzQj3FfZFXr7DbL6xJwfo",
"object": "file",
"bytes": 120000,
"created_at": 1677610602,
"filename": "salesOverview.pdf",
"purpose": "assistants"
}
```
## File API
## Files Filesystem
- Files can exist in several parts of Jan's filesystem
- TODO: are files hard copied into these folders? Or do we define a `files.json` and only record the relative filepath?
```sh
/files # root `/files` for finetuning, etc
/assistants
/jan
/files # assistant-specific files
/threads
/jan-12938912
/files # thread-specific files
```

View File

@ -0,0 +1,67 @@
---
title: User Interface
---
Jan provides a UI kit for customizing the UI for your use case. This means you can personalize the entire application according to your own brand and visual styles.
This page gives you an overview of how to customize the UI.
You can see some of the user interface components when you first open Jan.
To Link:
- Ribbon
- LeftSidebar
- Main
- RightSidebar
- StatusBar
## Views
![Jan Views](./img/jan-views.png)
TODO: add a better image.
### Ribbon
Assistants shortcuts and Modules settings show up here.
```js
import .. from "@jan"
sample code here
```
### LeftSidebar
Conversation threads show up here. This is customizable, so custom assistants can add additional menu items here.
```js
import .. from "@jan"
sample code here
```
### Main
The main view for interacting with assistants. This is customizable, so custom assistants can add in additional UI components. By default, this is a chat thread with assistants.
```js
import .. from "@jan"
sample code here
```
### RightSidebar
A "settings" view for each thread. Users should be able to edit settings or other configs to customize the assistant experience within each thread.
```js
import .. from "@jan"
sample code here
```
### StatusBar
A global status bar that shows processes, hardware/disk utilization and more.
```js
import .. from "@jan"
sample code here
```

View File

@ -1,44 +0,0 @@
---
title: Cloud Native
---
# Installing Jan Cloud Native
Cloud Native is useful when you want to deploy Jan to a shared/remote/cloud server, rather than running it as a local Desktop app.
> This is an experimental feature - expect breaking changes!
### Getting Started
#### Run from source code
```bash
git clone https://github.com/janhq/jan
cd jan
git checkout feat-255 && git pull
yarn install
yarn start:server
```
Open your browser at [http://localhost:4000](http://localhost:4000)
### Run from docker file
```bash
git clone https://github.com/janhq/jan
cd jan
git checkout feat-255 && git pull
docker build --platform linux/x86_64 --progress=plain -t jan-server .
docker run --platform linux/x86_64 --name jan-server -p4000:4000 -p3928:3928 -it jan-server
```
Open your browser at [http://localhost:4000](http://localhost:4000)
### Architecture
![cloudnative](../../developers/img/cloudnative.png)
### TODOs
- [Authentication Plugins](https://github.com/janhq/jan/issues/334)
- [Remote server](https://github.com/janhq/jan/issues/200)

View File

@ -1,3 +0,0 @@
---
title: Installation
---

View File

@ -1,76 +0,0 @@
---
title: Nitro
slug: /nitro
---
Nitro is the inference engine that powers Jan. It is written in C++ and optimized for edge deployment.
⚡ Explore Nitro's codebase: [GitHub](https://github.com/janhq/nitro)
## Dependencies and Acknowledgements:
- [llama.cpp](https://github.com/ggerganov/llama.cpp): Nitro wraps llama.cpp, which runs Llama models in C++.
- [drogon](https://github.com/drogonframework/drogon): Nitro uses Drogon, a fast C++17/20 HTTP application framework.
- (Coming soon) TensorRT-LLM support.
## Features
Beyond wrapping llama.cpp, Nitro also provides:
- OpenAI compatibility
- HTTP interface with no bindings needed
- Runs as a separate process, not interfering with main app processes
- Multi-threaded server supporting concurrent users
- 1-click install
- No hardware dependencies
- Ships as a small binary (~3 MB compressed on average)
- Runs on Windows, macOS, and Linux
- Compatible with arm64, x86, and NVIDIA GPUs
## HTTP Interface
Nitro offers a straightforward HTTP interface that is compatible with multiple standard APIs, including the OpenAI format.
```bash
curl --location 'http://localhost:3928/inferences/llamacpp/chat_completion' \
--header 'Content-Type: application/json' \
--header 'Accept: text/event-stream' \
--header 'Access-Control-Allow-Origin: *' \
--data '{
"messages": [
{"content": "Hello there 👋", "role": "assistant"},
{"content": "Can you write a long story", "role": "user"}
],
"stream": true,
"model": "gpt-3.5-turbo",
"max_tokens": 2000
}'
```
## Using Nitro
**Step 1: Obtain Nitro**
Access Nitro binaries from the release page.
🔗 [Download Nitro](https://github.com/janhq/nitro/releases)
**Step 2: Source a Model**
For the llama.cpp integration, obtain a GGUF model, for example from TheBloke's Hugging Face repository.
🔗 [Download Model](https://huggingface.co/TheBloke)
**Step 3: Initialize Nitro**
Launch Nitro and load your model with the following API call:
```bash
curl -X POST 'http://localhost:3928/inferences/llamacpp/loadmodel' \
-H 'Content-Type: application/json' \
-d '{
"llama_model_path": "/path/to/your_model.gguf",
"ctx_len": 2048,
"ngl": 100,
"embedding": true
}'
```
## Architecture diagram
![Nitro Architecture](../developers/img/architecture.png)
@ -1,74 +0,0 @@
---
title: "init"
---
`init` is the entrypoint for your application and its custom logic. `init` is a reserved function that Jan will look for to initialize your application.
## Usage
Importing
```js
// javascript
const core = require("@janhq/core");
// typescript
import * as core from "@janhq/core";
```
Setting up event listeners
```js
export function init({ register }) {
myListener();
}
```
Setting up core service implementation
```js
export function init({ register }: { register: RegisterExtensionPoint }) {
register(DataService.GetConversations, "my-app-id", myImplementation);
}
```
## RegisterExtensionPoint
`RegisterExtensionPoint` is used for app initialization.
It lets you register `CoreService` functions/methods with the main application.
```js
import { RegisterExtensionPoint } from "@janhq/core";
```
```js
type RegisterExtensionPoint = (
  extensionName: string,
  extensionId: string,
  method: Function,
  priority?: number
) => void;
```
## invokePluginFunc
`invokePluginFunc` lets you invoke your custom functions (defined in your `module.ts`) from your application client (defined in your `index.ts`).
```js
// index.ts: your application "frontend" and entrypoint
function foo(param1, ...params) {
  return core.invokePluginFunc(MODULE_PATH, "foo", param1, ...params);
}
export function init({ register }: { register: RegisterExtensionPoint }) {
register(Service.Foo, "my-app-id", foo);
}
```
```js
// module.ts: your application "backend"
export function foo(param1, ...params) {
// Your code here
}
```
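As a rough mental model, `invokePluginFunc` dispatches a call by module path and function name across the process boundary and returns the result asynchronously. The sketch below imitates that dispatch with a plain lookup table; the table and the `"my-module"` name are hypothetical stand-ins, not Jan's real IPC bridge:

```javascript
// Hypothetical module table standing in for Jan's real IPC bridge
const modules = {
  "my-module": {
    foo: (param1) => param1 * 2, // the "backend" function from module.ts
  },
};

// Look up the target function by module path and name, return the result as a Promise
function invokePluginFunc(modulePath, funcName, ...params) {
  const fn = modules[modulePath]?.[funcName];
  if (!fn) return Promise.reject(new Error(`Unknown function ${funcName}`));
  return Promise.resolve(fn(...params));
}
```

Because the real call crosses processes, the result is always Promise-wrapped, which is why callers `return` it rather than using the value synchronously.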
@ -1,94 +0,0 @@
---
title: "CoreService"
---
`CoreService` provides an interface for implementing custom methods in Jan.
It lets you define shared behavior across your custom application, like how your app handles state, models, or inferencing behavior.
## Usage
```js
import { CoreService, ... } from "@janhq/core";
```
## CoreService
The `CoreService` type bundles the following services:
- `StoreService`
- `DataService`
- `InferenceService`
- `ModelManagementService`
- `SystemMonitoringService`
- `PreferenceService`
## StoreService
The `StoreService` enum represents available methods for managing the database store. It includes the following methods:
- `CreateCollection`: Creates a new collection in the data store.
- `DeleteCollection`: Deletes an existing collection from the data store.
- `InsertOne`: Inserts a new value into an existing collection in the data store.
- `UpdateOne`: Updates an existing value in an existing collection in the data store.
- `UpdateMany`: Updates multiple records in a collection in the data store.
- `DeleteOne`: Deletes an existing value from an existing collection in the data store.
- `DeleteMany`: Deletes multiple records in a collection in the data store.
- `FindMany`: Retrieves multiple records from a collection in the data store.
- `FindOne`: Retrieves a single record from a collection in the data store.
## DataService
The `DataService` enum represents methods related to managing conversations and messages. It includes the following methods:
- `GetConversations`: Gets a list of conversations from the data store.
- `CreateConversation`: Creates a new conversation in the data store.
- `DeleteConversation`: Deletes an existing conversation from the data store.
- `CreateMessage`: Creates a new message in an existing conversation in the data store.
- `UpdateMessage`: Updates an existing message in an existing conversation in the data store.
- `GetConversationMessages`: Gets a list of messages for an existing conversation from the data store.
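To make the registration flow concrete, here is a minimal sketch of how a `DataService` method could be registered and later invoked by enum key. The registry, the `invoke` helper, and the handler body are hypothetical stand-ins for Jan's internals, not its actual implementation:

```javascript
// Hypothetical stand-ins for Jan's service registry
const DataService = { GetConversations: "getConversations" };
const registry = new Map();

function register(method, extensionId, implementation) {
  registry.set(method, implementation);
}

function invoke(method, ...args) {
  const implementation = registry.get(method);
  if (!implementation) throw new Error(`No implementation for ${method}`);
  return implementation(...args);
}

// A plugin registers its implementation...
register(DataService.GetConversations, "my-app-id", () => [{ name: "meow" }]);

// ...and the app later invokes it by enum key
const conversations = invoke(DataService.GetConversations);
```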
## InferenceService
The `InferenceService` enum exports:
- `InitModel`: Initializes a model for inference.
- `StopModel`: Stops a running inference model.
## ModelManagementService
The `ModelManagementService` enum provides methods for managing models:
- `GetDownloadedModels`: Gets a list of downloaded models.
- `GetAvailableModels`: Gets a list of available models from the data store.
- `DeleteModel`: Deletes a downloaded model.
- `DownloadModel`: Downloads a model from the server.
- `SearchModels`: Searches for models on the server.
- `GetConfiguredModels`: Gets configured models from the data store.
- `StoreModel`: Stores a model in the data store.
- `UpdateFinishedDownloadAt`: Updates the finished download time for a model in the data store.
- `GetUnfinishedDownloadModels`: Gets a list of unfinished download models from the data store.
- `GetFinishedDownloadModels`: Gets a list of finished download models from the data store.
- `DeleteDownloadModel`: Deletes a downloaded model from the data store.
- `GetModelById`: Gets a model by its ID from the data store.
## PreferenceService
The `PreferenceService` enum provides methods for managing plugin preferences:
- `ExperimentComponent`: Represents the UI experiment component for a testing function.
## SystemMonitoringService
The `SystemMonitoringService` enum includes methods for monitoring system resources:
- `GetResourcesInfo`: Gets information about system resources.
- `GetCurrentLoad`: Gets the current system load.
## PluginService
The `PluginService` enum includes plugin lifecycle handlers:
- `OnStart`: Handler invoked on start, e.g. to create a collection.
- `OnPreferencesUpdate`: Handler invoked when preferences are updated, e.g. to update instances with new configurations.
For more detailed information on each of these components, please refer to the source code.
@ -1,85 +0,0 @@
---
title: "events"
---
`events` lets you receive events about actions that take place in the app, like when a user sends a new message.
You can then implement custom logic handlers for such events.
## Usage
```js
import { events } from "@janhq/core";
```
You can subscribe to `NewMessageRequest` events by defining a function to handle the event and registering it with the `events` object:
```js
import { events } from "@janhq/core";
function handleMessageRequest(message: NewMessageRequest) {
// Your logic here. For example:
// const response = openai.createChatCompletion({...})
}
function registerListener() {
events.on(EventName.OnNewMessageRequest, handleMessageRequest);
}
// Register the listener function with the relevant extension points.
export function init({ register }) {
registerListener();
}
```
In this example, we define a function called `handleMessageRequest` that takes a `NewMessageRequest` object as its argument, and a `registerListener` function that registers `handleMessageRequest` as a listener for `NewMessageRequest` events using the `on` method of the `events` object. Once your handler has produced a reply, you can emit a response event:
```js
import { events } from "@janhq/core";
function handleMessageRequest(data: NewMessageRequest) {
// Your logic here. For example:
const response = openai.createChatCompletion({...})
const message: NewMessageResponse = {
...data,
message: response.data.choices[0].message.content
}
// Now emit event so the app can display in the conversation
events.emit(EventName.OnNewMessageResponse, message)
}
```
## EventName
The `EventName` enum bundles the following events:
- `OnNewConversation`
- `OnNewMessageRequest`
- `OnNewMessageResponse`
- `OnMessageResponseUpdate`
- `OnDownloadUpdate`
- `OnDownloadSuccess`
- `OnDownloadError`
## event.on
Adds an observer for an event.
```js
on: (eventName: string, handler: Function) => void
```
## event.emit
Emits an event.
```js
emit: (eventName: string, object: any) => void
```
## event.off
Removes an observer for an event.
```js
off: (eventName: string, handler: Function) => void
```
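The semantics of `on`, `emit`, and `off` can be sketched with a minimal in-memory emitter. This is illustrative only, not the real `@janhq/core` implementation:

```javascript
// Minimal in-memory event emitter mirroring the on/emit/off semantics above
const listeners = new Map();

const events = {
  on(eventName, handler) {
    if (!listeners.has(eventName)) listeners.set(eventName, new Set());
    listeners.get(eventName).add(handler);
  },
  off(eventName, handler) {
    listeners.get(eventName)?.delete(handler);
  },
  emit(eventName, object) {
    listeners.get(eventName)?.forEach((handler) => handler(object));
  },
};
```

A handler added with `on` receives every emitted payload for that event name until it is removed with `off`.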
@ -1,76 +0,0 @@
---
title: "store"
---
`store` is a helper object for working with the Jan app's local storage database.
By default, Jan ships with [PouchDB](https://pouchdb.com/), a client-side NoSQL database, to persist usage state.
_Note: the default `store` logic comes from [@data-plugin](https://www.npmjs.com/package/@janhq/data-plugin), which implements `StoreService`._
## Usage
```js
import { store } from "@janhq/core";
```
## Insert Data
You can use the `store.insertOne` function to insert data into a specific collection in the local data store.
```js
import { store } from "@janhq/core";
function insertData() {
store.insertOne("conversations", { name: "meow" });
// Insert a new document with { name: "meow" } into the "conversations" collection.
}
```
## Get Data
To retrieve data from a collection in the local data store, you can use the `store.findOne` or `store.findMany` function. They let you filter and retrieve documents based on specific criteria.
- `store.findOne(collectionName, key)` retrieves a single document that matches the provided key in the specified collection.
- `store.findMany(collectionName, selector, sort)` retrieves multiple documents that match the provided selector in the specified collection.
```js
import { store } from "@janhq/core";
function getData() {
const selector = { name: "meow" };
const data = store.findMany("conversations", selector);
// Retrieve documents from the "conversations" collection that match the filter.
}
```
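The selector behaves like a field-equality filter over documents. Here is a rough in-memory sketch of that matching logic, as a stand-in for the real PouchDB-backed implementation:

```javascript
// Field-equality matching, standing in for the store's selector logic
function matches(doc, selector) {
  return Object.entries(selector).every(([field, value]) => doc[field] === value);
}

function findMany(collection, selector) {
  return collection.filter((doc) => matches(doc, selector));
}

const conversations = [{ name: "meow" }, { name: "woof" }];
const results = findMany(conversations, { name: "meow" });
```

Note that an empty selector matches every document, since there are no fields to fail.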
## Update Data
You can update data in the local store using these functions:
- `store.updateOne(collectionName, key, update)` updates a single document that matches the provided key in the specified collection.
- `store.updateMany(collectionName, selector, update)` updates multiple documents that match the provided selector in the specified collection.
```js
function updateData() {
const selector = { name: "meow" };
const update = { name: "newName" };
store.updateOne("conversations", selector, update);
// Update a document in the "conversations" collection.
}
```
## Delete Data
You can delete data from the local data store using these functions:
- `store.deleteOne(collectionName, key)` deletes a single document that matches the provided key in the specified collection.
- `store.deleteMany(collectionName, selector)` deletes multiple documents that match the provided selector in the specified collection.
```js
function deleteData() {
const selector = { name: "meow" };
store.deleteOne("conversations", selector);
// Delete a document from the "conversations" collection.
}
```
@ -1,35 +0,0 @@
---
title: "filesystem"
---
The core package also provides functions to perform file operations. Here are a couple of examples:
## Usage
```js
// javascript
const core = require("@janhq/core");
// typescript
import * as core from "@janhq/core";
```
## Download a File
You can download a file from a specified URL and save it with a given file name using the `core.downloadFile` function.
```js
function downloadModel(url: string, fileName: string) {
core.downloadFile(url, fileName);
}
```
## Delete a File
To delete a file, use the `core.deleteFile` function, providing the path to the file you want to delete.
```js
function deleteModel(filePath: string) {
  core.deleteFile(filePath);
}
```
@ -1,59 +0,0 @@
---
title: "preferences"
---
`preferences` is a helper object for adding settings fields to your app.
## Usage
To register plugin preferences, you can use the `preferences` object from the `@janhq/core` package. Here's an example of how to register and retrieve plugin preferences:
```js
import { PluginService, preferences } from "@janhq/core";
const pluginName = "your-first-plugin";
const preferenceKey = "apiKey";
const preferenceName = "Your First Preference";
const preferenceDescription = "This is for example only";
const defaultValue = "";
export function init({ register }: { register: RegisterExtensionPoint }) {
// Register preference update handlers. E.g. update plugin instance with new configuration
register(PluginService.OnPreferencesUpdate, pluginName, onPreferencesUpdate);
// Register plugin preferences. E.g. Plugin need apiKey to connect to your service
  preferences.registerPreferences<string>(
    register,
    pluginName,
    preferenceKey,
    preferenceName,
    preferenceDescription,
    defaultValue
  );
}
```
In this example, we register preference update handlers and plugin preferences using the `preferences` object, and define a `pluginName` constant to use as the name of the plugin.
To retrieve the values of the registered preferences, we use the `get` method of the `preferences` object, passing in the name of the plugin and the key of the preference.
```js
import { preferences } from "@janhq/core";
const pluginName = "your-first-plugin";
const preferenceKey = "apiKey";
const setup = async () => {
// Retrieve apiKey
const apiKey: string =
(await preferences.get(pluginName, preferenceKey)) ?? "";
};
```
## registerPreferences
## get
## set
## clear
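The stubs above (`registerPreferences`, `get`, `set`, `clear`) are not yet documented. As a rough mental model, they behave like an async key-value store scoped by plugin name. The sketch below is a hypothetical in-memory stand-in, not the real implementation:

```javascript
// Hypothetical in-memory stand-in for the preferences store
const prefs = new Map();
const keyOf = (pluginName, preferenceKey) => `${pluginName}.${preferenceKey}`;

const preferences = {
  async set(pluginName, preferenceKey, value) {
    prefs.set(keyOf(pluginName, preferenceKey), value);
  },
  async get(pluginName, preferenceKey) {
    return prefs.get(keyOf(pluginName, preferenceKey));
  },
  async clear() {
    prefs.clear();
  },
};
```

Scoping keys by plugin name keeps one plugin's settings from colliding with another's.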
@ -1,6 +0,0 @@
---
sidebar_position: 1
title: Building a chat app
---
TODO
@ -1,6 +0,0 @@
---
sidebar_position: 2
title: Building a RAG app
---
TODO
@ -129,9 +129,15 @@ const config = {
      },
      {
        type: "docSidebar",
-       sidebarId: "devSidebar",
+       sidebarId: "docsSidebar",
        position: "left",
-       label: "Developers",
+       label: "Documentation",
      },
+     {
+       type: "docSidebar",
+       sidebarId: "apiSidebar",
+       position: "left",
+       label: "API Reference",
+     },
      // Navbar right
      {
@ -35,54 +35,46 @@ const sidebars = {
      type: "category",
      label: "Installation",
      collapsible: true,
-     collapsed: true,
-     link: { type: "doc", id: "guides/install/install" },
+     collapsed: false,
      items: [
-       "guides/install/linux",
-       "guides/install/windows",
-       "guides/install/mac",
-       "guides/install/cloud-native",
+       {
+         type: "autogenerated",
+         dirName: "guides/install",
+       },
      ],
    },
    "guides/troubleshooting",
  ],
- devSidebar: [
-   "developers/developers",
-   "nitro/nitro",
+ docsSidebar: [
+   "docs/introduction",
+   "docs/quickstart",
    {
      type: "category",
-     label: "Apps",
+     label: "Modules",
      collapsible: true,
-     collapsed: true,
+     collapsed: false,
      items: [
        {
          type: "autogenerated",
-         dirName: "developers/apps",
+         dirName: "docs/modules",
        },
      ],
    },
+   "docs/user-interface",
+ ],
+ apiSidebar: [
+   "api/overview",
    {
      type: "category",
-     label: "Plugins",
+     label: "Endpoints",
      collapsible: true,
-     collapsed: true,
+     collapsed: false,
      items: [
        {
          type: "autogenerated",
-         dirName: "developers/plugins",
-       },
-     ],
-   },
-   {
-     type: "category",
-     label: "API Reference",
-     collapsible: true,
-     collapsed: true,
-     items: [
-       {
-         type: "autogenerated",
-         dirName: "reference",
+         dirName: "api",
        },
      ],
    },
File diff suppressed because it is too large.