Update the app anatomy

This commit is contained in:
hahuyhoang411 2023-11-02 11:39:21 +07:00
parent 0c3868ddf7
commit 13f55298bf
14 changed files with 3404 additions and 87 deletions

View File

@@ -1,27 +1,17 @@
---
sidebar_position: 2
title: Anatomy of an app
title: Anatomy of 👋Jan.ai
---
This page explains the architecture of [Jan.ai](https://jan.ai/).
## Note: This one should be in the welcome page
Jan mission is to power the next gen App with the limitless extensibility by providing users:
- Unified API/ Helpers so that they only need to care about what matters.
- Wide range of Optimized and State of the art models that can help your App with Thinking/ Hearing/ Seeing capabilities. This is powered by our [Nitro](https://github.com/janhq/nitro).
- Strong support for App marketplace and Model market place that streamline value from end customers to builders at all layers.
- The most important: The users of Jan can use the Apps via UI and API for integration.
At Jan, we strongly believe in `Portable AI` and `Personal AI` that is created once and run anywhere.
## Synchronous architecture
![Synchronous architecture](img/arch-sync.png)
![Synchronous architecture](img/arch-sync.drawio.png)
### Overview
The architecture of the Jan.ai application is designed to provide a seamless experience for the users, while also being modular and extensible.
The architecture of the Jan.ai application is designed to provide a seamless experience for the users while also being modular and extensible.
### BackEnd and FrontEnd
@@ -29,30 +19,36 @@ The architecture of the Jan.ai application is designed to provide a seamless exp
- The BackEnd serves as the brain of the application. It processes the information, performs computations, and manages the main logic of the system.
> **ELI5:** This is like an [OS (Operating System)](https://en.wikipedia.org/wiki/Operating_system) in the computer
:::info
Put simply, this is like an [OS (Operating System)](https://en.wikipedia.org/wiki/Operating_system) in a computer.
:::
**FrontEnd:**
- The FrontEnd is the interface that users interact with. It takes user inputs, displays results, and communicates with the BackEnd through Inter-process communication bi-directionally.
> **ELI5:** This is like [VSCode](https://code.visualstudio.com/) application
:::info
This is like the [VSCode](https://code.visualstudio.com/) application.
:::
**Inter-process communication:**
- A mechanism that allows the BackEnd and FrontEnd to communicate in real-time. It ensures that data flows smoothly between the two, facilitating rapid response and dynamic updates.
- A mechanism that allows the BackEnd and FrontEnd to communicate in real time. It ensures that data flows smoothly between the two, facilitating rapid response and dynamic updates.
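To make this concrete, here is a minimal sketch of such bi-directional IPC, assuming an Electron-style desktop app; the channel names and the `runInference` helper are hypothetical illustrations, not Jan's actual API.

```typescript
// main.ts (BackEnd process) — channel names and runInference are hypothetical
import { ipcMain, BrowserWindow } from "electron";

// Stand-in for a real model call running in the BackEnd
async function runInference(prompt: string): Promise<string> {
  return `echo: ${prompt}`;
}

// Listen for requests coming from the FrontEnd
ipcMain.handle("model:run", (_event, prompt: string) => runInference(prompt));

// Push an unsolicited update to the FrontEnd (the other direction)
function notifyProgress(win: BrowserWindow, percent: number): void {
  win.webContents.send("model:progress", percent);
}

// renderer.ts (FrontEnd process):
//   const answer = await ipcRenderer.invoke("model:run", "Hello!");
//   ipcRenderer.on("model:progress", (_e, percent) => render(percent));
```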
### Plugins and Apps
**Plugins:**
In Jan, Plugins are cotains of all the core features. They could be Core Plugins or [Nitro](https://github.com/janhq/nitro)
In Jan, Plugins contain all the core features. They can be Core Plugins or [Nitro](https://github.com/janhq/nitro).
- **Load:** This denotes the initialization and activation of a plugin when the application starts or when a user activates it.
- **Implement:** This is where the main functionality of the plugin resides. Developers code the desired features and functionalities here. This is a "call to action" feature.
- **Dispose:** After the plugin's task is completed or when it's deactivated, this function ensures that the plugin releases any resources it used, ensuring optimal performance and preventing memory leaks.
- **Dispose:** After the plugin's task is completed or the plugin is deactivated, this function ensures that it releases any resources it used, keeping performance optimal and preventing memory leaks. A minimal sketch of this lifecycle follows the note below.
> ELI5: This is like [Extensions](https://marketplace.visualstudio.com/VSCode) in VSCode.
:::info
This is like [Extensions](https://marketplace.visualstudio.com/VSCode) in VSCode.
:::
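As a rough illustration of the three-step lifecycle above, here is a sketch in TypeScript; the `JanPlugin` interface and `EchoPlugin` are assumptions for illustration, not Jan's real plugin API.

```typescript
// A hypothetical plugin shape mirroring the Load / Implement / Dispose lifecycle
interface JanPlugin {
  load(): void;                               // called on app start or user activation
  implement(input: string): Promise<string>;  // the plugin's main "call to action"
  dispose(): void;                            // release resources on deactivation
}

class EchoPlugin implements JanPlugin {
  private history: string[] = [];

  load(): void {
    // Load: initialize any state the plugin needs
    this.history = [];
  }

  async implement(input: string): Promise<string> {
    // Implement: the feature itself lives here
    this.history.push(input);
    return `echo: ${input}`;
  }

  dispose(): void {
    // Dispose: free resources so nothing leaks after deactivation
    this.history = [];
  }
}
```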
**Apps:**
@@ -60,8 +56,41 @@ Apps are basically Plugin-like. However, Apps can be built by users for their ow
> For example, users can build a `Personal Document RAG App` to chat with specific documents or articles.
With Plugins and Apps, users can build a broader ecosystem surrounding Jan.ai.
With **Plugins and Apps**, users can build a broader ecosystem surrounding Jan.ai.
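To make the App idea concrete, below is a purely illustrative sketch of the `Personal Document RAG App` mentioned above, using naive keyword retrieval; every name in it is hypothetical and none of it is Jan's actual API.

```typescript
type Doc = { id: string; text: string };

// Hypothetical Plugin-like App: chat with your own documents
class PersonalDocumentRagApp {
  private docs: Doc[] = [];

  addDocument(doc: Doc): void {
    this.docs.push(doc);
  }

  // Naive retrieval: return the document sharing the most words with the question
  ask(question: string): string {
    const words = new Set(question.toLowerCase().split(/\s+/));
    let best: Doc | undefined;
    let bestScore = 0;
    for (const doc of this.docs) {
      const score = doc.text
        .toLowerCase()
        .split(/\s+/)
        .filter((w) => words.has(w)).length;
      if (score > bestScore) {
        best = doc;
        bestScore = score;
      }
    }
    return best ? `From "${best.id}": ${best.text}` : "No matching document.";
  }
}

// Usage
const app = new PersonalDocumentRagApp();
app.addDocument({ id: "notes.md", text: "Jan runs models locally on your machine" });
console.log(app.ask("Where does Jan run models?"));
```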
## Asynchronous architecture
TODOS:
![Asynchronous architecture](img/arch-async.drawio.png)
### Overview
The asynchronous architecture allows Jan to handle multiple operations simultaneously without waiting for one to complete before starting another. This results in a more efficient and responsive user experience. The provided diagram breaks down the primary components and their interactions.
### Components
#### Results
After processing certain tasks or upon specific triggers, the backend can broadcast the results. This could be a processed data set, a calculated result, or any other output that needs to be shared.
#### Events
Similar to broadcasting results but oriented explicitly towards events. This could include user actions, system events, or notifications that other components should be aware of.
- **Notify:**
Upon the conclusion of specific tasks or when particular triggers are activated, the system uses the Notify action to send out notifications from the **Results**. The Notify action is the conduit through which results are broadcast asynchronously, whether they concern task completions, errors, updates, or any processed data set.
- **Listen:**
Here, the BackEnd actively waits for incoming data or events. It is geared towards capturing inputs from users or updates from plugins.
#### Plugins
These are modular components or extensions designed to enhance the application's functionalities. Each plugin possesses a "Listen" action, enabling it to stand by for requests emanating from user inputs.
### Flow
1. Input is provided by the user or an external source.
2. This input is broadcast as an event via **Broadcast event**.
3. The **BackEnd** processes the event. Depending on the event, it might interact with one or several Plugins.
4. Once the event is processed, **Broadcast result** sends the outcome out asynchronously through one or more notifications via the Notify action, as sketched below.
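The whole Listen → process → Notify loop can be sketched with Node's built-in `EventEmitter`; the channel names and the uppercase "plugin" below are illustrative assumptions, not Jan's actual event bus.

```typescript
import { EventEmitter } from "node:events";

const bus = new EventEmitter();

// Step 3: a plugin "listens" for events it cares about
bus.on("user:input", (text: string) => {
  const result = text.toUpperCase(); // stand-in for real processing
  bus.emit("result", { source: "uppercase-plugin", result }); // step 4: Notify
});

// The FrontEnd (or any other component) listens for broadcast results
bus.on("result", (payload: { source: string; result: string }) => {
  console.log("notified:", payload);
});

// Steps 1–2: input arrives and is broadcast as an event
bus.emit("user:input", "hello jan");
```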

View File

@@ -1,5 +1,5 @@
---
sidebar_position: 1
sidebar_position: 2
title: Build an app
---

Binary file not shown.

After

Width:  |  Height:  |  Size: 36 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 55 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 278 KiB

View File

@@ -0,0 +1,17 @@
---
sidebar_position: 1
title: Overview
---
Jan's mission is to power the next-gen App with limitless extensibility by providing users with the following:
- A unified API and helpers so that they only need to care about what matters.
- A wide range of optimized, state-of-the-art models that can give your App Thinking/Hearing/Seeing capabilities. This is powered by our [Nitro](https://github.com/janhq/nitro).
- Strong support for the App marketplace and Model marketplace, which streamline value from end customers to builders at all layers.
- Most importantly: the users of Jan can use the Apps via the UI and the API for integration.
At Jan, we strongly believe in `Portable AI` and `Personal AI` that are created once and run anywhere.
## Downloads
[Jan.ai](https://jan.ai/) - Desktop app
[Jan GitHub](https://github.com/janhq/jan) - Open-source repository for developers

View File

@@ -1,8 +1,9 @@
---
title: Concepts
sidebar_position: 1
---
- Jan Platform: Desktop app/ Cloud native SaaS that can run on Linux, Windows, Mac or even Server that comes with extensibilities, toolbox and state of the art but optimized models for next gen App.
- Jan App: Next gen App built on Jan Plaform as `portable intelligence` that can be run everywhere.
- Jan Platform: Desktop app/ Cloud native SaaS that can run on Linux, Windows, Mac, or even a Server that comes with extensibilities, toolbox, and state-of-the-art but optimized models for next-gen Apps.
- Jan App: Next-gen App built on the Jan Platform as `portable intelligence` that can run everywhere.
- Models:
- Large Language Models
- Stable Diffusion models

View File

@ -1,5 +1,6 @@
---
title: Internal Guidelines
sidebar_position: 6
---
# Internal Guidelines
@@ -31,7 +32,7 @@ cd ../build/bin/
./main -m ./models/llama-2-7b.Q8_0.gguf -p "Writing a thesis proposal can be done in 10 simple steps:\nStep 1:" -n 2048 -e -ngl 100 -t 48
```
For the llama.cpp CLI arguments you could see here:
The llama.cpp CLI arguments are listed here:
| Short Option | Long Option | Param Value | Description |
|--------------|-----------------------|-------------|-------------|
@@ -81,7 +82,7 @@ sudo make -C docker build
sudo make -C docker run
```
Once in the container, TensorRT-LLM can be built from source using:
Once in the container, TensorRT-LLM can be built from source using the following:
3. **Build:**
```bash
@@ -91,7 +92,7 @@ python3 ./scripts/build_wheel.py --trt_root /usr/local/tensorrt
pip install ./build/tensorrt_llm*.whl
```
> Note: You can specify the GPU achitecture (e.g. for 4090 is ADA) for compilation time reduction
> Note: You can specify the GPU architecture (e.g., ADA for the 4090) to reduce compilation time.
> The list of supported architectures can be found in the `CMakeLists.txt` file.
```bash
@@ -119,4 +120,4 @@ python build.py --model_dir ./llama/7B/ --dtype float16 --remove_input_padding -
python3 run.py --max_output_len=2048 --tokenizer_dir ./llama/7B/ --engine_dir=./llama/7B/trt_engines/weight_only/1-gpu/ --input_text "Writing a thesis proposal can be done in 10 simple steps:\nStep 1:"
```
For the tensorRT-LLM CLI arguments you could see in the `run.py`
The TensorRT-LLM CLI arguments can be found in `run.py`.

View File

@@ -1,5 +1,6 @@
---
title: Installing Jan on Linux
sidebar_position: 4
---
# Linux users
@@ -11,18 +12,27 @@ To begin using 👋Jan.ai on your Linux computer, follow these steps:
![Jan Installer](img/jan-download.png)
> Note: For faster results, you should enable your NVIDIA GPU. Make sure to have the CUDA toolkit installed. You can download it from your Linux distro's package manager or from here: [CUDA Toolkit](https://developer.nvidia.com/cuda-downloads).
:::tip
For faster results, you should enable your NVIDIA GPU. Make sure to have the CUDA toolkit installed. You can download it from your Linux distro's package manager or from here: [CUDA Toolkit](https://developer.nvidia.com/cuda-downloads).
:::
```bash
apt install nvidia-cuda-toolkit
```
> Check the installation by
Check the installation by running:
```bash
nvidia-smi
```
> For AMD GPU. You can download it from your Linux distro's package manager or from here: [ROCm Quick Start (Linux)](https://rocm.docs.amd.com/en/latest/deploy/linux/quick_start.html).
:::tip
For AMD GPUs, you can install ROCm from your Linux distro's package manager or from here: [ROCm Quick Start (Linux)](https://rocm.docs.amd.com/en/latest/deploy/linux/quick_start.html).
:::
## Step 2: Download your first model
Now, let's get your first model:
@@ -33,7 +43,7 @@ Now, let's get your first model:
![Explore models](img/explore-model.png)
3. You can also see different quantized versions by clicking on "Show available Versions".
3. You can also see different quantized versions by clicking on "Show Available Versions."
![Model versions](img/model-version.png)
@@ -44,14 +54,14 @@ Now, let's get your first model:
![Downloading](img/downloading.PNG)
## Step 3: Start the model
Once your model is downloaded. Go to "My Models," and then click "Start Model".
Once your model is downloaded. Go to "My Models" and then click "Start Model."
![Start model](img/start-model.PNG)
## Step 4: Start the conversations
Now you're ready to start using 👋Jan.ai for conversations:
Click "Chat" and begin your first conversation by selecting "New conversation".
Click "Chat" and begin your first conversation by selecting "New conversation."
You can also check the CPU and Memory usage of the computer.

View File

@@ -1,5 +1,6 @@
---
title: Installing Jan on Mac
sidebar_position: 2
---
# Mac users
@@ -20,7 +21,7 @@ Now, let's get your first model:
![Explore models](img/explore-model.png)
3. You can also see different quantized versions by clicking on "Show available Versions".
3. You can also see different quantized versions by clicking on "Show Available Versions."
![Model versions](img/model-version.png)
@@ -31,14 +32,14 @@ Now, let's get your first model:
![Downloading](img/downloading.PNG)
## Step 3: Start the model
Once your model is downloaded. Go to "My Models," and then click "Start Model".
Once your model is downloaded, go to "My Models" and then click "Start Model."
![Start model](img/start-model.PNG)
## Step 4: Start the conversations
Now you're ready to start using 👋Jan.ai for conversations:
Click "Chat" and begin your first conversation by selecting "New conversation".
Click "Chat" and begin your first conversation by selecting "New conversation."
You can also check the CPU and Memory usage of the computer.

View File

@@ -1,5 +1,6 @@
---
title: Troubleshooting
sidebar_position: 5
---
# Jan.ai Troubleshooting Guide

View File

@@ -1,5 +1,6 @@
---
title: Installing Jan on Windows
sidebar_position: 3
---
# Windows users
@@ -23,15 +24,22 @@ When you run the Jan Installer, Windows Defender may display a warning. Here's w
![Setting up](img/set-up.png)
> Note: For faster results, you should enable your NVIDIA GPU. Make sure to have the CUDA toolkit installed. You can download it from here: [CUDA Toolkit](https://developer.nvidia.com/cuda-downloads) or [CUDA Installation guide](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html#verify-you-have-a-cuda-capable-gpu).
:::tip
> Check the installation by
For faster results, you should enable your NVIDIA GPU. Make sure to have the CUDA toolkit installed. You can download it from here: [CUDA Toolkit](https://developer.nvidia.com/cuda-downloads) or [CUDA Installation guide](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html#verify-you-have-a-cuda-capable-gpu).
:::
Check the installation by running:
```bash
nvidia-smi
```
:::tip
> For AMD GPU, you should use [WSLv2](https://learn.microsoft.com/en-us/windows/wsl/install). You can download it from here: [ROCm Quick Start (Linux)](https://rocm.docs.amd.com/en/latest/deploy/linux/quick_start.html).
For AMD GPUs, you should use [WSL 2](https://learn.microsoft.com/en-us/windows/wsl/install). You can download ROCm from here: [ROCm Quick Start (Linux)](https://rocm.docs.amd.com/en/latest/deploy/linux/quick_start.html).
:::
## Step 3: Download your first model
Now, let's get your first model:
@@ -42,7 +50,7 @@ Now, let's get your first model:
![Explore models](img/explore-model.png)
3. You can also see different quantized versions by clicking on "Show available Versions".
3. You can also see different quantized versions by clicking on "Show Available Versions."
![Model versions](img/model-version.png)
@@ -53,14 +61,14 @@ Now, let's get your first model:
![Downloading](img/downloading.PNG)
## Step 4: Start the model
Once your model is downloaded. Go to "My Models," and then click "Start Model".
Once your model is downloaded, go to "My Models" and then click "Start Model."
![Start model](img/start-model.PNG)
## Step 5: Start the conversations
Now you're ready to start using 👋Jan.ai for conversations:
Click "Chat" and begin your first conversation by selecting "New conversation".
Click "Chat" and begin your first conversation by selecting "New conversation."
You can also check the CPU and Memory usage of the computer.

View File

@@ -17,6 +17,7 @@
"@docusaurus/core": "^2.4.3",
"@docusaurus/preset-classic": "^2.4.3",
"@docusaurus/theme-live-codeblock": "^2.4.3",
"@docusaurus/theme-mermaid": "^3.0.0",
"@headlessui/react": "^1.7.17",
"@heroicons/react": "^2.0.18",
"@mdx-js/react": "^1.6.22",

File diff suppressed because it is too large Load Diff