docs: initialize handbook structure (#1477)

docs: initial handbook structure
This commit is contained in:
Hieu 2024-01-10 09:20:37 +07:00 committed by GitHub
commit 8676599293
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
41 changed files with 800 additions and 131 deletions


@ -0,0 +1,18 @@
---
title: Overview
slug: /handbook
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
]
---
Welcome to the Jan Handbook! We're really excited to bring you on board.


@ -0,0 +1,17 @@
---
title: Why we exist
slug: /handbook/meet-jan/why-we-exist
description: Why we exist
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,17 @@
---
title: Vision and Mission
slug: /handbook/meet-jan/vision-and-mission
description: Vision and mission of Jan
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,21 @@
---
title: Meet Jan
slug: /handbook/meet-jan
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---
import DocCardList from "@theme/DocCardList";
<DocCardList className="DocCardList--no-description" />


@ -0,0 +1,17 @@
---
title: Overview of Jan Framework and Its Applications
slug: /handbook/products-and-innovations/overview-of-jan-framework-and-its-applications
description: Overview of Jan Framework and Its Applications
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,17 @@
---
title: Philosophy Behind Product Development
slug: /handbook/products-and-innovations/philosophy-behind-product-development
description: Philosophy Behind Product Development
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,17 @@
---
title: Roadmap - Present and Future Directions
slug: /handbook/products-and-innovations/roadmap-present-and-future-directions
description: Roadmap - Present and Future Directions
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,21 @@
---
title: Our Products and Innovations
slug: /handbook/products-and-innovations
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---
import DocCardList from "@theme/DocCardList";
<DocCardList className="DocCardList--no-description" />


@ -0,0 +1,17 @@
---
title: How We Hire
slug: /handbook/core-contributors/how-we-hire
description: How We Hire
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,17 @@
---
title: Embracing Pod Structure
slug: /handbook/core-contributors/embracing-pod-structure
description: Embracing Pod Structure
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,17 @@
---
title: The Art of Conflict
slug: /handbook/core-contributors/the-art-of-conflict
description: The Art of Conflict
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,17 @@
---
title: OpSec
slug: /handbook/core-contributors/opsec
description: OpSec
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,17 @@
---
title: See a Problem, Own a Problem
slug: /handbook/core-contributors/see-a-problem-own-a-problem
description: See a Problem, Own a Problem - How we function without management
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,21 @@
---
title: Our Contributors
slug: /handbook/core-contributors
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---
import DocCardList from "@theme/DocCardList";
<DocCardList className="DocCardList--no-description" />


@ -0,0 +1,17 @@
---
title: No PMs Allowed
slug: /handbook/what-we-do/no-pms-allowed
description: No PMs Allowed
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,17 @@
---
title: Our Support Methodology - Open Source, Collaborative, and Self-serve
slug: /handbook/what-we-do/our-support-methodology
description: Our Support Methodology - Open Source, Collaborative, and Self-serve
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,17 @@
---
title: Our Approach to Design
slug: /handbook/what-we-do/our-approach-to-design
description: Our Approach to Design
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,17 @@
---
title: Shipping Now, Shipping Later
slug: /handbook/what-we-do/shipping-now-shipping-later
description: Shipping Now, Shipping Later
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,17 @@
---
title: Trial by Fire
slug: /handbook/what-we-do/trial-by-fire
description: Trial by Fire
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,21 @@
---
title: What We Do
slug: /handbook/what-we-do
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---
import DocCardList from "@theme/DocCardList";
<DocCardList className="DocCardList--no-description" />


@ -0,0 +1,17 @@
---
title: On the Tools - What We Use and Why
slug: /handbook/engineering-exellence/one-the-tools-what-we-use-and-why
description: On the Tools - What We Use and Why
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,17 @@
---
title: Jan Choices - Why FOSS and Why C++
slug: /handbook/engineering-exellence/jan-choices
description: Jan Choices - Why FOSS and Why C++
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,17 @@
---
title: Engineering Processes - From Plan to Launch
slug: /handbook/engineering-exellence/engineering-processes
description: Engineering Processes - From Plan to Launch
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,17 @@
---
title: Data Management and Deployment Strategies
slug: /handbook/engineering-exellence/data-management-and-deployment-strategies
description: Data Management and Deployment Strategies
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,21 @@
---
title: Engineering Excellence
slug: /handbook/engineering-exellence
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---
import DocCardList from "@theme/DocCardList";
<DocCardList className="DocCardList--no-description" />


@ -0,0 +1,17 @@
---
title: How Do We Know What to Work On?
slug: /handbook/product-and-community/how-dowe-know-what-to-work-on
description: How Do We Know What to Work On?
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,17 @@
---
title: Our OKRs
slug: /handbook/product-and-community/our-okrs
description: Our OKRs
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,17 @@
---
title: Approaches to Beta Testing and User Engagement
slug: /handbook/product-and-community/approaches-to-beta-testing-and-user-engagement
description: Approaches to Beta Testing and User Engagement
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,21 @@
---
title: Product and Community
slug: /handbook/product-and-community
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---
import DocCardList from "@theme/DocCardList";
<DocCardList className="DocCardList--no-description" />


@ -0,0 +1,17 @@
---
title: Jan's Pivot and Journey So Far
slug: /handbook/from-spaghetti-flinging-to-strategy/jan-pivot-and-journey-so-far
description: Jan's Pivot and Journey So Far
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,17 @@
---
title: ESOP Philosophy
slug: /handbook/from-spaghetti-flinging-to-strategy/esop-philosophy
description: ESOP Philosophy
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,17 @@
---
title: How We GTM
slug: /handbook/from-spaghetti-flinging-to-strategy/how-we-gtm
description: How We GTM
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,21 @@
---
title: From Spaghetti Flinging to Strategy
slug: /handbook/from-spaghetti-flinging-to-strategy
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---
import DocCardList from "@theme/DocCardList";
<DocCardList className="DocCardList--no-description" />


@ -0,0 +1,17 @@
---
title: How to Get Involved and FAQ
slug: /handbook/contributing-to-jan/how-to-get-involved-and-faq
description: How to Get Involved and FAQ
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,17 @@
---
title: Feedback Channels / Where to Get Help / Use Your Voice
slug: /handbook/contributing-to-jan/feedback-channels
description: Feedback Channels / Where to Get Help / Use Your Voice
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---


@ -0,0 +1,21 @@
---
title: Contributing to Jan
slug: /handbook/contributing-to-jan
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
handbook,
]
---
import DocCardList from "@theme/DocCardList";
<DocCardList className="DocCardList--no-description" />


@ -0,0 +1,146 @@
---
title: Engineering
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
]
---
## Connecting to Rigs
### Pritunl Setup
1. **Install Pritunl**: [Download here](https://client.pritunl.com/#install)
2. **Import .ovpn file**
3. **VSCode**: Install the "Remote-SSH" extension to connect to the rigs
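Once the VPN tunnel is up, Remote-SSH reads standard `~/.ssh/config` entries. A minimal example (the host alias, IP, and key path below are placeholders, not real rig addresses):

```
Host rig-a100
    HostName 10.0.0.12            # internal IP reachable over the Pritunl tunnel
    User ubuntu
    IdentityFile ~/.ssh/id_ed25519
```

With this in place, the rig appears as `rig-a100` in VSCode's Remote-SSH host picker.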
### Llama.cpp Setup
1. **Clone Repo**: `git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp`
2. **Build**:
```bash
mkdir build && cd build
cmake .. -DLLAMA_CUBLAS=ON -DLLAMA_CUDA_F16=ON -DLLAMA_CUDA_MMV_Y=8
cmake --build . --config Release
```
3. **Download Model:**
```bash
cd ../models && wget https://huggingface.co/TheBloke/Llama-2-7B-GGUF/resolve/main/llama-2-7b.Q8_0.gguf
```
4. **Run:**
```bash
cd ../build/bin/
./main -m ./models/llama-2-7b.Q8_0.gguf -p "Writing a thesis proposal can be done in 10 simple steps:\nStep 1:" -n 2048 -e -ngl 100 -t 48
```
The main llama.cpp CLI arguments are listed below:
| Short Option | Long Option | Param Value | Description |
| --------------- | --------------------- | ----------- | ---------------------------------------------------------------- |
| `-h` | `--help` | | Show this help message and exit |
| `-i` | `--interactive` | | Run in interactive mode |
| | `--interactive-first` | | Run in interactive mode and wait for input right away |
| | `-ins`, `--instruct` | | Run in instruction mode (use with Alpaca models) |
| `-r` | `--reverse-prompt` | `PROMPT` | Run in interactive mode and poll user input upon seeing `PROMPT` |
| | `--color` | | Colorise output to distinguish prompt and user input from generations |
| **Generations** | | | |
| `-s` | `--seed` | `SEED` | Seed for random number generator |
| `-t` | `--threads` | `N` | Number of threads to use during computation |
| `-p` | `--prompt` | `PROMPT` | Prompt to start generation with |
| | `--random-prompt` | | Start with a randomized prompt |
| | `--in-prefix` | `STRING` | String to prefix user inputs with |
| `-f` | `--file` | `FNAME` | Prompt file to start generation |
| `-n` | `--n_predict` | `N` | Number of tokens to predict |
| | `--top_k` | `N` | Top-k sampling |
| | `--top_p` | `N` | Top-p sampling |
| | `--repeat_last_n` | `N` | Last n tokens to consider for penalization |
| | `--repeat_penalty` | `N` | Penalize repeat sequence of tokens |
| `-c` | `--ctx_size` | `N` | Size of the prompt context |
| | `--ignore-eos` | | Ignore end of stream token and continue generating |
| | `--memory_f32` | | Use `f32` instead of `f16` for memory key+value |
| | `--temp` | `N` | Temperature |
| | `--n_parts` | `N` | Number of model parts |
| `-b` | `--batch_size` | `N` | Batch size for prompt processing |
| | `--perplexity` | | Compute perplexity over the prompt |
| | `--keep` | | Number of tokens to keep from the initial prompt |
| | `--mlock` | | Force system to keep model in RAM |
| | `--mtest` | | Determine the maximum memory usage |
| | `--verbose-prompt` | | Print prompt before generation |
| `-m` | `--model` | `FNAME` | Model path |
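To build intuition for the sampling flags above (`--temp`, `--top_k`, `--top_p`), here is a small, self-contained Python sketch of how those filters narrow the candidate token distribution. It is illustrative only, not llama.cpp's actual implementation:

```python
import math

def sample_filter(logits, top_k=40, top_p=0.95, temp=0.8):
    """Return the (token, probability) candidates that survive
    temperature scaling, then top-k, then top-p (nucleus) filtering."""
    # Temperature scaling: lower temp sharpens the distribution.
    scaled = {tok: l / temp for tok, l in logits.items()}
    # Numerically stable softmax over the scaled logits.
    m = max(scaled.values())
    exps = {tok: math.exp(l - m) for tok, l in scaled.items()}
    z = sum(exps.values())
    probs = sorted(((t, e / z) for t, e in exps.items()),
                   key=lambda kv: kv[1], reverse=True)
    # Top-k: keep only the k most likely tokens.
    probs = probs[:top_k]
    # Top-p: keep the smallest prefix whose cumulative mass reaches top_p.
    kept, cum = [], 0.0
    for tok, p in probs:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break
    return kept

candidates = sample_filter(
    {"the": 5.0, "a": 4.0, "step": 2.0, "banana": -3.0},
    top_k=3, top_p=0.9, temp=1.0)
print([tok for tok, _ in candidates])  # unlikely tokens are filtered out
```

The final sampling step would then draw one token from the surviving candidates (renormalized), which is why a low `--temp` or small `--top_k` makes generations more deterministic.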
### TensorRT-LLM Setup
#### **Docker and TensorRT-LLM build**
> Note: Run these commands with admin (root) permissions so the Docker build works correctly
1. **Docker Image:**
```bash
sudo make -C docker build
```
2. **Run Container:**
```bash
sudo make -C docker run
```
Once inside the container, build TensorRT-LLM from source:
3. **Build:**
```bash
# To build the TensorRT-LLM code.
python3 ./scripts/build_wheel.py --trt_root /usr/local/tensorrt
# Deploy TensorRT-LLM in your environment.
pip install ./build/tensorrt_llm*.whl
```
> Note: You can specify the GPU architecture (e.g. Ada for the RTX 4090) to reduce compilation time.
> The list of supported architectures can be found in the `CMakeLists.txt` file.
```bash
python3 ./scripts/build_wheel.py --cuda_architectures "89-real;90-real"
```
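The `--cuda_architectures` string encodes the GPU's CUDA compute capability (`89-real` means 8.9). As a quick reference, a tiny helper (card list taken from NVIDIA's published compute capability tables) might look like:

```python
# CUDA compute capability per GPU, per NVIDIA's published tables.
COMPUTE_CAPABILITY = {
    "RTX 3090": (8, 6),   # Ampere
    "A100": (8, 0),       # Ampere (data center)
    "RTX 4090": (8, 9),   # Ada Lovelace
    "H100": (9, 0),       # Hopper
}

def arch_flag(gpu: str) -> str:
    """Format a GPU's compute capability as a --cuda_architectures entry."""
    major, minor = COMPUTE_CAPABILITY[gpu]
    return f"{major}{minor}-real"

# e.g. building for both a 4090 rig and an H100 rig:
print(";".join(arch_flag(g) for g in ("RTX 4090", "H100")))
```

So the `"89-real;90-real"` example above targets Ada (RTX 4090) and Hopper (H100) cards specifically, instead of compiling kernels for every supported architecture.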
#### Running TensorRT-LLM
1. **Requirements:**
```bash
pip install -r examples/bloom/requirements.txt && git lfs install
```
2. **Download Weights:**
```bash
cd examples/llama && rm -rf ./llama/7B && mkdir -p ./llama/7B && git clone https://huggingface.co/NousResearch/Llama-2-7b-hf ./llama/7B
```
3. **Build Engine:**
```bash
python build.py --model_dir ./llama/7B/ --dtype float16 --remove_input_padding --use_gpt_attention_plugin float16 --enable_context_fmha --use_gemm_plugin float16 --use_weight_only --output_dir ./llama/7B/trt_engines/weight_only/1-gpu/
```
4. **Run Inference:**
```bash
python3 run.py --max_output_len=2048 --tokenizer_dir ./llama/7B/ --engine_dir=./llama/7B/trt_engines/weight_only/1-gpu/ --input_text "Writing a thesis proposal can be done in 10 simple steps:\nStep 1:"
```
For the TensorRT-LLM CLI arguments, see `run.py`.


@ -1,6 +1,5 @@
---
title: Onboarding
slug: /handbook
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
[


@ -1,122 +0,0 @@
---
title: Engineering
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords: [Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model ]
---


@ -59,15 +59,9 @@ const sidebars = {
id: "about/about",
},
{
type: "category",
type: "doc",
label: "Company Handbook",
collapsible: true,
collapsed: false,
items: [
"handbook/onboarding",
"handbook/product",
"handbook/engineering",
],
id: "handbook/overview",
},
{
type: "link",
@ -75,6 +69,13 @@ const sidebars = {
href: "https://janai.bamboohr.com/careers",
},
],
handbookSidebar: [
{
type: "autogenerated",
dirName: "handbook",
},
],
};
module.exports = sidebars;