feat: reconfigure blog sidebar

hieu-jan 2024-03-02 15:02:36 +09:00
parent 4d1017e40c
commit a231c4f662
11 changed files with 927 additions and 907 deletions


@@ -1,24 +1,24 @@
---
title: 'RAG is not enough: Lessons from Beating GPT-3.5 on Specialized Tasks with Mistral 7B'
description: 'Creating Open Source Alternatives to Outperform ChatGPT'
-slug: /surpassing-chatgpt-with-open-source-alternatives
+slug: /blog/surpassing-chatgpt-with-open-source-alternatives
tags: [Open Source ChatGPT Alternatives, Outperform ChatGPT]
authors:
  - name: Rex Ha
    title: LLM Researcher & Content Writer
    url: https://github.com/hahuyhoang411
    image_url: https://avatars.githubusercontent.com/u/64120343?v=4
    email: rex@jan.ai
  - name: Nicole Zhu
    title: Co-Founder
    url: https://github.com/0xsage
    image_url: https://avatars.githubusercontent.com/u/69952136?v=4
    email: nicole@jan.ai
  - name: Alan Dao
    title: AI Engineer
    url: https://github.com/tikikun
    image_url: https://avatars.githubusercontent.com/u/22268502?v=4
    email: alan@jan.ai
---

## Abstract
@@ -35,9 +35,9 @@ Problems still arise with catastrophic forgetting in general tasks, commonly obs

![Mistral vs LLama vs Gemma](assets/mistral-comparasion.png)
_Figure 1. Mistral 7B excels in benchmarks, ranking among the top foundational models._

_Note: we are not sponsored by the Mistral team. Though many folks in their community do like to run Mistral locally using our desktop client - [Jan](https://jan.ai/)._

## Cost-Effectively Improving the Base Model

@@ -45,7 +45,7 @@ Mistral alone has known, poor math capabilities, which we needed for our highly

![Merged model vs finetuned models](assets/stealth-comparasion.png)
_Figure 2: The merged model, Stealth, doubles the mathematical capabilities of its foundational model while retaining the performance in other tasks._

We found merging models is quick and cost-effective, enabling fast adjustments based on the result of each iteration.
@@ -71,15 +71,15 @@ With the base model ready, we started on our specific use case.

Jan is an open-source & bootstrapped project - at one point during our unanticipated growth, we received 1 customer support ticket per minute, with no one to handle customer service.

So, we directed our efforts toward training a model to answer user questions based on existing technical documentation.

Specifically, we trained it on Nitro [docs](https://nitro.jan.ai/docs). For context, Nitro is the default inference engine for Jan. It's a serious server implementation of LlamaCPP, written in C++, with multimodal, queues, and other production-level server capabilities.

It made an interesting corpus because it was rife with post-2023 technical jargon, edge cases, and poor informational layout.

## Generating a Training Dataset for GPT-4

The first step was to transform Nitro's unstructured format into a synthetic Q&A dataset designed for [instruction tuning](https://arxiv.org/pdf/2109.01652.pdf).

The text was split into chunks of 300-token segments with 30-token overlaps. This helped to avoid a [lost-in-the-middle](https://arxiv.org/abs/2307.03172) problem where the LLM can't use context efficiently to answer given questions.
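The chunking script itself is not part of this commit, so the following is only a rough sketch of 300-token windows with 30-token overlaps, assuming a Hugging Face tokenizer; the model id and input path are placeholders, not taken from this repo.

```python
# Rough illustration of 300-token chunks with 30-token overlaps (not the
# team's actual script; the model id and input path below are placeholders).
from transformers import AutoTokenizer

def chunk_text(text: str, tokenizer, chunk_size: int = 300, overlap: int = 30) -> list[str]:
    """Split `text` into windows of `chunk_size` tokens, each sharing
    `overlap` tokens with the previous window."""
    ids = tokenizer.encode(text, add_special_tokens=False)
    stride = chunk_size - overlap
    chunks = []
    for start in range(0, len(ids), stride):
        chunks.append(tokenizer.decode(ids[start:start + chunk_size]))
        if start + chunk_size >= len(ids):
            break
    return chunks

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")  # placeholder tokenizer
docs_text = open("nitro-docs.md", encoding="utf-8").read()              # placeholder input file
print(len(chunk_text(docs_text, tokenizer)))
```

With a 30-token overlap, text near a chunk boundary appears in two adjacent chunks, which reduces the chance that an answer-bearing span is cut in half.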
@@ -87,7 +87,7 @@ The chunks were then given to GPT-4 with 8k context length to generate 3800 Q&A

## Training

The training was done with supervised finetuning (SFT) from the [Hugging Face's alignment handbook](https://github.com/huggingface/alignment-handbook) based on the [Huggingface's Zephyr Beta](https://github.com/huggingface/alignment-handbook/tree/main/recipes/zephyr-7b-beta) guidelines.

We used consumer-grade, dual Nvidia RTX 4090s for the training. The end-to-end training took 18 minutes. We found optimal hyperparameters in LoRA for this specific task to be `r = 256` and `alpha = 512`.
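For readers unfamiliar with those hyperparameters, here is a hedged sketch of how `r = 256` and `alpha = 512` would appear in a PEFT `LoraConfig`; the target modules, dropout, and starting checkpoint are illustrative assumptions, not values read from the alignment-handbook recipe used here.

```python
# Illustrative PEFT/LoRA setup using the r and alpha values quoted above.
# Target modules, dropout, and the starting checkpoint are assumptions,
# not settings taken from this commit or the alignment-handbook recipe.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

lora_config = LoraConfig(
    r=256,           # LoRA rank reported in the post
    lora_alpha=512,  # LoRA scaling factor reported in the post
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

base = AutoModelForCausalLM.from_pretrained("jan-hq/stealth-v1.3")  # assumed starting checkpoint
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # sanity check before handing off to the SFT trainer
```

In PEFT, the adapter update is scaled by `lora_alpha / r`, so this configuration applies a scaling factor of 2.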
@@ -95,7 +95,7 @@ This final model is publicly available at https://huggingface.co/jan-hq/nitro-v1

![Using LLM locally](assets/nitro-on-jan.png)
_Figure 3. Using the new finetuned model in [Jan](https://jan.ai/)_

## Improving Results With Rag
@@ -109,18 +109,18 @@ We curated a new set of [50 multiple-choice questions](https://github.com/janhq/

![Opensource model outperforms GPT](assets/rag-comparasion.png)
_Figure 4. Comparison between fine-tuned model and OpenAI's GPT._

**Results**

| Approach                                                                             | Performance |
| ------------------------------------------------------------------------------------ | ----------- |
| GPT-3.5 with RAG                                                                     | 56.7%       |
| GPT-4 with RAG                                                                       | 64.3%       |
| Merged 7B Model ([Stealth 7B](https://huggingface.co/jan-hq/stealth-v1.3)) with RAG  | 47.7%       |
| Finetuned 7B Model (Nitro 7B) with RAG                                               | 57.8%       |

This indicates that with task-specific training, we can improve an open-source, Small Language Model to the level of GPT-3.5 on domain knowledge.

Notably, the finetuned with RAG approach also demonstrated more consistency across benchmarking, as indicated by its lower standard deviation.
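The retrieval pipeline behind the "with RAG" rows is not included in this diff; below is only a generic retrieve-then-prompt sketch for such multiple-choice questions, with the embedding model, top-k, sample chunks, and prompt wording all assumed.

```python
# Generic retrieve-then-prompt sketch (not the post's actual pipeline).
# Embedding model, k, sample chunks, and prompt wording are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

chunks = [
    "Nitro is the default inference engine for Jan.",
    "Nitro is a server implementation of LlamaCPP, written in C++.",
]  # stand-ins for the real documentation chunks

embedder = SentenceTransformer("all-MiniLM-L6-v2")
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

def build_prompt(question: str, choices: str, k: int = 2) -> str:
    """Retrieve the k most similar chunks and prepend them to an MCQ prompt."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    top = np.argsort(chunk_vecs @ q_vec)[-k:][::-1]
    context = "\n".join(chunks[i] for i in top)
    return (
        f"Answer using only the context.\n\nContext:\n{context}\n\n"
        f"Question: {question}\nChoices:\n{choices}\nReply with a single letter."
    )

print(build_prompt("What is Nitro?", "A) A GUI\nB) An inference engine\nC) A dataset"))
```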
@@ -134,18 +134,18 @@ A full research report with more statistics can be found at https://github.com/j

## References

[1] Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, Quoc V. Le. Finetuned Language Models Are Zero-Shot Learners. _arXiv preprint arXiv:2109.01652_, 2021. URL: https://arxiv.org/abs/2109.01652

[2] Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, Dongmei Zhang. WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct. _arXiv preprint arXiv:2308.09583_, 2023. URL: https://arxiv.org/abs/2308.09583

[3] Luo, Y., Yang, Z., Meng, F., Li, Y., Zhou, J., & Zhang, Y. An Empirical Study of Catastrophic Forgetting in Large Language Models During Continual Fine-tuning. _arXiv preprint arXiv:2308.08747_, 2023. URL: https://arxiv.org/abs/2308.08747

[4] Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang. WizardCoder: Empowering Code Large Language Models with Evol-Instruct. _arXiv preprint arXiv:2306.08568_, 2023. URL: https://arxiv.org/abs/2306.08568

[5] SciPhi-AI, Agent Search. GitHub. URL: https://github.com/SciPhi-AI/agent-search

[6] Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang. "Lost in the Middle: How Language Models Use Long Contexts." _arXiv preprint arXiv:2307.03172_, 2023. URL: https://arxiv.org/abs/2307.03172

[7] Luo, H., Sun, Q., Xu, C., Zhao, P., Lou, J., Tao, C., Geng, X., Lin, Q., Chen, S., & Zhang, D. WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct. _arXiv preprint arXiv:2308.09583_, 2023. URL: https://arxiv.org/abs/2308.09583

[8] nlpxucan et al., WizardLM. GitHub. URL: https://github.com/nlpxucan/WizardLM


@@ -1,7 +1,7 @@
---
title: "Post Mortem: Bitdefender False Positive Flag"
description: "10th January 2024, Jan's 0.4.4 Release on Windows triggered Bitdefender to incorrectly flag it as infected with Gen:Variant.Tedy.258323, leading to automatic quarantine warnings on users' computers."
-slug: /postmortems/january-10-2024-bitdefender-false-positive-flag
+slug: /blog/postmortems/january-10-2024-bitdefender-false-positive-flag
tags: [Postmortem]
---

Four image files changed with identical before/after dimensions and sizes: 64 KiB, 226 KiB, 98 KiB, and 74 KiB.


@@ -1,36 +1,36 @@
// @ts-check
// Note: type annotations allow type checking and IDEs autocompletion
require('dotenv').config()

const darkCodeTheme = require('prism-react-renderer/themes/dracula')

/** @type {import('@docusaurus/types').Config} */
const config = {
  title: 'Jan',
  tagline: 'Run your own AI',
  favicon: 'img/favicon.ico',

  // Set the production url of your site here
  url: 'https://jan.ai',
  // Set the /<baseUrl>/ pathname under which your site is served
  // For GitHub pages deployment, it is often '/<projectName>/'
  baseUrl: '/',

  // GitHub pages deployment config.
  // If you aren't using GitHub pages, you don't need these.
  organizationName: 'janhq', // Usually your GitHub org/user name.
  projectName: 'jan', // Usually your repo name.

  onBrokenLinks: 'warn',
  onBrokenMarkdownLinks: 'warn',
  trailingSlash: true,

  // Even if you don't use internalization, you can use this field to set useful
  // metadata like html lang. For example, if your site is Chinese, you may want
  // to replace "en" with "zh-Hans".
  i18n: {
    defaultLocale: 'en',
    locales: ['en'],
  },

  markdown: {

@@ -41,37 +41,37 @@ const config = {
  // Plugins we added
  plugins: [
    'docusaurus-plugin-sass',
    async function myPlugin(context, options) {
      return {
        name: 'docusaurus-tailwindcss',
        configurePostCss(postcssOptions) {
          // Appends TailwindCSS and AutoPrefixer.
          postcssOptions.plugins.push(require('tailwindcss'))
          postcssOptions.plugins.push(require('autoprefixer'))
          return postcssOptions
        },
      }
    },
    [
      'posthog-docusaurus',
      {
        apiKey: process.env.POSTHOG_PROJECT_API_KEY || 'XXX',
        appUrl: process.env.POSTHOG_APP_URL || 'XXX', // optional
        enableInDevelopment: false, // optional
      },
    ],
    [
      '@docusaurus/plugin-client-redirects',
      {
        redirects: [
          {
            from: '/troubleshooting/failed-to-fetch',
            to: '/troubleshooting/somethings-amiss',
          },
          {
            from: '/guides/troubleshooting/gpu-not-used/',
            to: '/troubleshooting/gpu-not-used',
          },
        ],
      },

@@ -81,35 +81,35 @@ const config = {
  // The classic preset will relay each option entry to the respective sub plugin/theme.
  presets: [
    [
      '@docusaurus/preset-classic',
      {
        // Will be passed to @docusaurus/plugin-content-docs (false to disable)
        docs: {
          routeBasePath: '/',
          sidebarPath: require.resolve('./sidebars.js'),
          editUrl: 'https://github.com/janhq/jan/tree/main/docs',
          showLastUpdateAuthor: true,
          showLastUpdateTime: true,
        },
        // Will be passed to @docusaurus/plugin-content-sitemap (false to disable)
        sitemap: {
          changefreq: 'daily',
          priority: 1.0,
          ignorePatterns: ['/tags/**'],
          filename: 'sitemap.xml',
        },
        // Will be passed to @docusaurus/plugin-content-blog (false to disable)
-        blog: {
-          blogSidebarTitle: "All Posts",
-          blogSidebarCount: "ALL",
-        },
+        // blog: {
+        //   blogSidebarTitle: "All Posts",
+        //   blogSidebarCount: "ALL",
+        // },
        // Will be passed to @docusaurus/theme-classic.
        theme: {
          customCss: require.resolve('./src/styles/main.scss'),
        },
        // GTM is always inactive in development and only active in production to avoid polluting the analytics statistics.
        googleTagManager: {
          containerId: process.env.GTM_ID || 'XXX',
        },
        // Will be passed to @docusaurus/plugin-content-pages (false to disable)
        // pages: {},

@@ -117,17 +117,17 @@ const config = {
    ],
    // Redoc preset
    [
      'redocusaurus',
      {
        specs: [
          {
            spec: 'openapi/jan.yaml', // can be local file, url, or parsed json object
            route: '/api-reference/', // path where to render docs
          },
        ],
        theme: {
          primaryColor: '#1a73e8',
          primaryColorDark: '#1a73e8',
          options: {
            requiredPropsFirst: true,
            noAutoAuth: true,

@@ -140,10 +140,10 @@ const config = {
  // Docs: https://docusaurus.io/docs/api/themes/configuration
  themeConfig: {
    image: 'img/og-image.png',
    // Only for react live
    liveCodeBlock: {
      playgroundPosition: 'bottom',
    },
    docs: {
      sidebar: {

@@ -153,89 +153,89 @@ const config = {
    },
    // Algolia Search Configuration
    algolia: {
      appId: process.env.ALGOLIA_APP_ID || 'XXX',
      apiKey: process.env.ALGOLIA_API_KEY || 'XXX',
      indexName: 'jan_docs',
      contextualSearch: true,
      insights: true,
    },
    // SEO Docusarus
    metadata: [
      {
        name: 'description',
        content:
          'Jan runs 100% offline on your computer, utilizes open-source AI models, prioritizes privacy, and is highly customizable.',
      },
      {
        name: 'keywords',
        content:
          'Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model ',
      },
      { name: 'robots', content: 'index, follow' },
      {
        property: 'og:title',
        content: 'Jan | Open-source ChatGPT Alternative',
      },
      {
        property: 'og:description',
        content:
          'Jan runs 100% offline on your computer, utilizes open-source AI models, prioritizes privacy, and is highly customizable.',
      },
      {
        property: 'og:image',
        content: 'https://jan.ai/img/og-image.png',
      },
      { property: 'og:type', content: 'website' },
      { property: 'twitter:card', content: 'summary_large_image' },
      { property: 'twitter:site', content: '@janframework' },
      {
        property: 'twitter:title',
        content: 'Jan | Open-source ChatGPT Alternative',
      },
      {
        property: 'twitter:description',
        content:
          'Jan runs 100% offline on your computer, utilizes open-source AI models, prioritizes privacy, and is highly customizable.',
      },
      {
        property: 'twitter:image',
        content: 'https://jan.ai/img/og-image.png',
      },
    ],
    headTags: [
      // Declare a <link> preconnect tag
      {
        tagName: 'link',
        attributes: {
          rel: 'preconnect',
          href: 'https://jan.ai/',
        },
      },
      // Declare some json-ld structured data
      {
        tagName: 'script',
        attributes: {
          type: 'application/ld+json',
        },
        innerHTML: JSON.stringify({
          '@context': 'https://schema.org/',
          '@type': 'localAI',
          'name': 'Jan',
          'description':
            'Jan runs 100% offline on your computer, utilizes open-source AI models, prioritizes privacy, and is highly customizable.',
          'keywords':
            'Jan AI, Jan, ChatGPT alternative, local AI, private AI, conversational AI, no-subscription fee, large language model ',
          'applicationCategory': 'BusinessApplication',
          'operatingSystem': 'Multiple',
          'url': 'https://jan.ai/',
        }),
      },
    ],
    navbar: {
      title: 'Jan',
      logo: {
        alt: 'Jan Logo',
        src: 'img/logo.svg',
      },
      items: [
        // Navbar Left

@@ -246,38 +246,38 @@ const config = {
        //   label: "About",
        // },
        {
          type: 'dropdown',
          label: 'About',
          position: 'left',
          items: [
            {
              type: 'doc',
              label: 'What is Jan?',
              docId: 'about/about',
            },
            {
              type: 'doc',
              label: 'Who we are',
              docId: 'team/team',
            },
            {
              type: 'doc',
              label: 'Wall of love',
              docId: 'wall-of-love',
            },
          ],
        },
        {
          type: 'docSidebar',
          sidebarId: 'productSidebar',
          position: 'left',
          label: 'Product',
        },
        {
          type: 'docSidebar',
          sidebarId: 'ecosystemSidebar',
          position: 'left',
          label: 'Ecosystem',
        },
        // {
        //   type: "docSidebar",

@@ -287,35 +287,36 @@ const config = {
        // },
        // Navbar right
        {
          type: 'dropdown',
          label: 'Docs',
          position: 'right',
          items: [
            {
              type: 'docSidebar',
              sidebarId: 'guidesSidebar',
              label: 'User Guide',
            },
            {
              type: 'docSidebar',
              sidebarId: 'developerSidebar',
              label: 'Developer',
            },
            {
              to: '/api-reference',
              label: 'API Reference',
            },
            {
              type: 'docSidebar',
              sidebarId: 'docsSidebar',
              label: 'Framework',
            },
          ],
        },
        {
-          to: "blog",
-          label: "Blog",
-          position: "right",
+          type: 'docSidebar',
+          sidebarId: 'blogSidebar',
+          position: 'right',
+          label: 'Blog',
        },
      ],
    },

@@ -323,21 +324,21 @@ const config = {
      theme: darkCodeTheme,
      darkTheme: darkCodeTheme,
      additionalLanguages: [
        'python',
        'powershell',
        'bash',
        'json',
        'javascript',
        'jsx',
      ],
    },
    colorMode: {
      defaultMode: 'light',
      disableSwitch: false,
      respectPrefersColorScheme: false,
    },
  },
  themes: ['@docusaurus/theme-live-codeblock', '@docusaurus/theme-mermaid'],
}

module.exports = config


@@ -18,6 +18,7 @@
    "@docsearch/react": "3",
    "@docusaurus/core": "^3.0.0",
    "@docusaurus/plugin-client-redirects": "^3.0.0",
+   "@docusaurus/plugin-content-blog": "^3.0.0",
    "@docusaurus/plugin-content-docs": "^3.0.0",
    "@docusaurus/preset-classic": "^3.0.0",
    "@docusaurus/theme-live-codeblock": "^3.0.0",


@@ -15,70 +15,70 @@
const sidebars = {
  aboutSidebar: [
    {
      type: 'category',
      label: 'What is Jan?',
      link: { type: 'doc', id: 'about/about' },
      items: [
        //"about/roadmap",
        'community/community',
      ],
    },
    {
      type: 'category',
      label: 'Who we are',
      link: { type: 'doc', id: 'team/team' },
      items: ['team/join-us', 'team/contributor-program'],
    },
    'wall-of-love',
    {
      type: 'category',
      label: 'How We Work',
      link: { type: 'doc', id: 'how-we-work' },
      items: [
        'how-we-work/strategy/strategy',
        'how-we-work/project-management/project-management',
        {
          type: 'category',
          label: 'Engineering',
          link: { type: 'doc', id: 'how-we-work/engineering/engineering' },
          items: [
            'how-we-work/engineering/ci-cd',
            'how-we-work/engineering/qa',
          ],
        },
        'how-we-work/product-design/product-design',
        'how-we-work/analytics/analytics',
        'how-we-work/website-docs/website-docs',
      ],
    },
    'acknowledgements',
  ],
  productSidebar: [
    {
      type: 'category',
      label: 'Platforms',
      collapsible: false,
      items: [
        'platforms/desktop',
        'server-suite/home-server',
        // "server-suite/enterprise",
        // "platforms/mobile",
        // "platforms/hub",
      ],
    },
    {
      type: 'category',
      collapsible: true,
      collapsed: false,
      label: 'Features',
      link: { type: 'doc', id: 'features/features' },
      items: [
        'features/local',
        'features/remote',
        'features/api-server',
        'features/extensions-framework',
        'features/agents-framework',
        'features/data-security',
      ],
    },
    // NOTE: Jan Server Suite will be torn out into it's own section in the future

@@ -96,78 +96,84 @@ const sidebars = {
  ],
  solutionSidebar: [
    {
      type: 'category',
      label: 'Use Cases',
      collapsed: true,
      collapsible: true,
      items: ['solutions/ai-pc', 'solutions/chatgpt-alternative'],
    },
    {
      type: 'category',
      label: 'Sectors',
      collapsed: true,
      collapsible: true,
      items: [
        'solutions/finance',
        'solutions/healthcare',
        'solutions/legal',
        'solutions/government',
      ],
    },
    {
      type: 'category',
      label: 'Organization Type',
      collapsed: true,
      collapsible: true,
      items: [
        'solutions/developers',
        'solutions/consultants',
        'solutions/startups',
        'solutions/enterprises',
      ],
    },
  ],
  pricingSidebar: ['pricing/pricing'],
  ecosystemSidebar: [
    'ecosystem/ecosystem',
    {
      type: 'category',
      label: 'Partners',
      link: { type: 'doc', id: 'partners/partners' },
      collapsible: true,
      items: ['partners/become-a-partner'],
    },
    {
      type: 'category',
      label: 'Integrations',
      link: { type: 'doc', id: 'integrations' },
      items: [
        {
          type: 'autogenerated',
          dirName: 'integrations',
        },
      ],
    },
  ],
  guidesSidebar: [
    {
      type: 'autogenerated',
      dirName: 'guides',
    },
  ],
  developerSidebar: [
    {
      type: 'autogenerated',
      dirName: 'developer',
    },
  ],
  docsSidebar: [
    {
      type: 'autogenerated',
      dirName: 'docs',
    },
  ],
+  blogSidebar: [
+    {
+      type: 'autogenerated',
+      dirName: 'blog',
+    },
+  ],
}

module.exports = sidebars

File diff suppressed because it is too large.