Enhanced the wording on the overview and the quickstart pages.

Ramon Perez 2025-04-21 12:01:03 -04:00
parent cfdd2f0cf8
commit 642931bb0a
3 changed files with 3364 additions and 94 deletions

docs/bun.lock (new file, 3272 lines): diff suppressed because it is too large.

@@ -25,29 +25,34 @@ import FAQBox from '@/components/FaqBox'
![Jan's Cover Image](./_assets/jan-app.png)
Jan is an AI chat application that runs 100% offline on your desktop & mobile (*coming soon*). Our goal is to make it easy for anyone, with or without coding skills, to download and use AI models with full control and [privacy](https://www.reuters.com/legal/legalindustry/privacy-paradox-with-ai-2023-10-31/).
Jan is powered by [Cortex](https://cortex.so/), our embeddable local AI engine, which provides an OpenAI-compatible API that runs in the background at `https://localhost:1337` (or a custom port). This enables you to power other applications running locally with AI capabilities. For example, you can connect tools like [Continue.dev](https://jan.ai/integrations/coding/vscode) and [Cline](https://cline.bot/), or any OpenAI-compatible app, to Jan and start coding in their supported editors using models hosted in Jan.
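As a minimal sketch of what "OpenAI-compatible" means in practice: any OpenAI-style request body works against the local endpoint. The model id below is a placeholder (use an id from your own model list), and the example uses plain HTTP on the default port; adjust the scheme and port to match your setup.

```python
import json
from urllib.request import Request, urlopen

def build_chat_request(prompt: str,
                       model: str = "llama3.2-3b-instruct",  # placeholder id
                       base_url: str = "http://localhost:1337/v1") -> Request:
    """Build an OpenAI-style chat-completions request for Jan's local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Hello from my editor!")
# With Jan running locally, uncomment to send the request:
# with urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same shape is what tools like Continue.dev or Cline send under the hood when pointed at an OpenAI-compatible base URL.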
<Callout>
Jan doesn't limit you to locally hosted models: you can create an API key with your favorite model provider, add it to Jan via the configuration page, and start talking to your favorite paid models.
</Callout>
### Features
- Download popular open-source LLMs (Llama3, Gemma3, Mistral, and more) from the Hugging Face [Model Hub](./docs/models/manage-models.mdx) or import any GGUF models available locally
- Connect to [cloud model services](/docs/remote-models/openai) (OpenAI, Anthropic, Mistral, Groq, and more)
- [Chat](./docs/threads.mdx) with AI models & [customize their parameters](./docs/models/model-parameters.mdx) via our intuitive interface
- Use our [local API server](https://jan.ai/api-reference) with an OpenAI-equivalent API
- Customize Jan with [extensions](/docs/extensions)
### Philosophy
Jan is built to be [user-owned](about#-user-owned), which means that Jan is:
- Truly open source via the [AGPLv3 license](https://github.com/menloresearch/jan/blob/dev/LICENSE)
- [Local-first](https://www.inkandswitch.com/local-first/): all data is stored locally, following local-first principles
- Runs 100% offline, with privacy by default
- Free choice of AI models, both local and cloud-based
- We do not collect or sell user data. See our [Privacy](/privacy).
@@ -73,95 +78,77 @@ Jan is built on the shoulders of many upstream open-source projects:
## FAQs
<FAQBox title="What is Jan?">
Jan is a customizable AI assistant that runs offline on your computer, a privacy-focused alternative to tools like ChatGPT, Anthropic's Claude, and Google Gemini, with optional cloud AI support.
</FAQBox>
<FAQBox title="How do I get started with Jan?">
Download Jan on your computer, download a model or add an API key for a cloud-based one, and start chatting. For detailed setup instructions, see our [Quick Start](/docs/quickstart) guide.
</FAQBox>
<FAQBox title="Is Jan compatible with my system?">
Jan supports all major operating systems: [Mac](/docs/desktop/mac#compatibility), [Windows](/docs/desktop/windows#compatibility), and [Linux](docs/desktop/linux).

Hardware compatibility includes:
- NVIDIA GPUs (CUDA)
- AMD GPUs (Vulkan)
- Intel Arc GPUs (Vulkan)
- Any GPU with Vulkan support
</FAQBox>
<FAQBox title="How does Jan protect my privacy?">
Jan prioritizes privacy by:
- Running 100% offline with locally-stored data
- Using open-source models that keep your conversations private
- Storing all files and chat history on your device in the [Jan Data Folder](/docs/data-folder)
- Never collecting or selling your data
<Callout type="warning">
When using third-party cloud AI services through Jan, their data policies apply. Check their privacy terms.
</Callout>
You can optionally share anonymous usage statistics to help improve Jan, but your conversations are never shared. See our complete [Privacy Policy](./docs/privacy.mdx).
</FAQBox>
<FAQBox title="What models can I use with Jan?">
- Download optimized models from [Jan Hub](/docs/models/manage-models#1-download-from-jan-hub-recommended)
- Import GGUF models from Hugging Face or your local files
- Connect to cloud providers like OpenAI, Anthropic, Mistral, and Groq (requires your own API keys)
</FAQBox>
<FAQBox title="Is Jan really free? What's the catch?">
Jan is completely free and open-source, with no subscription fees for local models and features. When using cloud-based models (like GPT-4o or Claude 3.7 Sonnet), you pay only the standard rates charged by those providers; we add no markup.
</FAQBox>
<FAQBox title="Can I use Jan offline?">
Yes! Once you've downloaded a local model, Jan works completely offline with no internet connection needed.
</FAQBox>
<FAQBox title="How can I customize or extend Jan?">
Jan has an extensible architecture similar to VSCode and Obsidian. You can build custom features using our [extensions API](/docs/extensions), which powers many of Jan's core features.
</FAQBox>
<FAQBox title="How can I contribute or get community help?">
- Join our [Discord community](https://discord.gg/qSwXFx6Krr) to connect with other users
- Contribute through [GitHub](https://github.com/menloresearch/jan) (no permission needed!)
- Get troubleshooting help in our [Discord](https://discord.com/invite/FTk2MvZwJH) #🆘|get-help channel
- Check our [Troubleshooting](./docs/troubleshooting.mdx) guide for common issues
</FAQBox>
<FAQBox title="Can I self-host Jan?">
Yes! We fully support the self-hosted movement. Either [download Jan](./download.mdx) directly or fork and build from our [GitHub repository](https://github.com/menloresearch/jan).
</FAQBox>
<FAQBox title="What does Jan stand for?">
Jan stands for "Just a Name". We are, admittedly, bad at marketing 😂.
</FAQBox>
<FAQBox title="Are you hiring?">
Yes! We love hiring from our community. Check out our open positions at [Careers](https://menlo.bamboohr.com/careers).
</FAQBox>


@@ -47,7 +47,7 @@ Jan offers various local AI models, from smaller efficient models to larger more
1. Go to **Hub**
2. Browse available models and click on any model to see details about it
3. Choose a model that fits your needs & hardware specifications
4. Click **Download** to begin
<Callout type="info">
Local models run directly on your computer, which means they use your computer's memory (RAM) and processing power. Please choose models carefully based on your hardware specifications ([Mac](/docs/desktop/mac#minimum-requirements), [Windows](/docs/desktop/windows#compatibility), [Linux](/docs/desktop/linux#compatibility)).
</Callout>
@@ -59,38 +59,47 @@ For more model installation methods, please visit [Model Management](/docs/model
<br/>
### Step 3: Turn on GPU Acceleration (Optional)
While the model downloads, let's optimize your hardware setup. If you're on **Windows** or **Linux** and have a compatible graphics card, you can significantly boost model performance by enabling GPU acceleration.
1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Local Engine** > **Llama.cpp**
2. At **llama-cpp Backend**, select a backend. For example, choose `windows-amd64-vulkan` if you have an AMD graphics card. For more info, see [our guide](/docs/local-engines/llama-cpp).
<Callout type="info">
Ensure you have installed all required dependencies and drivers before enabling GPU acceleration. See the **GPU Setup Guide** on [Windows](/docs/desktop/windows#gpu-acceleration) & [Linux](/docs/desktop/linux#gpu-acceleration) for detailed instructions.
</Callout>
<br/>
![Turn on GPU acceleration](./_assets/trouble-shooting-03.png)
### Step 4: Customize Assistant Instructions
Once your model has been downloaded and you're ready to start your first conversation, you can customize how the model should respond by setting specific instructions:
1. In any **Thread**, click the **Assistant** tab in the **right sidebar**
2. Enter your instructions in the **Instructions** field to define how the model should respond. For example: "You are an expert storyteller who writes engaging and imaginative stories for marketing campaigns. You don't follow the herd and rather think outside the box when putting your copywriting skills to the test."
You can modify these instructions at any time during your conversation to adjust a model's behavior for that specific thread. See the detailed guide at [Assistant](/docs/assistants).
<br/>
![Assistant Instruction](./_assets/quick-start-02.png)
<br/>
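For readers curious how this maps onto the local API: assistant instructions like these generally correspond to a "system" message in an OpenAI-style request body. A hypothetical sketch of the equivalent request body (the model id is a placeholder, not a real entry in your model list):

```python
import json

# Instructions from the UI become the "system" message; the user's chat
# message follows as a "user" message. Model id is a placeholder.
instructions = (
    "You are an expert storyteller who writes engaging and imaginative "
    "stories for marketing campaigns."
)
body = {
    "model": "llama3.2-3b-instruct",  # placeholder; use an id from your model list
    "messages": [
        {"role": "system", "content": instructions},
        {"role": "user", "content": "Write a story about our new espresso blend."},
    ],
}
print(json.dumps(body, indent=2))
```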
### Step 5: Start Chatting and Fine-tune Settings
Now that your model is downloaded and instructions are set, you can begin chatting with it. Type your message in the **input field** at the bottom of the thread to start the conversation.
You can further customize your experience by:
- Adjusting the [model parameters](/docs/models/model-parameters) in the **Model** tab in the **right sidebar**
- Trying different models for different tasks by clicking the **model selector** in the **Model** tab or **input field**
- [Creating new threads](/docs/threads#creating-new-thread) with different instructions and model configurations
<br/>
![Chat with a Model](./_assets/model-parameters.png)
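If you drive Jan through its local API rather than the UI, the same kinds of model parameters can be set on the request body. This sketch uses standard OpenAI-style parameter names (assumed, not confirmed against Jan's full parameter list) and a placeholder model id:

```python
import json

# Common OpenAI-style sampling parameters (names assumed from the
# OpenAI-compatible API dialect; check the Model tab for Jan's full list).
params = {
    "temperature": 0.7,  # higher values give more varied, creative output
    "top_p": 0.95,       # nucleus-sampling cutoff
    "max_tokens": 512,   # cap on reply length
}
body = {
    "model": "llama3.2-3b-instruct",  # placeholder; use an id from your model list
    "messages": [{"role": "user", "content": "Draft a tagline for a bakery."}],
    **params,
}
print(json.dumps(body, indent=2))
```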
@@ -99,7 +108,9 @@ You can further customize your experience by:
### Step 6: Connect to cloud models (Optional)
Jan supports both open-source and cloud-based models. You can connect to cloud model providers, including OpenAI (GPT-4o, o1, and more), Anthropic (Claude), Groq, Mistral, and others.
1. Open any **Thread**
2. Click the **Model** tab in the **right sidebar** or the **model selector** in the input field
3. Once the selector pops up, choose the **Cloud** tab