docs: preserve docs and website changes from rp/docs-v0.6.8 with latest dev base

BIN  docs/public/assets/images/changelog/mcp-linear.png  (new file, 170 KiB)
BIN  docs/public/assets/images/changelog/mcplinear2.gif  (new file, 36 MiB)

@@ -11,11 +11,6 @@
    "type": "page",
    "title": "Documentation"
  },
  "cortex": {
    "type": "page",
    "title": "Cortex",
    "display": "hidden"
  },
  "platforms": {
    "type": "page",
    "title": "Platforms",
docs/src/pages/changelog/2025-08-14-general-improvs.mdx  (new file, 77 lines)
@@ -0,0 +1,77 @@
---
title: "Jan v0.6.8: Engine fixes, new MCP tutorials, and cleaner docs"
version: 0.6.8
description: "Llama.cpp stability upgrades, Linear/Todoist MCP tutorials, new model pages (Lucy, Jan‑v1), and docs reorganization"
date: 2025-08-14
ogImage: "/assets/images/changelog/mcplinear2.gif"
---

import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
import { Callout } from 'nextra/components'

<ChangelogHeader title="Jan v0.6.8: Engine fixes, new MCP tutorials, and cleaner docs" date="2025-08-14" ogImage="/assets/images/changelog/mcplinear2.gif" />

## Highlights 🎉

v0.6.8 focuses on stability and real workflows: major llama.cpp hardening, two new MCP productivity tutorials, new model pages, and a cleaner docs structure.

### 🚀 New tutorials & docs

- Linear MCP tutorial: create/update issues, projects, comments, cycles — directly from chat
- Todoist MCP tutorial: add, list, update, complete, and delete tasks via natural language
- New model pages:
  - Lucy (1.7B) — optimized for web_search tool calling
  - Jan‑v1 (4B) — strong SimpleQA (91.1%), solid tool use
- Docs updates:
  - Reorganized landing and Products sections; streamlined QuickStart
  - Ongoing Docs v2 (Astro) migration with handbook, blog, and changelog sections added and then removed

### 🧱 Llama.cpp engine: stability & correctness

- Structured error handling for the llama.cpp extension
- Better argument handling, improved model path resolution, clearer error messages
- Device parsing tests; conditional Vulkan support; support for missing CUDA backends
- AVX2 instruction support check (Mac Intel) for MCP
- Server hang on model load — fixed
- Session management & port allocation moved to the backend for robustness
- Recommended labels in settings; per‑model Jinja template customization
- Tensor buffer type override support
- “Continuous batching” description corrected

### ✨ UX polish

- Thread sorting fixed; assistant dropdown click reliability improved
- Responsive left panel text color; provider logo blur cleanup
- Show toast on download errors; context size error dialog restored
- Prevent accidental message submit for IME users
- Onboarding loop fixed; GPU detection brought back
- Connected MCP servers status stays in sync after JSON edits

### 🔍 Hub & providers

- Hugging Face token respected for repo search and private README visualization
- Deep links and model details fixed
- Factory reset unblocked; special chars in `modelId` handled
- Feature toggle for auto‑updater respected

### 🧪 CI & housekeeping

- Nightly/PR workflow tweaks; clearer API server logs
- Cleaned unused hardware APIs
- Release workflows updated; docs release paths consolidated

### 🤖 Reasoning model fixes

- gpt‑oss “thinking block” rendering fixed
- Reasoning text no longer included in chat completion requests

## Thanks to new contributors

@cmppoon · @shmutalov · @B0sh

---

Update your Jan or [download the latest](https://jan.ai/).

For the complete list of changes, see the [GitHub release notes](https://github.com/menloresearch/jan/releases/tag/v0.6.8).
BIN  docs/src/pages/docs/_assets/chat_jan_v1.png  (new file, 428 KiB)
BIN  docs/src/pages/docs/_assets/creative_bench_jan_v1.png  (new file, 127 KiB)
BIN  docs/src/pages/docs/_assets/download_janv1.png  (new file, 353 KiB)
BIN  docs/src/pages/docs/_assets/enable_mcp.png  (new file, 474 KiB)
BIN  docs/src/pages/docs/_assets/jan_v1_demo.gif  (new file, 7.7 MiB)
BIN  docs/src/pages/docs/_assets/jan_v1_serper.png  (new file, 625 KiB)
BIN  docs/src/pages/docs/_assets/jan_v1_serper1.png  (new file, 930 KiB)
BIN  docs/src/pages/docs/_assets/linear1.png  (new file, 161 KiB)
BIN  docs/src/pages/docs/_assets/linear2.png  (new file, 695 KiB)
BIN  docs/src/pages/docs/_assets/linear3.png  (new file, 232 KiB)
BIN  docs/src/pages/docs/_assets/linear4.png  (new file, 176 KiB)
BIN  docs/src/pages/docs/_assets/linear5.png  (new file, 926 KiB)
BIN  docs/src/pages/docs/_assets/linear6.png  (new file, 175 KiB)
BIN  docs/src/pages/docs/_assets/linear7.png  (new file, 197 KiB)
BIN  docs/src/pages/docs/_assets/linear8.png  (new file, 369 KiB)
BIN  docs/src/pages/docs/_assets/lucy_demo.gif  (new file, 23 MiB)
BIN  docs/src/pages/docs/_assets/mcplinear2.gif  (new file, 36 MiB)
BIN  docs/src/pages/docs/_assets/mcptodoist_extreme.gif  (new file, 3.4 MiB)
BIN  docs/src/pages/docs/_assets/serper_janparams.png  (new file, 248 KiB)
BIN  docs/src/pages/docs/_assets/serper_page.png  (new file, 1021 KiB)
BIN  docs/src/pages/docs/_assets/serper_playground.png  (new file, 600 KiB)
BIN  docs/src/pages/docs/_assets/simpleqa_jan_v1.png  (new file, 212 KiB)
BIN  docs/src/pages/docs/_assets/simpleqa_lucy.png  (new file, 217 KiB)
BIN  docs/src/pages/docs/_assets/todoist1.png  (new file, 247 KiB)
BIN  docs/src/pages/docs/_assets/todoist2.png  (new file, 383 KiB)
BIN  docs/src/pages/docs/_assets/todoist3.png  (new file, 328 KiB)
BIN  docs/src/pages/docs/_assets/todoist4.png  (new file, 216 KiB)
BIN  docs/src/pages/docs/_assets/todoist5.png  (new file, 514 KiB)
BIN  docs/src/pages/docs/_assets/toggle_tools.png  (new file, 586 KiB)
BIN  docs/src/pages/docs/_assets/turn_on_mcp.png  (new file, 209 KiB)
@@ -4,20 +4,16 @@
    "title": "Switcher"
  },
  "index": "Overview",
  "how-to-separator": {
    "title": "HOW TO",
  "getting-started-separator": {
    "title": "GETTING STARTED",
    "type": "separator"
  },
  "quickstart": "QuickStart",
  "desktop": "Install 👋 Jan",
  "threads": "Start Chatting",
  "jan-models": "Use Jan Models",
  "jan-models": "Models",
  "assistants": "Create Assistants",

  "tutorials-separators": {
    "title": "TUTORIALS",
    "type": "separator"
  },
  "remote-models": "Connect to Remote Models",
  "remote-models": "Cloud Providers",
  "mcp-examples": "Tutorials",

  "explanation-separator": {
    "title": "EXPLANATION",
@@ -38,7 +34,6 @@
  },
  "manage-models": "Manage Models",
  "mcp": "Model Context Protocol",
  "mcp-examples": "MCP Examples",

  "localserver": {
    "title": "LOCAL SERVER",
@@ -1,17 +1,19 @@
---
title: Jan
description: Jan is an open-source ChatGPT-alternative and self-hosted AI platform - build and run AI on your own desktop or server.
description: Build, run, and own your AI. From laptop to superintelligence.
keywords:
  [
    Jan,
    Jan AI,
    ChatGPT alternative,
    OpenAI platform alternative,
    local API,
    open superintelligence,
    AI ecosystem,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    self-hosted AI,
    llama.cpp,
    Model Context Protocol,
    MCP,
    GGUF models,
    large language model,
    LLM,
  ]
@@ -24,123 +26,152 @@ import FAQBox from '@/components/FaqBox'

![Jan](./_assets/jan-app-new.png)

## Jan's Goal

Jan is a ChatGPT alternative that runs 100% offline on your desktop and (*soon*) on mobile. Our goal is to make it easy for anyone, with or without coding skills, to download and use AI models with full control and [privacy](https://www.reuters.com/legal/legalindustry/privacy-paradox-with-ai-2023-10-31/).

> Jan's goal is to build superintelligence that you can self-host and use locally.

Jan is powered by [Llama.cpp](https://github.com/ggerganov/llama.cpp), a local AI engine that provides an OpenAI-compatible API running in the background at `http://localhost:1337` (or your custom port) by default. This lets you power all sorts of applications with AI capabilities from your laptop/PC. For example, you can connect local tools like [Continue](https://jan.ai/docs/server-examples/continue-dev) and [Cline](https://cline.bot/) to Jan and power them using your favorite models.
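The local server described above speaks the OpenAI chat-completions dialect, so any OpenAI-style client works against it. A minimal sketch of building such a request with only the standard library; the port matches the default mentioned above, while the model id is an assumption — use whichever model you actually have loaded:

```python
import json
import urllib.request

def build_chat_request(prompt: str,
                       model: str = "llama3.2-3b-instruct",
                       base_url: str = "http://localhost:1337/v1") -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for Jan's local server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Summarize this repo in one line.")
print(req.full_url)  # http://localhost:1337/v1/chat/completions
# urllib.request.urlopen(req) would send it once a model is running in Jan.
```

The same request shape is what Continue and Cline send under the hood once pointed at Jan's base URL.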
## What is Jan?

Jan doesn't limit you to locally hosted models: create an API key with your favorite model provider, add it to Jan on the configuration page, and start talking to your favorite models.

Jan is an open-source AI ecosystem that runs on your hardware. We're building towards open superintelligence - a complete AI platform you actually own.

### Features

### The Ecosystem

- Download popular open-source LLMs (Llama3, Gemma3, Qwen3, and more) from the HuggingFace [Model Hub](./docs/manage-models.mdx) or import any GGUF files (the model format used by llama.cpp) available locally
- Connect to [cloud services](/docs/remote-models/openai) (OpenAI, Anthropic, Mistral, Groq, etc.)
- [Chat](./docs/threads.mdx) with AI models & [customize their parameters](/docs/model-parameters.mdx) via our intuitive interface
- Use our [local API server](https://jan.ai/api-reference) with an OpenAI-equivalent API to power other apps.

**Models**: We build specialized models for real tasks, not general-purpose assistants:
- **Jan-Nano (32k/128k)**: 4B parameters designed for deep research with MCP. The 128k version processes entire papers, codebases, or legal documents in one go
- **Lucy**: 1.7B model that runs agentic web search on your phone. Small enough for CPU, smart enough for complex searches
- **Jan-v1**: 4B model for agentic reasoning and tool use, achieving 91.1% on SimpleQA

### Philosophy

We also integrate the best open-source models - from OpenAI's gpt-oss to community GGUF models on Hugging Face. The goal: make powerful AI accessible to everyone, not just those with server farms.

Jan is built to be [user-owned](about#-user-owned), which means that Jan is:
- Truly open source via the [Apache 2.0 license](https://github.com/menloresearch/jan/blob/dev/LICENSE)
- [Data is stored locally, following one of the many local-first principles](https://www.inkandswitch.com/local-first)
- Internet-optional: Jan can run 100% offline
- Free choice of AI models, both local and cloud-based
- We do not collect or sell user data. See our [Privacy Policy](./privacy).

**Applications**: Jan Desktop runs on your computer today. Web, mobile, and server versions coming in late 2025. Everything syncs, everything works together.

**Tools**: Connect to the real world through [Model Context Protocol (MCP)](./mcp). Design with Canva, analyze data in Jupyter notebooks, control browsers, execute code in E2B sandboxes. Your AI can actually do things, not just talk about them.

<Callout>
You can read more about our [philosophy](/about#philosophy) here.
API keys are optional. No account needed. Just download and run. Bring your own API keys to connect your favorite cloud models.
</Callout>

### Inspirations

### Core Features

Jan is inspired by the concepts of [Calm Computing](https://en.wikipedia.org/wiki/Calm_technology) and the Disappearing Computer.

- **Run Models Locally**: Download any GGUF model from Hugging Face, use OpenAI's gpt-oss models, or connect to cloud providers
- **OpenAI-Compatible API**: Local server at `localhost:1337` works with tools like [Continue](./server-examples/continue-dev) and [Cline](https://cline.bot/)
- **Extend with MCP Tools**: Browser automation, web search, data analysis, design tools - all through natural language
- **Your Choice of Infrastructure**: Run on your laptop, self-host on your servers (soon), or use cloud when you need it

### Growing MCP Integrations

Jan connects to real tools through MCP:
- **Creative Work**: Generate designs with Canva
- **Data Analysis**: Execute Python in Jupyter notebooks
- **Web Automation**: Control browsers with Browserbase and Browser Use
- **Code Execution**: Run code safely in E2B sandboxes
- **Search & Research**: Access current information via Exa, Perplexity, and Octagon
- **More coming**: The MCP ecosystem is expanding rapidly

## Philosophy

Jan is built to be user-owned:
- **Open Source**: Apache 2.0 license - truly free
- **Local First**: Your data stays on your device. Internet is optional
- **Privacy Focused**: We don't collect or sell user data. See our [Privacy Policy](./privacy)
- **No Lock-in**: Export your data anytime. Use any model. Switch between local and cloud

<Callout type="info">
We're building AI that respects your choices. Not another wrapper around someone else's API.
</Callout>

## Quick Start

1. [Download Jan](./quickstart) for your operating system
2. Choose a model - download locally or add cloud API keys
3. Start chatting or connect tools via MCP
4. Build with our [API](https://jan.ai/api-reference)

## Acknowledgements

Jan is built on the shoulders of many open-source projects like:

- [Llama.cpp](https://github.com/ggerganov/llama.cpp/blob/master/LICENSE)
- [Scalar](https://github.com/scalar/scalar)

Jan is built on the shoulders of giants:
- [Llama.cpp](https://github.com/ggerganov/llama.cpp) for inference
- [Model Context Protocol](https://modelcontextprotocol.io) for tool integration
- The open-source community that makes this possible

## FAQs

<FAQBox title="What is Jan?">
Jan is a customizable AI assistant that can run offline on your computer - a privacy-focused alternative to tools like ChatGPT, Anthropic's Claude, and Google Gemini, with optional cloud AI support.
Jan is an open-source AI ecosystem building towards superintelligence you can self-host. Today it's a desktop app that runs AI models locally. Tomorrow it's a complete platform across all your devices.
</FAQBox>

<FAQBox title="How do I get started with Jan?">
Download Jan on your computer, download a model or add an API key for a cloud-based one, and start chatting. For detailed setup instructions, see our [Quick Start](/docs/quickstart) guide.
<FAQBox title="How is this different from other AI platforms?">
Other platforms are models behind APIs you rent. Jan is a complete AI ecosystem you own. Run any model, use real tools through MCP, keep your data private, and never pay subscriptions for local use.
</FAQBox>

<FAQBox title="What models can I use?">
**Jan Models:**
- Jan-Nano (32k/128k) - Deep research with MCP integration
- Lucy - Mobile-optimized agentic search (1.7B)
- Jan-v1 - Agentic reasoning and tool use (4B)

**Open Source:**
- OpenAI's gpt-oss models (120b and 20b)
- Any GGUF model from Hugging Face

**Cloud (with your API keys):**
- OpenAI, Anthropic, Mistral, Groq, and more
</FAQBox>

<FAQBox title="What are MCP tools?">
MCP (Model Context Protocol) lets AI interact with real applications. Instead of just generating text, your AI can create designs in Canva, analyze data in Jupyter, browse the web, and execute code - all through conversation.
</FAQBox>
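In practice, hooking one of these tools up is a small JSON entry in Jan's MCP server configuration. A sketch following the common `mcpServers` convention; the server name, package, and env-variable name here are illustrative placeholders, not verified values:

```json
{
  "mcpServers": {
    "todoist": {
      "command": "npx",
      "args": ["-y", "todoist-mcp-server"],
      "env": { "TODOIST_API_TOKEN": "<your-token>" }
    }
  }
}
```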
<FAQBox title="Is Jan compatible with my system?">
Jan supports all major operating systems:
- [Mac](/docs/desktop/mac#compatibility)
- [Windows](/docs/desktop/windows#compatibility)
- [Linux](/docs/desktop/linux)
**Supported OS**:
- [Windows 10+](/docs/desktop/windows#compatibility)
- [macOS 12+](/docs/desktop/mac#compatibility)
- [Linux (Ubuntu 20.04+)](/docs/desktop/linux)

Hardware compatibility includes:
- NVIDIA GPUs (CUDA)
- AMD GPUs (Vulkan)
- Intel Arc GPUs (Vulkan)
- Any GPU with Vulkan support
**Hardware**:
- Minimum: 8GB RAM, 10GB storage
- Recommended: 16GB RAM, GPU (NVIDIA/AMD/Intel), 50GB storage
- Works with: NVIDIA (CUDA), AMD (Vulkan), Intel Arc, Apple Silicon
</FAQBox>

<FAQBox title="How does Jan protect my privacy?">
Jan prioritizes privacy by:
- Running 100% offline with locally-stored data
- Using open-source models that keep your conversations private
- Storing all files and chat history on your device in the [Jan Data Folder](/docs/data-folder)
- Never collecting or selling your data
<FAQBox title="Is Jan really free?">
**Local use**: Always free, no catches
**Cloud models**: You pay providers directly (we add no markup)
**Jan cloud**: Optional paid services coming 2025

The core platform will always be free and open source.
</FAQBox>

<FAQBox title="How does Jan protect privacy?">
- Runs 100% offline once models are downloaded
- All data stored locally in the [Jan Data Folder](/docs/data-folder)
- No telemetry without explicit consent
- Open source code you can audit

<Callout type="warning">
When using third-party cloud AI services through Jan, their data policies apply. Check their privacy terms.
When using cloud providers through Jan, their privacy policies apply.
</Callout>

You can optionally share anonymous usage statistics to help improve Jan, but your conversations are never shared. See our complete [Privacy Policy](./docs/privacy).
</FAQBox>

<FAQBox title="What models can I use with Jan?">
- Download optimized models from the [Jan Hub](/docs/manage-models)
- Import GGUF models from Hugging Face or your local files
- Connect to cloud providers like OpenAI, Anthropic, Mistral, and Groq (requires your own API keys)
</FAQBox>

<FAQBox title="Is Jan really free? What's the catch?">
Jan is completely free and open-source, with no subscription fees for local models and features. When using cloud-based models (like GPT-4o or Claude 3.7 Sonnet), you'll only pay the standard rates to those providers—we add no markup.
</FAQBox>

<FAQBox title="Can I use Jan offline?">
Yes! Once you've downloaded a local model, Jan works completely offline with no internet connection needed.
</FAQBox>

<FAQBox title="How can I contribute or get community help?">
- Join our [Discord community](https://discord.gg/qSwXFx6Krr) to connect with other users
- Contribute through [GitHub](https://github.com/menloresearch/jan) (no permission needed!)
- Get troubleshooting help in our [Discord](https://discord.com/invite/FTk2MvZwJH) channel [#🆘|jan-help](https://discord.com/channels/1107178041848909847/1192090449725358130)
- Check our [Troubleshooting](./docs/troubleshooting) guide for common issues
</FAQBox>

<FAQBox title="Can I self-host Jan?">
Yes! We fully support the self-hosted movement. Either download Jan directly or fork the [GitHub repository](https://github.com/menloresearch/jan) and build it from source.
Yes. Download directly or build from [source](https://github.com/menloresearch/jan). Jan Server for production deployments coming late 2025.
</FAQBox>

<FAQBox title="What does Jan stand for?">
Jan stands for "Just a Name". We are, admittedly, bad at marketing 😂.
<FAQBox title="When will mobile/web versions launch?">
- **Jan Web**: Beta late 2025
- **Jan Mobile**: Late 2025
- **Jan Server**: Late 2025

All versions will sync seamlessly.
</FAQBox>

<FAQBox title="How can I contribute?">
- Code: [GitHub](https://github.com/menloresearch/jan)
- Community: [Discord](https://discord.gg/FTk2MvZwJH)
- Testing: Help evaluate models and report bugs
- Documentation: Improve guides and tutorials
</FAQBox>

<FAQBox title="Are you hiring?">
Yes! We love hiring from our community. Check out our open positions at [Careers](https://menlo.bamboohr.com/careers).
</FAQBox>
Yes! We love hiring from our community. Check [Careers](https://menlo.bamboohr.com/careers).
</FAQBox>
docs/src/pages/docs/jan-models/jan-v1.mdx  (new file, 129 lines)
@@ -0,0 +1,129 @@
---
title: Jan-v1
description: 4B parameter model with strong performance on reasoning benchmarks
keywords:
  [
    Jan,
    Jan-v1,
    Jan Models,
    reasoning,
    SimpleQA,
    tool calling,
    GGUF,
    4B model,
  ]
---

import { Callout } from 'nextra/components'

# Jan-v1

## Overview

Jan-v1 is a 4B parameter model based on Qwen3-4B-Thinking, designed for reasoning and problem-solving tasks. The model achieves 91.1% accuracy on SimpleQA through model scaling and fine-tuning.

## Performance

### SimpleQA Benchmark

Jan-v1 demonstrates strong factual question-answering capabilities:

![Jan-v1 SimpleQA Benchmark](./_assets/simpleqa_jan_v1.png)

At 91.1% accuracy, Jan-v1 outperforms several larger models on SimpleQA, including Perplexity's 70B model. This performance represents effective scaling and fine-tuning for a 4B parameter model.

### Chat and Creativity Benchmarks

Jan-v1 has been evaluated on conversational and creative tasks:

![Jan-v1 Creative Benchmarks](./_assets/creative_bench_jan_v1.png)

These benchmarks (EQBench, CreativeWriting, and IFBench) measure the model's ability to handle conversational nuance, creative expression, and instruction following.

## Requirements

- **Memory**:
  - Minimum: 8GB RAM (with Q4 quantization)
  - Recommended: 16GB RAM (with Q8 quantization)
- **Hardware**: CPU or GPU
- **API Support**: OpenAI-compatible at localhost:1337

## Using Jan-v1

### Quick Start

1. Download Jan Desktop
2. Select Jan-v1 from the model list
3. Start chatting - no additional configuration needed

### Demo

![Jan-v1 Demo](./_assets/jan_v1_demo.gif)

### Deployment Options

**Using vLLM:**
```bash
vllm serve janhq/Jan-v1-4B \
  --host 0.0.0.0 \
  --port 1234 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes
```

**Using llama.cpp:**
```bash
llama-server --model jan-v1.gguf \
  --host 0.0.0.0 \
  --port 1234 \
  --jinja \
  --no-context-shift
```

### Recommended Parameters

```yaml
temperature: 0.6
top_p: 0.95
top_k: 20
min_p: 0.0
max_tokens: 2048
```
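Sent through an OpenAI-style request, these settings look like the sketch below. The model id is an assumption (match it to the name your server exposes), and `min_p` rides along as an extra body field, which llama.cpp-style OpenAI-compatible servers generally accept:

```python
import json

# Recommended Jan-v1 sampling settings as an OpenAI-style request body.
RECOMMENDED = {
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 20,
    "min_p": 0.0,
    "max_tokens": 2048,
}

def chat_body(prompt: str, model: str = "jan-v1-4b") -> str:
    """Serialize a chat-completions body carrying the recommended parameters."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        **RECOMMENDED,
    })

print(chat_body("What year did the Berlin Wall fall?"))
```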
## What Jan-v1 Does Well

- **Question Answering**: 91.1% accuracy on SimpleQA
- **Reasoning Tasks**: Built on a thinking-optimized base model
- **Tool Calling**: Supports function calling through the hermes parser
- **Instruction Following**: Reliable response to user instructions

## Limitations

- **Model Size**: 4B parameters limits complex reasoning compared to larger models
- **Specialized Tasks**: Optimized for Q&A and reasoning, not specialized domains
- **Context Window**: Standard context limitations apply

## Available Formats

### GGUF Quantizations

- **Q4_K_M**: 2.5 GB - Good balance of size and quality
- **Q5_K_M**: 2.89 GB - Better quality, slightly larger
- **Q6_K**: 3.31 GB - Near-full quality
- **Q8_0**: 4.28 GB - Highest quality quantization

## Models Available

- [Jan-v1 on Hugging Face](https://huggingface.co/janhq/Jan-v1-4B)
- [Jan-v1 GGUF on Hugging Face](https://huggingface.co/janhq/Jan-v1-4B-GGUF)

## Technical Notes

<Callout type="info">
The model includes a system prompt in the chat template by default to match benchmark performance. A vanilla template without a system prompt is available in `chat_template_raw.jinja`.
</Callout>

## Community

- **Discussions**: [HuggingFace Community](https://huggingface.co/janhq/Jan-v1-4B/discussions)
- **Support**: Available through the Jan App at [jan.ai](https://jan.ai)
docs/src/pages/docs/jan-models/lucy.mdx  (new file, 122 lines)
@@ -0,0 +1,122 @@
---
title: Lucy
description: Compact 1.7B model optimized for web search with tool calling
keywords:
  [
    Jan,
    Lucy,
    Jan Models,
    web search,
    tool calling,
    Serper API,
    GGUF,
    1.7B model,
  ]
---

import { Callout } from 'nextra/components'

# Lucy

## Overview

Lucy is a 1.7B parameter model built on Qwen3-1.7B, optimized for web search through tool calling. The model has been trained to work effectively with search APIs like Serper, enabling web search in resource-constrained environments.

## Performance

### SimpleQA Benchmark

Lucy achieves competitive performance on SimpleQA despite its small size:

![Lucy SimpleQA Benchmark](./_assets/simpleqa_lucy.png)

The benchmark compares Lucy (1.7B) against models ranging from 4B to 600B+ parameters. While larger models generally perform better, Lucy demonstrates that effective web search integration can partially compensate for smaller model size.

## Requirements

- **Memory**:
  - Minimum: 4GB RAM (with Q4 quantization)
  - Recommended: 8GB RAM (with Q8 quantization)
- **Search API**: Serper API key required for web search functionality
- **Hardware**: Runs on CPU or GPU

<Callout type="info">
To use Lucy's web search capabilities, you'll need a Serper API key. Get one at [serper.dev](https://serper.dev).
</Callout>

## Using Lucy

### Quick Start

1. Download Jan Desktop
2. Download Lucy from the Hub
3. Configure the Serper MCP with your API key
4. Start using web search through natural language

### Demo

![Lucy Demo](./_assets/lucy_demo.gif)

### Deployment Options

**Using vLLM:**
```bash
vllm serve Menlo/Lucy-128k \
  --host 0.0.0.0 \
  --port 1234 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes \
  --rope-scaling '{"rope_type":"yarn","factor":3.2,"original_max_position_embeddings":40960}' \
  --max-model-len 131072
```

**Using llama.cpp:**
```bash
llama-server --model model.gguf \
  --host 0.0.0.0 \
  --port 1234 \
  --rope-scaling yarn \
  --rope-scale 3.2 \
  --yarn-orig-ctx 40960
```
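The YaRN flags in both commands are where the 128k figure comes from: the base model's native 40,960-token window scaled by a factor of 3.2. A quick check of that arithmetic:

```python
# Lucy's extended context = native window * YaRN rope-scaling factor.
native_ctx = 40960   # --yarn-orig-ctx / original_max_position_embeddings
factor = 3.2         # --rope-scale / "factor" in the vLLM rope-scaling JSON
extended_ctx = int(native_ctx * factor)
print(extended_ctx)  # 131072, i.e. the --max-model-len passed to vLLM
```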
### Recommended Parameters

```yaml
temperature: 0.7
top_p: 0.9
top_k: 20
min_p: 0.0
```

## What Lucy Does Well

- **Web Search Integration**: Optimized to call search tools and process results
- **Small Footprint**: 1.7B parameters means lower memory requirements
- **Tool Calling**: Reliable function calling for search APIs

## Limitations

- **Requires Internet**: Web search functionality needs an active connection
- **API Costs**: The Serper API has usage limits and costs
- **Context Processing**: While 128k context is supported, performance may vary with very long inputs
- **General Knowledge**: Limited by the 1.7B parameter size for tasks beyond search

## Models Available

- [Lucy on Hugging Face](https://huggingface.co/Menlo/Lucy-128k)
- [Lucy GGUF on Hugging Face](https://huggingface.co/Menlo/Lucy-128k-gguf)

## Citation

```bibtex
@misc{dao2025lucyedgerunningagenticweb,
  title={Lucy: edgerunning agentic web search on mobile with machine generated task vectors},
  author={Alan Dao and Dinh Bach Vu and Alex Nguyen and Norapat Buppodom},
  year={2025},
  eprint={2508.00360},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2508.00360},
}
```
20
docs/src/pages/docs/mcp-examples/_meta.json
Normal file
@ -0,0 +1,20 @@
|
||||
{
|
||||
"browser": {
|
||||
"title": "Browser Automation"
|
||||
},
|
||||
"data-analysis": {
|
||||
"title": "Data Analysis"
|
||||
},
|
||||
"search": {
|
||||
"title": "Search & Research"
|
||||
},
|
||||
"design": {
|
||||
"title": "Design Tools"
|
||||
},
|
||||
"deepresearch": {
|
||||
"title": "Deep Research"
|
||||
},
|
||||
"productivity": {
|
||||
"title": "Productivity"
|
||||
}
|
||||
}
|
||||
6
docs/src/pages/docs/mcp-examples/browser/_meta.json
Normal file
@ -0,0 +1,6 @@
|
||||
{
|
||||
"browserbase": {
|
||||
"title": "Browserbase",
|
||||
"href": "/docs/mcp-examples/browser/browserbase"
|
||||
}
|
||||
}
|
||||
10
docs/src/pages/docs/mcp-examples/data-analysis/_meta.json
Normal file
@ -0,0 +1,10 @@
|
||||
{
|
||||
"e2b": {
|
||||
"title": "E2B Code Sandbox",
|
||||
"href": "/docs/mcp-examples/data-analysis/e2b"
|
||||
},
|
||||
"jupyter": {
|
||||
"title": "Jupyter Notebooks",
|
||||
"href": "/docs/mcp-examples/data-analysis/jupyter"
|
||||
}
|
||||
}
|
||||
6
docs/src/pages/docs/mcp-examples/deepresearch/_meta.json
Normal file
@ -0,0 +1,6 @@
|
||||
{
|
||||
"octagon": {
|
||||
"title": "Octagon Deep Research",
|
||||
"href": "/docs/mcp-examples/deepresearch/octagon"
|
||||
}
|
||||
}
|
||||
6
docs/src/pages/docs/mcp-examples/design/_meta.json
Normal file
@ -0,0 +1,6 @@
{
  "canva": {
    "title": "Canva",
    "href": "/docs/mcp-examples/design/canva"
  }
}
10
docs/src/pages/docs/mcp-examples/productivity/_meta.json
Normal file
@ -0,0 +1,10 @@
{
  "todoist": {
    "title": "Todoist",
    "href": "/docs/mcp-examples/productivity/todoist"
  },
  "linear": {
    "title": "Linear",
    "href": "/docs/mcp-examples/productivity/linear"
  }
}
268
docs/src/pages/docs/mcp-examples/productivity/linear.mdx
Normal file
@ -0,0 +1,268 @@
---
title: Linear MCP
description: Manage software projects and issue tracking through natural language with Linear integration.
keywords:
  [
    Jan,
    MCP,
    Model Context Protocol,
    Linear,
    project management,
    issue tracking,
    agile,
    software development,
    tool calling,
  ]
---

import { Callout, Steps } from 'nextra/components'

# Linear MCP

[Linear MCP](https://linear.app) provides comprehensive project management capabilities through natural conversation. Transform your software development workflow by managing issues, projects, and team collaboration directly through AI.

## Available Tools

Linear MCP offers extensive project management capabilities:

### Issue Management
- `list_issues`: View all issues in your workspace
- `get_issue`: Get details of a specific issue
- `create_issue`: Create new issues with full details
- `update_issue`: Modify existing issues
- `list_my_issues`: See your assigned issues
- `list_issue_statuses`: View available workflow states
- `list_issue_labels`: See and manage labels
- `create_issue_label`: Create new labels

### Project & Team
- `list_projects`: View all projects
- `get_project`: Get project details
- `create_project`: Start new projects
- `update_project`: Modify project settings
- `list_teams`: See all teams
- `get_team`: Get team information
- `list_users`: View team members

### Documentation & Collaboration
- `list_documents`: Browse documentation
- `get_document`: Read specific documents
- `search_documentation`: Find information
- `list_comments`: View issue comments
- `create_comment`: Add comments to issues
- `list_cycles`: View sprint cycles
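
Under MCP, each of these tools receives a structured JSON arguments object filled in by the model. A minimal sketch of how a `create_issue` call's arguments might be assembled; the field names below are illustrative assumptions for the sake of the example, not Linear MCP's published schema:

```python
# Hypothetical argument shape for a `create_issue` tool call.
# Field names are illustrative assumptions, not the server's actual schema.
def build_create_issue_args(team, title, description="", priority=3, labels=None):
    args = {
        "team": team,            # team name, e.g. '👋Jan'
        "title": title,
        "description": description,
        "priority": priority,    # assumed convention: 1 = urgent, 4 = low
    }
    if labels:
        args["labels"] = labels  # optional fields are simply omitted when unset
    return args

print(build_create_issue_args("👋Jan", "Fix login timeout", priority=1, labels=["bug"]))
```

The MCP server validates the real schema at call time, so treat this only as a mental model for how the model turns your prompt into tool arguments.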

## Prerequisites

- Jan with experimental features enabled
- Linear account (free for up to 250 issues)
- Model with strong tool calling support
- Active internet connection

<Callout type="info">
Linear offers a generous free tier perfect for small teams and personal projects. Unlimited users, 250 active issues, and full API access included.
</Callout>

## Setup

### Create Linear Account

1. Sign up at [linear.app](https://linear.app)
2. Complete the onboarding process

[Image: Linear sign-up page]

Once logged in, you'll see your workspace:

[Image: Linear workspace dashboard]

### Enable MCP in Jan

<Callout type="warning">
Enable **Experimental Features** in **Settings > General** if you don't see the MCP Servers option.
</Callout>

1. Go to **Settings > MCP Servers**
2. Toggle **Allow All MCP Tool Permission** ON

### Configure Linear MCP

Click the `+` button to add Linear MCP:

**Configuration:**
- **Server Name**: `linear`
- **Command**: `npx`
- **Arguments**: `-y mcp-remote https://mcp.linear.app/sse`

[Image: Linear MCP server configuration in Jan]

### Authenticate with Linear

When you first use Linear tools, a browser tab will open for authentication:

[Image: Linear OAuth authorization page]

Complete the OAuth flow to grant Jan access to your Linear workspace.

## Usage

### Select a Model with Tool Calling

For this example, we'll use kimi-k2 from Groq:

1. Add the model in Groq settings: `moonshotai/kimi-k2-instruct`

[Image: Adding moonshotai/kimi-k2-instruct in Groq settings]

2. Enable tools for the model:

[Image: Enabling tools for the model]

### Verify Available Tools

You should see all Linear tools in the chat interface:

[Image: Linear tools listed in the chat interface]

### Epic Project Management

Watch AI transform mundane tasks into epic narratives:

[Image: Linear MCP demo]
## Creative Examples

### 🎭 Shakespearean Sprint Planning
```
Create Linear tickets in the '👋Jan' team for my AGI project as battles in a Shakespearean war epic. Each sprint is a military campaign, bugs are enemy spies, and merge conflicts are sword fights between rival houses. Invent unique epic titles and dramatic descriptions with battle cries and victory speeches. Characterize bugs as enemy villains and developers as heroic warriors in this noble quest for AGI glory. Make tasks like model training, testing, and deployment sound like grand military campaigns with honor and valor.
```

### 🚀 Space Mission Development
```
Transform our mobile app redesign into a NASA space mission. Create issues where each feature is a mission objective, bugs are space debris to clear, and releases are launch windows. Add dramatic mission briefings, countdown sequences, and astronaut logs. Priority levels become mission criticality ratings.
```

### 🏴‍☠️ Pirate Ship Operations
```
Set up our e-commerce platform project as a pirate fleet adventure. Features are islands to conquer, bugs are sea monsters, deployments are naval battles. Create colorful pirate-themed tickets with treasure maps, crew assignments, and tales of high seas adventure.
```

### 🎮 Video Game Quest Log
```
Structure our API refactoring project like an RPG quest system. Create issues as quests with XP rewards, boss battles for major features, side quests for minor tasks. Include loot drops (completed features), skill trees (learning requirements), and epic boss fight descriptions for challenging bugs.
```

### 🍳 Gordon Ramsay's Kitchen
```
Manage our restaurant app project as if Gordon Ramsay is the head chef. Create brutally honest tickets criticizing code quality, demanding perfection in UX like a Michelin star dish. Bugs are "bloody disasters" and successful features are "finally, some good code." Include Kitchen Nightmares-style rescue plans.
```

## Practical Workflows

### Sprint Planning
```
Review all open issues in the Backend team, identify the top 10 by priority, and create a new sprint cycle called "Q1 Performance Sprint" with appropriate issues assigned.
```

### Bug Triage
```
List all bugs labeled "critical" or "high-priority", analyze their descriptions, and suggest which ones should be fixed first based on user impact. Update their status to "In Progress" for the top 3.
```

### Documentation Audit
```
Search our documentation for anything related to API authentication. Create issues for any gaps or outdated sections you find, labeled as "documentation" with detailed improvement suggestions.
```

### Team Workload Balance
```
Show me all active issues grouped by assignee. Identify anyone with more than 5 high-priority items and suggest redistributions to balance the workload.
```

### Release Planning
```
Create a project called "v2.0 Release" with milestones for: feature freeze, beta testing, documentation, and launch. Generate appropriate issues for each phase with realistic time estimates.
```

## Advanced Integration Patterns

### Cross-Project Dependencies
```
Find all issues labeled "blocked" across all projects. For each one, identify what they're waiting on and create linked issues for the blocking items if they don't exist.
```

### Automated Status Updates
```
Look at all issues assigned to me that haven't been updated in 3 days. Add a comment with a status update based on their current state and any blockers.
```

### Smart Labeling
```
Analyze all unlabeled issues in our workspace. Based on their titles and descriptions, suggest appropriate labels and apply them. Create any missing label categories we need.
```

### Sprint Retrospectives
```
Generate a retrospective report for our last completed cycle. List what was completed, what was pushed to next sprint, and create discussion issues for any patterns you notice.
```

## Tips for Maximum Productivity

- **Batch Operations**: Create multiple related issues in one request
- **Smart Templates**: Ask AI to remember your issue templates
- **Natural Queries**: "Show me what John is working on this week"
- **Context Awareness**: Reference previous issues in new requests
- **Automated Workflows**: Set up recurring management tasks

## Troubleshooting

**Authentication Issues:**
- Clear browser cookies for Linear
- Re-authenticate through the OAuth flow
- Check Linear workspace permissions
- Verify API access is enabled

**Tool Calling Errors:**
- Ensure model supports multiple tool calls
- Try breaking complex requests into steps
- Verify all required fields are provided
- Check Linear service status

**Missing Data:**
- Refresh authentication token
- Verify workspace access permissions
- Check if issues are in archived projects
- Ensure proper team selection

**Performance Issues:**
- Linear API has rate limits (see dashboard)
- Break bulk operations into batches
- Cache frequently accessed data
- Use specific filters to reduce data

<Callout type="tip">
Linear's keyboard shortcuts work great alongside MCP! Use CMD+K for quick navigation while AI handles the heavy lifting.
</Callout>

## Integration Ideas

Combine Linear with other MCP tools:

- **Serper + Linear**: Research technical solutions, then create implementation tickets
- **Jupyter + Linear**: Analyze project metrics, generate data-driven sprint plans
- **Todoist + Linear**: Sync personal tasks with work issues
- **E2B + Linear**: Run code tests, automatically create bug reports

## Privacy & Security

Linear MCP uses OAuth for authentication, meaning:
- Your credentials are never shared with Jan
- Access can be revoked anytime from Linear settings
- Data stays within Linear's infrastructure
- Only requested permissions are granted

## Next Steps

Linear MCP transforms project management from clicking through interfaces into natural conversation. Whether you're planning sprints, triaging bugs, or crafting epic development sagas, AI becomes your project management companion.

Start with simple issue creation, then explore complex workflows like automated sprint planning and workload balancing. The combination of Linear's powerful platform with AI's creative capabilities makes project management both efficient and entertaining!
259
docs/src/pages/docs/mcp-examples/productivity/todoist.mdx
Normal file
@ -0,0 +1,259 @@
---
title: Todoist MCP
description: Manage your tasks and todo lists through natural language with Todoist integration.
keywords:
  [
    Jan,
    MCP,
    Model Context Protocol,
    Todoist,
    task management,
    productivity,
    todo list,
    tool calling,
  ]
---

import { Callout, Steps } from 'nextra/components'

# Todoist MCP

[Todoist MCP Server](https://github.com/abhiz123/todoist-mcp-server) enables AI models to manage your Todoist tasks through natural conversation. Instead of switching between apps, you can create, update, and complete tasks by simply chatting with your AI assistant.

## Available Tools

- `todoist_create_task`: Add new tasks to your todo list
- `todoist_get_tasks`: Retrieve and view your current tasks
- `todoist_update_task`: Modify existing tasks
- `todoist_complete_task`: Mark tasks as done
- `todoist_delete_task`: Remove tasks from your list

## Prerequisites

- Jan with experimental features enabled
- Todoist account (free or premium)
- Model with strong tool calling support
- Node.js installed

<Callout type="info">
Todoist offers a generous free tier perfect for personal task management. Premium features add labels, reminders, and more projects.
</Callout>

## Setup

### Create Todoist Account

1. Sign up at [todoist.com](https://todoist.com) or log in if you have an account
2. Complete the onboarding process

[Image: Todoist sign-up page]

Once logged in, you'll see your main dashboard:

[Image: Todoist dashboard]

### Get Your API Token

1. Click **Settings** (gear icon)
2. Navigate to **Integrations**
3. Click on the **Developer** tab
4. Copy your API token (it's already generated for you)
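
You can sanity-check the token before wiring it into Jan. A minimal sketch using Todoist's REST API; the endpoint URL and Bearer-token scheme follow Todoist's public API docs, but verify them there, as they are assumptions of this example rather than something the MCP server requires you to do:

```python
from urllib import request

# Todoist REST v2 task-list endpoint (assumed current; check Todoist's API docs).
TODOIST_TASKS_URL = "https://api.todoist.com/rest/v2/tasks"

def build_tasks_request(token: str) -> request.Request:
    # The same token you paste into Jan's MCP config is sent
    # as a standard Bearer token on every API call.
    return request.Request(
        TODOIST_TASKS_URL,
        headers={"Authorization": f"Bearer {token}"},
    )

req = build_tasks_request("your_api_token_here")
print(req.get_method(), req.full_url)
```

Sending the request with `request.urlopen(req)` should return a JSON array of your active tasks; a 401 response usually means the token was copied incorrectly.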

[Image: Todoist API token under Settings > Integrations > Developer]

### Enable MCP in Jan

<Callout type="warning">
If you don't see the MCP Servers option, enable **Experimental Features** in **Settings > General** first.
</Callout>

1. Go to **Settings > MCP Servers**
2. Toggle **Allow All MCP Tool Permission** ON

### Configure Todoist MCP

Click the `+` button to add a new MCP server:

**Configuration:**
- **Server Name**: `todoist`
- **Command**: `npx`
- **Arguments**: `-y @abhiz123/todoist-mcp-server`
- **Environment Variables**:
  - Key: `TODOIST_API_TOKEN`, Value: `your_api_token_here`

[Image: Todoist MCP server configuration in Jan]

## Usage

### Select a Model with Tool Calling

Open a new chat and select a model that excels at tool calling. Make sure tools are enabled for your chosen model.

[Image: Model selection with tools enabled]

### Verify Tools Available

You should see the Todoist tools in the tools panel:

[Image: Todoist tools in the tools panel]

### Start Managing Tasks

Now you can manage your todo list through natural conversation:

[Image: Managing Todoist tasks from chat]

## Example Prompts

### Blog Writing Workflow
```
I need to write a blog post about AI and productivity tools today. Please add some tasks to my todo list to make sure I have a good set of steps to accomplish this task.
```

The AI will create structured tasks like:
- Research AI productivity tools
- Create blog outline
- Write introduction
- Draft main sections
- Add examples and screenshots
- Edit and proofread
- Publish and promote

### Weekly Meal Planning
```
Help me plan meals for the week. Create a grocery shopping list and cooking schedule for Monday through Friday, focusing on healthy, quick dinners.
```

### Home Improvement Project
```
I'm renovating my home office this weekend. Break down the project into manageable tasks including shopping, prep work, and the actual renovation steps.
```

### Study Schedule
```
I have a statistics exam in 2 weeks. Create a study plan with daily tasks covering all chapters, practice problems, and review sessions.
```

### Fitness Goals
```
Set up a 30-day fitness challenge for me. Include daily workout tasks, rest days, and weekly progress check-ins.
```

### Event Planning
```
I'm organizing a surprise birthday party for next month. Create a comprehensive task list covering invitations, decorations, food, entertainment, and day-of coordination.
```

## Advanced Usage

### Task Management Commands

**View all tasks:**
```
Show me all my pending tasks for today
```

**Update priorities:**
```
Make "Write blog introduction" high priority and move it to the top of my list
```

**Bulk completion:**
```
Mark all my morning routine tasks as complete
```

**Clean up:**
```
Delete all completed tasks from last week
```

### Project Organization

Todoist supports projects, though the MCP may have limitations. Try:
```
Create a new project called "Q1 Goals" and add 5 key objectives as tasks
```

### Recurring Tasks

Set up repeating tasks:
```
Add a daily task to review my calendar at 9 AM
Add a weekly task for meal prep on Sundays
Add a monthly task to pay bills on the 1st
```

## Creative Use Cases

### 🎮 Game Development Sprint
```
I'm participating in a 48-hour game jam. Create an hour-by-hour task schedule covering ideation, prototyping, art creation, programming, testing, and submission.
```

### 📚 Book Writing Challenge
```
I'm doing NaNoWriMo (writing a novel in a month). Break down a 50,000-word goal into daily writing tasks with word count targets and plot milestones.
```

### 🌱 Garden Planning
```
It's spring planting season. Create a gardening schedule for the next 3 months including soil prep, planting dates for different vegetables, watering reminders, and harvest times.
```

### 🎂 Baking Business Launch
```
I'm starting a home bakery. Create tasks for getting permits, setting up social media, creating a menu, pricing strategy, and first week's baking schedule.
```

### 🏠 Moving Checklist
```
I'm moving to a new apartment next month. Generate a comprehensive moving checklist including utilities setup, packing by room, change of address notifications, and moving day logistics.
```

## Tips for Best Results

- **Be specific**: "Add task: Call dentist tomorrow at 2 PM" works better than "remind me about dentist"
- **Use natural language**: The AI understands context, so chat naturally
- **Batch operations**: Ask to create multiple related tasks at once
- **Review regularly**: Ask the AI to show your tasks and help prioritize
- **Iterate**: If the tasks aren't quite right, ask the AI to modify them

## Troubleshooting

**Tasks not appearing in Todoist:**
- Verify API token is correct
- Check Todoist website/app and refresh
- Ensure MCP server shows as active

**Tool calling errors:**
- Confirm model supports tool calling
- Enable tools in model settings
- Try a different model (Claude 3.5+ or GPT-4o recommended)

**Connection issues:**
- Check internet connectivity
- Verify Node.js installation
- Restart Jan after configuration

**Rate limiting:**
- Todoist API has rate limits
- Space out bulk operations
- Wait a moment between large task batches

<Callout type="tip">
Todoist syncs across all devices. Tasks created through Jan instantly appear on your phone, tablet, and web app!
</Callout>

## Privacy Note

Your tasks are synced with Todoist's servers. While the MCP runs locally, task data is stored in Todoist's cloud for sync functionality. Review Todoist's privacy policy if you're handling sensitive information.

## Next Steps

Combine Todoist MCP with other tools for powerful workflows:
- Use Serper MCP to research topics, then create action items in Todoist
- Generate code with E2B, then add testing tasks to your todo list
- Analyze data with Jupyter, then create follow-up tasks for insights

Task management through natural language makes staying organized effortless. Let your AI assistant handle the overhead while you focus on getting things done!
10
docs/src/pages/docs/mcp-examples/search/_meta.json
Normal file
@ -0,0 +1,10 @@
{
  "exa": {
    "title": "Exa Search",
    "href": "/docs/mcp-examples/search/exa"
  },
  "serper": {
    "title": "Serper Search",
    "href": "/docs/mcp-examples/search/serper"
  }
}
179
docs/src/pages/docs/mcp-examples/search/serper.mdx
Normal file
@ -0,0 +1,179 @@
---
title: Serper Search MCP
description: Connect Jan to real-time web search with Google results through Serper API.
keywords:
  [
    Jan,
    MCP,
    Model Context Protocol,
    Serper,
    Google search,
    web search,
    real-time search,
    tool calling,
    Jan v1,
  ]
---

import { Callout, Steps } from 'nextra/components'

# Serper Search MCP

[Serper](https://serper.dev) provides Google search results through a simple API, making it perfect for giving AI models access to current web information. The Serper MCP integration enables Jan models to search the web and retrieve real-time information.

## Available Tools

- `google_search`: Search Google and retrieve results with snippets
- `scrape`: Extract content from specific web pages

## Prerequisites

- Jan with experimental features enabled
- Serper API key from [serper.dev](https://serper.dev)
- Model with tool calling support (recommended: Jan v1)

<Callout type="info">
Serper offers 2,500 free searches upon signup - enough for extensive testing and personal use.
</Callout>

## Setup

### Enable Experimental Features

1. Go to **Settings** > **General**
2. Toggle **Experimental Features** ON

[Image: Experimental Features toggle in Jan settings]

### Enable MCP

1. Go to **Settings** > **MCP Servers**
2. Toggle **Allow All MCP Tool Permission** ON

[Image: MCP Servers settings in Jan]

### Get Serper API Key

1. Visit [serper.dev](https://serper.dev)
2. Sign up for a free account
3. Copy your API key from the playground

[Image: Serper sign-up page]

[Image: Serper API key in the playground]

### Configure MCP Server

Click `+` in MCP Servers section:

**Configuration:**
- **Server Name**: `serper`
- **Command**: `npx`
- **Arguments**: `-y serper-search-scrape-mcp-server`
- **Environment Variables**:
  - Key: `SERPER_API_KEY`, Value: `your-api-key`

[Image: Serper MCP server configuration in Jan]

### Download Jan v1

Jan v1 is optimized for tool calling and works excellently with Serper:

1. Go to the **Hub** tab
2. Search for **Jan v1**
3. Choose your preferred quantization
4. Click **Download**

[Image: Downloading Jan v1 from the Hub]

### Enable Tool Calling

1. Go to **Settings** > **Model Providers** > **Llama.cpp**
2. Find Jan v1 in your models list
3. Click the edit icon
4. Toggle **Tools** ON

[Image: Enabling tools for Jan v1]

## Usage

### Start a New Chat

With Jan v1 selected, you'll see the available Serper tools:

[Image: Serper tools in the chat interface]

### Example Queries

**Current Information:**
```
What are the latest developments in quantum computing this week?
```

**Comparative Analysis:**
```
What are the main differences between the Rust programming language and C++? Be spicy, hot takes are encouraged. 😌
```

[Image: Jan v1 comparing Rust and C++, part 1]

[Image: Jan v1 comparing Rust and C++, part 2]

**Research Tasks:**
```
Find the current stock price of NVIDIA and recent news about their AI chips.
```

**Fact-Checking:**
```
Is it true that the James Webb telescope found signs of life on an exoplanet? What's the latest?
```

**Local Information:**
```
What restaurants opened in San Francisco this month? Focus on Japanese cuisine.
```

## How It Works

1. **Query Processing**: Jan v1 analyzes your question and determines what to search
2. **Web Search**: Calls Serper API to get Google search results
3. **Content Extraction**: Can scrape specific pages for detailed information
4. **Synthesis**: Combines search results into a comprehensive answer

## Tips for Best Results

- **Be specific**: "Tesla Model 3 2024 price Australia" works better than "Tesla price"
- **Request recent info**: Add "latest", "current", or "2024/2025" to get recent results
- **Ask follow-ups**: Jan v1 maintains context for deeper research
- **Combine with analysis**: Ask for comparisons, summaries, or insights

## Troubleshooting

**No search results:**
- Verify API key is correct
- Check remaining credits at serper.dev
- Ensure MCP server shows as active

**Tools not appearing:**
- Confirm experimental features are enabled
- Verify tool calling is enabled for your model
- Restart Jan after configuration changes

**Poor search quality:**
- Use more specific search terms
- Try rephrasing your question
- Check if Serper service is operational

<Callout type="warning">
Each search query consumes one API credit. Monitor usage at serper.dev dashboard.
</Callout>

## API Limits

- **Free tier**: 2,500 searches
- **Paid plans**: Starting at $50/month for 50,000 searches
- **Rate limits**: 100 requests per second

## Next Steps

Serper MCP enables Jan v1 to access current web information, making it a powerful research assistant. Combine with other MCP tools for even more capabilities - use Serper for search, then E2B for data analysis, or Jupyter for visualization.
158
docs/src/pages/docs/quickstart.mdx
Normal file
@ -0,0 +1,158 @@
---
title: QuickStart
description: Get started with Jan and start chatting with AI in minutes.
keywords:
  [
    Jan,
    local AI,
    LLM,
    chat,
    threads,
    models,
    download,
    installation,
    conversations,
  ]
---

import { Callout, Steps } from 'nextra/components'
import { SquarePen, Pencil, Ellipsis, Paintbrush, Trash2, Settings } from 'lucide-react'

# QuickStart

Get up and running with Jan in minutes. This guide will help you install Jan, download a model, and start chatting immediately.

<Steps>

### Step 1: Install Jan

1. [Download Jan](/download)
2. Install the app ([Mac](/docs/desktop/mac), [Windows](/docs/desktop/windows), [Linux](/docs/desktop/linux))
3. Launch Jan

### Step 2: Download Jan v1

We recommend starting with **Jan v1**, our 4B parameter model optimized for reasoning and tool calling:

1. Go to the **Hub Tab**
2. Search for **Jan v1**
3. Choose a quantization that fits your hardware:
   - **Q4_K_M** (2.5 GB) - Good balance for most users
   - **Q8_0** (4.28 GB) - Best quality if you have the RAM
4. Click **Download**
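
The download sizes above follow roughly from each quantization's bits per weight. A back-of-the-envelope estimate (the bits-per-weight figures are approximations; real GGUF files add metadata and keep some tensors at higher precision, which is why the listed sizes are slightly larger):

```python
def approx_gguf_size_gb(params_billion: float, bits_per_weight: float) -> float:
    # File size ≈ parameter count × bits per weight / 8 bits per byte.
    return round(params_billion * 1e9 * bits_per_weight / 8 / 1e9, 2)

# Jan v1 has ~4B parameters. Q8_0 stores roughly 8.5 bits/weight,
# Q4_K_M roughly 4.5-5 bits/weight (approximate figures).
print(approx_gguf_size_gb(4, 8.5))  # lands near the listed 4.28 GB
print(approx_gguf_size_gb(4, 4.8))  # lands near the listed 2.5 GB
```

The same rule of thumb helps you judge whether any other model in the Hub will fit in your RAM or VRAM before downloading it.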

[Image: Downloading Jan v1 from the Hub]

<Callout type="info">
Jan v1 achieves 91.1% accuracy on SimpleQA and excels at tool calling, making it perfect for web search and reasoning tasks.
</Callout>

**HuggingFace models:** Some require an access token. Add yours in **Settings > Model Providers > Llama.cpp > Hugging Face Access Token**.

[Image: Hugging Face Access Token setting]

### Step 3: Enable GPU Acceleration (Optional)

For Windows/Linux with compatible graphics cards:

1. Go to **(<Settings width={16} height={16} style={{display:"inline"}}/>) Settings** > **Hardware**
2. Toggle **GPUs** to ON

[Image: GPU toggle in Hardware settings]

<Callout type="info">
Install required drivers before enabling GPU acceleration. See setup guides for [Windows](/docs/desktop/windows#gpu-acceleration) & [Linux](/docs/desktop/linux#gpu-acceleration).
</Callout>

### Step 4: Start Chatting

1. Click **New Chat** (<SquarePen width={16} height={16} style={{display:"inline"}}/>) icon
2. Select your model in the input field dropdown
3. Type your message and start chatting

[Image: Starting a chat in Jan]

Try asking Jan v1 questions like:
- "Explain quantum computing in simple terms"
- "Help me write a Python function to sort a list"
- "What are the pros and cons of electric vehicles?"

<Callout type="tip">
**Want to give Jan v1 access to current web information?** Check out our [Serper MCP tutorial](/docs/mcp-examples/search/serper) to enable real-time web search with 2,500 free searches!
</Callout>

</Steps>

## Managing Conversations

Jan organizes conversations into threads for easy tracking and revisiting.

### View Chat History

- **Left sidebar** shows all conversations
- Click any chat to open the full conversation
- **Favorites**: Pin important threads for quick access
- **Recents**: Access recently used threads

[Image: Thread list in the left sidebar]

### Edit Chat Titles

1. Hover over a conversation in the sidebar
2. Click **three dots** (<Ellipsis width={16} height={16} style={{display:"inline"}}/>) icon
3. Click <Pencil width={16} height={16} style={{display:"inline"}}/> **Rename**
4. Enter new title and save

[Image: Renaming a thread]

### Delete Threads

<Callout type="warning">
Thread deletion is permanent. No undo available.
</Callout>

**Single thread:**
1. Hover over thread in sidebar
2. Click **three dots** (<Ellipsis width={16} height={16} style={{display:"inline"}}/>) icon
3. Click <Trash2 width={16} height={16} style={{display:"inline"}}/> **Delete**

**All threads:**
1. Hover over `Recents` category
2. Click **three dots** (<Ellipsis width={16} height={16} style={{display:"inline"}}/>) icon
3. Select <Trash2 width={16} height={16} style={{display:"inline"}}/> **Delete All**

## Advanced Features

### Custom Assistant Instructions

Customize how models respond:

1. Use the assistant dropdown in the input field
2. Or go to the **Assistant tab** to create custom instructions
3. Instructions work across all models

[Image: Assistant dropdown in the input field]

[Image: Assistant tab with custom instructions]

### Model Parameters

Fine-tune model behavior:
- Click the **Gear icon** next to your model
- Adjust parameters in **Assistant Settings**
- Switch models via the **model selector**

[Image: Model parameter settings]

### Connect Cloud Models (Optional)

Connect to OpenAI, Anthropic, Groq, Mistral, and others:

1. Open any thread
2. Select a cloud model from the dropdown
3. Click the **Gear icon** beside the provider
4. Add your API key (ensure sufficient credits)

[Image: Cloud provider configuration]

For detailed setup, see [Remote APIs](/docs/remote-models/openai).
@@ -55,59 +55,51 @@ export default defineConfig({
[
{
label: 'Jan Desktop',
link: '/jan/',
link: '/',
icon: 'rocket',
items: [
{
label: 'HOW TO',
label: 'GETTING STARTED',
items: [
{
label: 'Install 👋 Jan',
collapsed: false,
autogenerate: { directory: 'jan/installation' },
},
{ label: 'Start Chatting', slug: 'jan/threads' },
{ label: 'QuickStart', slug: 'jan/quickstart' },
{
label: 'Use Jan Models',
label: 'Models',
collapsed: true,
autogenerate: { directory: 'jan/jan-models' },
},
{ label: 'Assistants', slug: 'jan/assistants' },
],
},
{
label: 'Cloud Providers',
items: [
{ label: 'Anthropic', slug: 'jan/remote-models/anthropic' },
{ label: 'OpenAI', slug: 'jan/remote-models/openai' },
{ label: 'Gemini', slug: 'jan/remote-models/google' },
{
label: 'OpenRouter',
slug: 'jan/remote-models/openrouter',
},
{ label: 'Cohere', slug: 'jan/remote-models/cohere' },
{ label: 'Mistral', slug: 'jan/remote-models/mistralai' },
{ label: 'Groq', slug: 'jan/remote-models/groq' },
],
},
{
label: 'EXPLANATION',
items: [
{
label: 'Local AI Engine',
slug: 'jan/explanation/llama-cpp',
},
{
label: 'Model Parameters',
slug: 'jan/explanation/model-parameters',
label: 'Cloud Providers',
collapsed: true,
items: [
{
label: 'Anthropic',
slug: 'jan/remote-models/anthropic',
},
{ label: 'OpenAI', slug: 'jan/remote-models/openai' },
{ label: 'Gemini', slug: 'jan/remote-models/google' },
{
label: 'OpenRouter',
slug: 'jan/remote-models/openrouter',
},
{ label: 'Cohere', slug: 'jan/remote-models/cohere' },
{
label: 'Mistral',
slug: 'jan/remote-models/mistralai',
},
{ label: 'Groq', slug: 'jan/remote-models/groq' },
],
},
],
},
{
label: 'ADVANCED',
label: 'TUTORIALS',
items: [
{ label: 'Manage Models', slug: 'jan/manage-models' },
{ label: 'Model Context Protocol', slug: 'jan/mcp' },
{
label: 'MCP Examples',
collapsed: true,
@@ -129,13 +121,37 @@ export default defineConfig({
slug: 'jan/mcp-examples/deepresearch/octagon',
},
{
label: 'Web Search with Exa',
label: 'Serper Search',
slug: 'jan/mcp-examples/search/serper',
},
{
label: 'Web Search (Exa)',
slug: 'jan/mcp-examples/search/exa',
},
],
},
],
},
{
label: 'EXPLANATION',
items: [
{
label: 'Local AI Engine',
slug: 'jan/explanation/llama-cpp',
},
{
label: 'Model Parameters',
slug: 'jan/explanation/model-parameters',
},
],
},
{
label: 'ADVANCED',
items: [
{ label: 'Manage Models', slug: 'jan/manage-models' },
{ label: 'Model Context Protocol', slug: 'jan/mcp' },
],
},
{
label: 'Local Server',
items: [
@@ -171,97 +187,9 @@ export default defineConfig({
icon: 'forward-slash',
items: [{ label: 'Overview', slug: 'server' }],
},
{
label: 'Handbook',
link: '/handbook/',
icon: 'open-book',
items: [
{ label: 'Welcome', slug: 'handbook' },
{
label: 'About Jan',
items: [
{
label: 'Why does Jan Exist?',
collapsed: true,
autogenerate: { directory: 'handbook/why' },
},
{
label: 'How we make Money',
collapsed: true,
autogenerate: { directory: 'handbook/money' },
},
{
label: 'Who We Hire',
collapsed: true,
autogenerate: { directory: 'handbook/who' },
},
{
label: "Jan's Philosophies",
collapsed: true,
autogenerate: { directory: 'handbook/philosophy' },
},
{
label: 'Brand & Identity',
collapsed: true,
autogenerate: { directory: 'handbook/brand' },
},
],
},
{
label: 'How We Work',
items: [
{
label: 'Team Roster',
collapsed: true,
autogenerate: { directory: 'handbook/team' },
},
{
label: "Jan's Culture",
collapsed: true,
autogenerate: { directory: 'handbook/culture' },
},
{
label: 'How We Build',
collapsed: true,
autogenerate: { directory: 'handbook/how' },
},
{
label: 'How We Sell',
collapsed: true,
autogenerate: { directory: 'handbook/sell' },
},
],
},
{
label: 'HR',
items: [
{
label: 'HR Lifecycle',
collapsed: true,
autogenerate: { directory: 'handbook/lifecycle' },
},
{
label: 'HR Policies',
collapsed: true,
autogenerate: { directory: 'handbook/hr' },
},
{
label: 'Compensation',
collapsed: true,
autogenerate: { directory: 'handbook/comp' },
},
],
},
],
},
],
{
exclude: [
'/prods',
'/api-reference',
'/products',
'/products/**/*',
],
exclude: ['/api-reference'],
}
),
],
@@ -282,9 +210,6 @@ export default defineConfig({
href: 'https://discord.com/invite/FTk2MvZwJH',
},
],
components: {
Header: './src/components/CustomNav.astro',
},
}),
],
})
BIN
website/public/gifs/lucy_demo.gif
Normal file
|
After Width: | Height: | Size: 23 MiB |
BIN
website/src/assets/chat_jan_v1.png
Normal file
|
After Width: | Height: | Size: 167 KiB |
BIN
website/src/assets/creative_bench_jan_v1.png
Normal file
|
After Width: | Height: | Size: 127 KiB |
BIN
website/src/assets/download_janv1.png
Normal file
|
After Width: | Height: | Size: 353 KiB |
BIN
website/src/assets/enable_mcp.png
Normal file
|
After Width: | Height: | Size: 474 KiB |
BIN
website/src/assets/lucy.jpeg
Normal file
|
After Width: | Height: | Size: 323 KiB |
BIN
website/src/assets/serper_janparams.png
Normal file
|
After Width: | Height: | Size: 248 KiB |
BIN
website/src/assets/serper_page.png
Normal file
|
After Width: | Height: | Size: 1021 KiB |
BIN
website/src/assets/serper_playground.png
Normal file
|
After Width: | Height: | Size: 600 KiB |
BIN
website/src/assets/simpleqa_jan_v1.png
Normal file
|
After Width: | Height: | Size: 212 KiB |
BIN
website/src/assets/simpleqa_lucy.png
Normal file
|
After Width: | Height: | Size: 217 KiB |
BIN
website/src/assets/toggle_tools
Normal file
|
After Width: | Height: | Size: 586 KiB |
BIN
website/src/assets/toggle_tools.png
Normal file
|
After Width: | Height: | Size: 586 KiB |
BIN
website/src/assets/turn_on_mcp.png
Normal file
|
After Width: | Height: | Size: 209 KiB |
@@ -1,38 +1,11 @@
import { defineCollection, z } from 'astro:content';
import { docsLoader } from '@astrojs/starlight/loaders';
import { docsSchema } from '@astrojs/starlight/schema';
import { videosSchema } from 'starlight-videos/schemas';

const changelogSchema = z.object({
title: z.string(),
description: z.string(),
date: z.date(),
version: z.string().optional(),
image: z.string().optional(),
gif: z.string().optional(),
video: z.string().optional(),
featured: z.boolean().default(false),
});

const blogSchema = z.object({
title: z.string(),
description: z.string(),
date: z.date(),
tags: z.string().optional(),
categories: z.string().optional(),
author: z.string().optional(),
ogImage: z.string().optional(),
featured: z.boolean().default(false),
});
import { defineCollection, z } from 'astro:content'
import { docsLoader } from '@astrojs/starlight/loaders'
import { docsSchema } from '@astrojs/starlight/schema'
import { videosSchema } from 'starlight-videos/schemas'

export const collections = {
docs: defineCollection({ loader: docsLoader(), schema: docsSchema({ extend: videosSchema }) }),
changelog: defineCollection({
type: 'content',
schema: changelogSchema,
}),
blog: defineCollection({
type: 'content',
schema: blogSchema,
}),
};
docs: defineCollection({
loader: docsLoader(),
schema: docsSchema({ extend: videosSchema }),
}),
}
199
website/src/content/docs/index.mdx
Normal file
@@ -0,0 +1,199 @@
---
title: Jan
description: Build, run, and own your AI. From laptop to superintelligence.
keywords:
  [
    Jan,
    open superintelligence,
    AI ecosystem,
    self-hosted AI,
    local AI,
    llama.cpp,
    GGUF models,
    MCP tools,
    Model Context Protocol
  ]
---

import { Aside } from '@astrojs/starlight/components';

![](image)

## Jan's Goal

> Jan's goal is to build superintelligence that you can self-host and use locally.

## What is Jan?

Jan is an open-source AI ecosystem that runs on your hardware. We're building towards open superintelligence - a complete AI platform you actually own.

### The Ecosystem

**Models**: We build specialized models for real tasks, not general-purpose assistants:
- **Jan-Nano (32k/128k)**: 4B parameters designed for deep research with MCP. The 128k version processes entire papers, codebases, or legal documents in one go
- **Lucy**: 1.7B model that runs agentic web search on your phone. Small enough for CPU, smart enough for complex searches
- **Jan-v1**: 4B model for agentic reasoning and tool use, achieving 91.1% on SimpleQA

We also integrate the best open-source models - from OpenAI's gpt-oss to community GGUF models on Hugging Face. The goal: make powerful AI accessible to everyone, not just those with server farms.

**Applications**: Jan Desktop runs on your computer today. Web, mobile, and server versions coming in late 2025. Everything syncs, everything works together.

**Tools**: Connect to the real world through [Model Context Protocol (MCP)](https://modelcontextprotocol.io). Design with Canva, analyze data in Jupyter notebooks, control browsers, execute code in E2B sandboxes. Your AI can actually do things, not just talk about them.

<Aside type="tip">
API keys are optional. No account needed. Just download and run. Bring your own API keys to connect your favorite cloud models.
</Aside>

## Core Features

### Run Models Locally
- Download any GGUF model from Hugging Face
- Use OpenAI's gpt-oss models (120b and 20b)
- Automatic GPU acceleration (NVIDIA/AMD/Intel/Apple Silicon)
- OpenAI-compatible API at `localhost:1337`
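For example, the local server speaks the standard chat-completions format. Here is a minimal sketch of a request body; note the model id `jan-v1-4b` is a placeholder assumption - use the id shown in your own Jan model list:

```python
import json

# Build a chat-completions request for Jan's OpenAI-compatible server
# at http://localhost:1337/v1/chat/completions.
# NOTE: the model id below is a placeholder, not a confirmed id.
payload = {
    "model": "jan-v1-4b",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "stream": False,
}
print(json.dumps(payload, indent=2))
```

Send the printed JSON with any HTTP client (for example `curl -d @- http://localhost:1337/v1/chat/completions`) while Jan's local server is running.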
### Connect to Cloud (Optional)
- Your API keys for OpenAI, Anthropic, etc.
- Jan.ai cloud models (coming late 2025)
- Self-hosted Jan Server (soon)

### Extend with MCP Tools
Growing ecosystem of real-world integrations:
- **Creative Work**: Generate designs with Canva
- **Data Analysis**: Execute Python in Jupyter notebooks
- **Web Automation**: Control browsers with Browserbase and Browser Use
- **Code Execution**: Run code safely in E2B sandboxes
- **Search & Research**: Access current information via Exa, Perplexity, and Octagon
- **More coming**: The MCP ecosystem is expanding rapidly

## Architecture

Jan is built on:
- [Llama.cpp](https://github.com/ggerganov/llama.cpp) for inference
- [Model Context Protocol](https://modelcontextprotocol.io) for tool integration
- Local-first data storage in `~/jan`

## Why Jan?

| Feature | Other AI Platforms | Jan |
|:--------|:-------------------|:----|
| **Deployment** | Their servers only | Your device, your servers, or our cloud |
| **Models** | One-size-fits-all | Specialized models for specific tasks |
| **Data** | Stored on their servers | Stays on your hardware |
| **Cost** | Monthly subscription | Free locally, pay for cloud |
| **Extensibility** | Limited APIs | Full ecosystem with MCP tools |
| **Ownership** | You rent access | You own everything |

## Development Philosophy

1. **Local First**: Everything works offline. Cloud is optional.
2. **User Owned**: Your data, your models, your compute.
3. **Built in Public**: Watch our models train. See our code. Track our progress.

<Aside>
We're building AI that respects your choices. Not another wrapper around someone else's API.
</Aside>

## System Requirements

**Minimum**: 8GB RAM, 10GB storage
**Recommended**: 16GB RAM, GPU (NVIDIA/AMD/Intel), 50GB storage
**Supported**: Windows 10+, macOS 12+, Linux (Ubuntu 20.04+)

## What's Next?

<details>
<summary><strong>When will mobile/web versions launch?</strong></summary>

- **Jan Web**: Beta late 2025
- **Jan Mobile**: Late 2025
- **Jan Server**: Late 2025

All versions will sync seamlessly.
</details>

<details>
<summary><strong>What models are available?</strong></summary>

**Jan Models:**
- **Jan-Nano (32k/128k)**: Deep research with MCP integration
- **Lucy**: Mobile-optimized agentic search (1.7B)
- **Jan-v1**: Agentic reasoning and tool use (4B)

**Open Source:**
- OpenAI's gpt-oss models (120b and 20b)
- Any GGUF model from Hugging Face

**Cloud (with your API keys):**
- OpenAI, Anthropic, Mistral, Groq, and more

**Coming late 2025:**
- More specialized models for specific tasks

[Watch live training progress →](https://train.jan.ai)
</details>

<details>
<summary><strong>What are MCP tools?</strong></summary>

MCP (Model Context Protocol) lets AI interact with real applications. Instead of just generating text, your AI can:
- Create designs in Canva
- Analyze data in Jupyter notebooks
- Browse and interact with websites
- Execute code in sandboxes
- Search the web for current information

All through natural language conversation.
</details>

<details>
<summary><strong>How does Jan make money?</strong></summary>

- **Local use**: Always free
- **Cloud features**: Optional paid services (coming late 2025)
- **Enterprise**: Self-hosted deployment and support

We don't sell your data. We sell software and services.
</details>

<details>
<summary><strong>Can I contribute?</strong></summary>

Yes. Everything is open:
- [GitHub](https://github.com/janhq/jan) - Code contributions
- [Model Training](https://jan.ai/docs/models) - See how we train
- [Discord](https://discord.gg/FTk2MvZwJH) - Join discussions
- [Model Testing](https://eval.jan.ai) - Help evaluate models
</details>

<details>
<summary><strong>Is this just another AI wrapper?</strong></summary>

No. We're building:
- Our own models trained for specific tasks
- Complete local AI infrastructure
- Tools that extend model capabilities via MCP
- An ecosystem that works offline

Other platforms are models behind APIs you rent. Jan is a complete AI platform you own.
</details>

<details>
<summary><strong>What about privacy?</strong></summary>

**Local mode**: Your data never leaves your device. Period.
**Cloud mode**: You choose when to use cloud features. Clear separation.

See our [Privacy Policy](./privacy).
</details>

## Get Started

1. [Install Jan Desktop](./jan/installation) - Your AI workstation
2. [Download Models](./jan/models) - Choose from gpt-oss, community models, or cloud
3. [Explore MCP Tools](./mcp) - Connect to real applications
4. [Build with our API](./api-reference) - OpenAI-compatible at localhost:1337

---

**Questions?** Join our [Discord](https://discord.gg/FTk2MvZwJH) or check [GitHub](https://github.com/janhq/jan/).
116
website/src/content/docs/jan/jan-models/jan-v1.mdx
Normal file
@@ -0,0 +1,116 @@
---
title: Jan-v1
description: 4B parameter model with strong performance on reasoning benchmarks
---

import { Aside } from '@astrojs/starlight/components';

## Overview

Jan-v1 is a 4B parameter model based on Qwen3-4B-thinking, designed for reasoning and problem-solving tasks. The model achieves 91.1% accuracy on SimpleQA through model scaling and fine-tuning approaches.

## Performance

### SimpleQA Benchmark

Jan-v1 demonstrates strong factual question-answering capabilities:

![](image)

At 91.1% accuracy, Jan-v1 outperforms several larger models on SimpleQA, including Perplexity's 70B model. This performance represents effective scaling and fine-tuning for a 4B parameter model.

### Chat and Creativity Benchmarks

Jan-v1 has been evaluated on conversational and creative tasks:

![](image)

These benchmarks (EQBench, CreativeWriting, and IFBench) measure the model's ability to handle conversational nuance, creative expression, and instruction following.

## Requirements

- **Memory**:
  - Minimum: 8GB RAM (with Q4 quantization)
  - Recommended: 16GB RAM (with Q8 quantization)
- **Hardware**: CPU or GPU
- **API Support**: OpenAI-compatible at localhost:1337

## Using Jan-v1

### Quick Start

1. Download Jan Desktop
2. Select Jan-v1 from the model list
3. Start chatting - no additional configuration needed

### Demo

![](image)

### Deployment Options

**Using vLLM:**
```bash
vllm serve janhq/Jan-v1-4B \
  --host 0.0.0.0 \
  --port 1234 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes
```

**Using llama.cpp:**
```bash
llama-server --model jan-v1.gguf \
  --host 0.0.0.0 \
  --port 1234 \
  --jinja \
  --no-context-shift
```

### Recommended Parameters

```yaml
temperature: 0.6
top_p: 0.95
top_k: 20
min_p: 0.0
max_tokens: 2048
```
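These sampling settings map directly onto chat-completions request fields. A hedged sketch of a request body that applies them (the model name is a placeholder, and `top_k`/`min_p` are llama.cpp-style extensions that not every OpenAI-compatible server accepts):

```python
import json

# Apply the recommended Jan-v1 sampling parameters to a request body
# for an OpenAI-compatible endpoint such as the llama-server command above.
# NOTE: top_k and min_p are llama.cpp extensions, not core OpenAI fields,
# and the model name is illustrative.
request = {
    "model": "jan-v1.gguf",
    "messages": [{"role": "user", "content": "What is the capital of Australia?"}],
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 20,
    "min_p": 0.0,
    "max_tokens": 2048,
}
print(json.dumps(request))
```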
## What Jan-v1 Does Well

- **Question Answering**: 91.1% accuracy on SimpleQA
- **Reasoning Tasks**: Built on a thinking-optimized base model
- **Tool Calling**: Supports function calling through the hermes parser
- **Instruction Following**: Reliable response to user instructions

## Limitations

- **Model Size**: 4B parameters limits complex reasoning compared to larger models
- **Specialized Tasks**: Optimized for Q&A and reasoning, not specialized domains
- **Context Window**: Standard context limitations apply

## Available Formats

### GGUF Quantizations

- **Q4_K_M**: 2.5 GB - Good balance of size and quality
- **Q5_K_M**: 2.89 GB - Better quality, slightly larger
- **Q6_K**: 3.31 GB - Near-full quality
- **Q8_0**: 4.28 GB - Highest quality quantization

## Models Available

- [Jan-v1 on Hugging Face](https://huggingface.co/janhq/Jan-v1-4B)
- [Jan-v1 GGUF on Hugging Face](https://huggingface.co/janhq/Jan-v1-4B-GGUF)

## Technical Notes

<Aside type="note">
The model includes a system prompt in the chat template by default to match benchmark performance. A vanilla template without a system prompt is available in `chat_template_raw.jinja`.
</Aside>

## Community

- **Discussions**: [HuggingFace Community](https://huggingface.co/janhq/Jan-v1-4B/discussions)
- **Support**: Available through Jan App at [jan.ai](https://jan.ai)
111
website/src/content/docs/jan/jan-models/lucy.mdx
Normal file
@@ -0,0 +1,111 @@
---
title: Lucy
description: Compact 1.7B model optimized for web search with tool calling
---

import { Aside } from '@astrojs/starlight/components';

![](image)

## Overview

Lucy is a 1.7B parameter model built on Qwen3-1.7B, optimized for web search through tool calling. The model has been trained to work effectively with search APIs like Serper, enabling web search capabilities in resource-constrained environments.

## Performance

### SimpleQA Benchmark

Lucy achieves competitive performance on SimpleQA despite its small size:

![](image)

The benchmark compares Lucy (1.7B) against models ranging from 4B to 600B+ parameters. While larger models generally perform better, Lucy demonstrates that effective web search integration can partially compensate for smaller model size.

## Requirements

- **Memory**:
  - Minimum: 4GB RAM (with Q4 quantization)
  - Recommended: 8GB RAM (with Q8 quantization)
- **Search API**: Serper API key required for web search functionality
- **Hardware**: Runs on CPU or GPU

<Aside type="tip">
To use Lucy's web search capabilities, you'll need a Serper API key. Get one at [serper.dev](https://serper.dev).
</Aside>

## Using Lucy

### Quick Start

1. Download Jan Desktop
2. Download Lucy from the Hub
3. Configure Serper MCP with your API key
4. Start using web search through natural language

### Demo

![](image)

### Deployment Options

**Using vLLM:**
```bash
vllm serve Menlo/Lucy-128k \
  --host 0.0.0.0 \
  --port 1234 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes \
  --rope-scaling '{"rope_type":"yarn","factor":3.2,"original_max_position_embeddings":40960}' \
  --max-model-len 131072
```

**Using llama.cpp:**
```bash
llama-server --model model.gguf \
  --host 0.0.0.0 \
  --port 1234 \
  --rope-scaling yarn \
  --rope-scale 3.2 \
  --yarn-orig-ctx 40960
```
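The YaRN flags in both commands encode the same arithmetic: Lucy's native 40,960-token window scaled by a RoPE factor of 3.2 yields the 128k (131,072-token) context. A quick sanity check:

```python
# YaRN context extension used in the commands above: the native
# 40,960-token window scaled by a RoPE factor of 3.2 gives Lucy's
# 128k (131,072-token) context.
native_ctx = 40960
rope_scale = 3.2
effective_ctx = int(native_ctx * rope_scale)
print(effective_ctx)  # 131072
```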
### Recommended Parameters

```yaml
temperature: 0.7
top_p: 0.9
top_k: 20
min_p: 0.0
```

## What Lucy Does Well

- **Web Search Integration**: Optimized to call search tools and process results
- **Small Footprint**: 1.7B parameters means lower memory requirements
- **Tool Calling**: Reliable function calling for search APIs

## Limitations

- **Requires Internet**: Web search functionality needs an active connection
- **API Costs**: Serper API has usage limits and costs
- **Context Processing**: While supporting 128k context, performance may vary with very long inputs
- **General Knowledge**: Limited by the 1.7B parameter size for tasks beyond search

## Models Available

- [Lucy on Hugging Face](https://huggingface.co/Menlo/Lucy-128k)
- [Lucy GGUF on Hugging Face](https://huggingface.co/Menlo/Lucy-128k-gguf)

## Citation

```bibtex
@misc{dao2025lucyedgerunningagenticweb,
  title={Lucy: edgerunning agentic web search on mobile with machine generated task vectors},
  author={Alan Dao and Dinh Bach Vu and Alex Nguyen and Norapat Buppodom},
  year={2025},
  eprint={2508.00360},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2508.00360},
}
```
165
website/src/content/docs/jan/mcp-examples/search/serper.mdx
Normal file
@@ -0,0 +1,165 @@
---
title: Serper Search MCP
description: Connect Jan to real-time web search with Google results through the Serper API.
---

import { Aside } from '@astrojs/starlight/components';

# Serper Search MCP

[Serper](https://serper.dev) provides Google search results through a simple API, making it perfect for giving AI models access to current web information. The Serper MCP integration enables Jan models to search the web and retrieve real-time information.

## Available Tools

- `google_search`: Search Google and retrieve results with snippets
- `scrape`: Extract content from specific web pages

## Prerequisites

- Jan with experimental features enabled
- Serper API key from [serper.dev](https://serper.dev)
- Model with tool calling support (recommended: Jan v1)

<Aside type="tip">
Serper offers 2,500 free searches upon signup - enough for extensive testing and personal use.
</Aside>

## Setup

### Enable Experimental Features

1. Go to **Settings** > **General**
2. Toggle **Experimental Features** ON

![](image)

### Enable MCP

1. Go to **Settings** > **MCP Servers**
2. Toggle **Allow All MCP Tool Permission** ON

![](image)

### Get Serper API Key

1. Visit [serper.dev](https://serper.dev)
2. Sign up for a free account
3. Copy your API key from the playground

![](image)

![](image)

### Configure MCP Server

Click `+` in the MCP Servers section:

**Configuration:**
- **Server Name**: `serper`
- **Command**: `npx`
- **Arguments**: `-y serper-search-scrape-mcp-server`
- **Environment Variables**:
  - Key: `SERPER_API_KEY`, Value: `your-api-key`

![](image)
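In JSON form, the same configuration looks roughly like the sketch below. The exact shape Jan stores may differ; the field names follow the common `mcpServers` convention used by MCP clients, and the key value is a placeholder:

```json
{
  "mcpServers": {
    "serper": {
      "command": "npx",
      "args": ["-y", "serper-search-scrape-mcp-server"],
      "env": { "SERPER_API_KEY": "your-api-key" }
    }
  }
}
```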
### Download Jan v1

Jan v1 is optimized for tool calling and works well with Serper:

1. Go to the **Hub** tab
2. Search for **Jan v1**
3. Choose your preferred quantization
4. Click **Download**

![](image)

### Enable Tool Calling

1. Go to **Settings** > **Model Providers** > **Llama.cpp**
2. Find Jan v1 in your models list
3. Click the edit icon
4. Toggle **Tools** ON

![](image)

## Usage

### Start a New Chat

With Jan v1 selected, you'll see the available Serper tools:

![](image)

### Example Queries

**Current Information:**
```
What are the latest developments in quantum computing this week?
```

**Comparative Analysis:**
```
What are the main differences between the Rust programming language and C++? Be spicy, hot takes are encouraged. 😌
```

**Research Tasks:**
```
Find the current stock price of NVIDIA and recent news about their AI chips.
```

**Fact-Checking:**
```
Is it true that the James Webb telescope found signs of life on an exoplanet? What's the latest?
```

**Local Information:**
```
What restaurants opened in San Francisco this month? Focus on Japanese cuisine.
```

## How It Works

1. **Query Processing**: Jan v1 analyzes your question and determines what to search
2. **Web Search**: Calls the Serper API to get Google search results
3. **Content Extraction**: Can scrape specific pages for detailed information
4. **Synthesis**: Combines search results into a comprehensive answer
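Under the hood, step 2 is an OpenAI-style function call emitted by the model. A sketch of what such a call might look like; the argument schema shown here is an illustrative assumption, not Serper's exact contract:

```python
import json

# Illustrative tool call for the `google_search` tool. The argument
# name "q" is an assumption for illustration; the MCP server defines
# the real schema.
tool_call = {
    "type": "function",
    "function": {
        "name": "google_search",
        "arguments": json.dumps({"q": "latest quantum computing developments"}),
    },
}
args = json.loads(tool_call["function"]["arguments"])
print(args["q"])
```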
## Tips for Best Results

- **Be specific**: "Tesla Model 3 2024 price Australia" works better than "Tesla price"
- **Request recent info**: Add "latest", "current", or "2024/2025" to get recent results
- **Ask follow-ups**: Jan v1 maintains context for deeper research
- **Combine with analysis**: Ask for comparisons, summaries, or insights

## Troubleshooting

**No search results:**
- Verify the API key is correct
- Check remaining credits at serper.dev
- Ensure the MCP server shows as active

**Tools not appearing:**
- Confirm experimental features are enabled
- Verify tool calling is enabled for your model
- Restart Jan after configuration changes

**Poor search quality:**
- Use more specific search terms
- Try rephrasing your question
- Check if the Serper service is operational

<Aside type="caution">
Each search query consumes one API credit. Monitor usage in the serper.dev dashboard.
</Aside>

## API Limits

- **Free tier**: 2,500 searches
- **Paid plans**: Starting at $50/month for 50,000 searches
- **Rate limits**: 100 requests per second

## Next Steps

Serper MCP enables Jan v1 to access current web information, making it a powerful research assistant. Combine it with other MCP tools for even more capabilities - use Serper for search, then E2B for data analysis, or Jupyter for visualization.
157
website/src/content/docs/jan/quickstart.mdx
Normal file
@ -0,0 +1,157 @@
|
||||
---
|
||||
title: QuickStart
|
||||
description: Get started with Jan and start chatting with AI in minutes.
|
||||
keywords:
|
||||
[
|
||||
Jan,
|
||||
local AI,
|
||||
LLM,
|
||||
chat,
|
||||
threads,
|
||||
models,
|
||||
download,
|
||||
installation,
|
||||
conversations,
|
||||
]
|
||||
---
|
||||
|
||||
import { Aside } from '@astrojs/starlight/components';
|
||||
|
||||
# QuickStart
|
||||
|
||||
Get up and running with Jan in minutes. This guide will help you install Jan, download a model, and start chatting immediately.
|
||||
|
||||
<ol>

### Step 1: Install Jan

1. [Download Jan](/download)
2. Install the app ([Mac](/docs/desktop/mac), [Windows](/docs/desktop/windows), [Linux](/docs/desktop/linux))
3. Launch Jan

### Step 2: Download Jan v1

We recommend starting with **Jan v1**, our 4B parameter model optimized for reasoning and tool calling:

1. Go to the **Hub Tab**
2. Search for **Jan v1**
3. Choose a quantization that fits your hardware:
   - **Q4_K_M** (2.5 GB) - Good balance for most users
   - **Q8_0** (4.28 GB) - Best quality if you have the RAM
4. Click **Download**



<Aside type="tip">
Jan v1 achieves 91.1% accuracy on SimpleQA and excels at tool calling, making it perfect for web search and reasoning tasks.
</Aside>

**HuggingFace models:** Some require an access token. Add yours in **Settings > Model Providers > Llama.cpp > Hugging Face Access Token**.



### Step 3: Enable GPU Acceleration (Optional)

For Windows/Linux with compatible graphics cards:

1. Go to **Settings** > **Hardware**
2. Toggle **GPUs** to ON



<Aside type="note">
Install required drivers before enabling GPU acceleration. See the setup guides for [Windows](/docs/desktop/windows#gpu-acceleration) & [Linux](/docs/desktop/linux#gpu-acceleration).
</Aside>

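Before toggling the setting, you can check from a terminal whether GPU drivers are already installed (a sketch for NVIDIA cards; AMD and Intel ship their own diagnostic tools):

```shell
# Prints GPU name and driver version if NVIDIA drivers are installed,
# otherwise falls through to a reminder message.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,driver_version --format=csv
else
  echo "nvidia-smi not found - install GPU drivers before enabling acceleration"
fi
```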
### Step 4: Start Chatting

1. Click the **New Chat** icon
2. Select your model in the input field dropdown
3. Type your message and start chatting



Try asking Jan v1 questions like:
- "Explain quantum computing in simple terms"
- "Help me write a Python function to sort a list"
- "What are the pros and cons of electric vehicles?"

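For the sorting prompt above, a typical answer looks something like this (illustrative output only; the exact code Jan v1 generates will vary):

```python
def sort_numbers(numbers, descending=False):
    """Return a new list sorted in ascending (or descending) order."""
    return sorted(numbers, reverse=descending)

print(sort_numbers([3, 1, 2]))                   # [1, 2, 3]
print(sort_numbers([3, 1, 2], descending=True))  # [3, 2, 1]
```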
<Aside type="tip">
**Want to give Jan v1 access to current web information?** Check out our [Serper MCP tutorial](/docs/mcp-examples/search/serper) to enable real-time web search with 2,500 free searches!
</Aside>

</ol>

## Managing Conversations

Jan organizes conversations into threads for easy tracking and revisiting.

### View Chat History

- **Left sidebar** shows all conversations
- Click any chat to open the full conversation
- **Favorites**: Pin important threads for quick access
- **Recents**: Access recently used threads



### Edit Chat Titles

1. Hover over a conversation in the sidebar
2. Click the **three dots** icon
3. Click **Rename**
4. Enter the new title and save



### Delete Threads

<Aside type="caution">
Thread deletion is permanent. There is no undo.
</Aside>

**Single thread:**
1. Hover over the thread in the sidebar
2. Click the **three dots** icon
3. Click **Delete**

**All threads:**
1. Hover over the `Recents` category
2. Click the **three dots** icon
3. Select **Delete All**

## Advanced Features

### Custom Assistant Instructions

Customize how models respond:

1. Use the assistant dropdown in the input field
2. Or go to the **Assistant tab** to create custom instructions
3. Instructions work across all models





### Model Parameters

Fine-tune model behavior:
- Click the **Gear icon** next to your model
- Adjust parameters in **Assistant Settings**
- Switch models via the **model selector**



### Connect Cloud Models (Optional)

Connect to OpenAI, Anthropic, Groq, Mistral, and others:

1. Open any thread
2. Select a cloud model from the dropdown
3. Click the **Gear icon** beside the provider
4. Add your API key (ensure sufficient credits)



For detailed setup, see [Remote APIs](/docs/remote-models/openai).