Merge pull request #6562 from menloresearch/emre/docsv2

Update handbook content with Nextra callout and content improvements
This commit is contained in:
Faisal Amir 2025-09-26 22:13:28 +07:00 committed by GitHub
commit abb0da491b
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
16 changed files with 84 additions and 286 deletions

View File

@@ -1,6 +1,6 @@
# Jan - Local AI Assistant
# Jan - Open-source ChatGPT replacement
![Jan AI](docs/src/pages/docs/_assets/jan-app.png)
<img width="2048" height="280" alt="github jan banner" src="https://github.com/user-attachments/assets/f3f87889-c133-433b-b250-236218150d3f" />
<p align="center">
<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
@@ -12,15 +12,13 @@
</p>
<p align="center">
<a href="https://jan.ai/docs/quickstart">Getting Started</a>
- <a href="https://jan.ai/docs">Docs</a>
<a href="https://www.jan.ai/docs/desktop">Getting Started</a>
- <a href="https://discord.gg/Exe46xPMbK">Community</a>
- <a href="https://jan.ai/changelog">Changelog</a>
- <a href="https://github.com/menloresearch/jan/issues">Bug reports</a>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
Jan is an AI assistant that can run 100% offline on your device. Download and run LLMs with
**full control** and **privacy**.
Jan is bringing the best of open-source AI in an easy-to-use product. Download and run LLMs with **full control** and **privacy**.
## Installation
@@ -29,41 +27,36 @@ The easiest way to get started is by downloading one of the following versions f
<table>
<tr>
<td><b>Platform</b></td>
<td><b>Stable</b></td>
<td><b>Nightly</b></td>
<td><b>Download</b></td>
</tr>
<tr>
<td><b>Windows</b></td>
<td><a href='https://app.jan.ai/download/latest/win-x64'>jan.exe</a></td>
<td><a href='https://app.jan.ai/download/nightly/win-x64'>jan.exe</a></td>
</tr>
<tr>
<td><b>macOS</b></td>
<td><a href='https://app.jan.ai/download/latest/mac-universal'>jan.dmg</a></td>
<td><a href='https://app.jan.ai/download/nightly/mac-universal'>jan.dmg</a></td>
</tr>
<tr>
<td><b>Linux (deb)</b></td>
<td><a href='https://app.jan.ai/download/latest/linux-amd64-deb'>jan.deb</a></td>
<td><a href='https://app.jan.ai/download/nightly/linux-amd64-deb'>jan.deb</a></td>
</tr>
<tr>
<td><b>Linux (AppImage)</b></td>
<td><a href='https://app.jan.ai/download/latest/linux-amd64-appimage'>jan.AppImage</a></td>
<td><a href='https://app.jan.ai/download/nightly/linux-amd64-appimage'>jan.AppImage</a></td>
</tr>
</table>
Download from [jan.ai](https://jan.ai/) or [GitHub Releases](https://github.com/menloresearch/jan/releases).
## Features
- **Local AI Models**: Download and run LLMs (Llama, Gemma, Qwen, etc.) from HuggingFace
- **Cloud Integration**: Connect to OpenAI, Anthropic, Mistral, Groq, and others
- **Local AI Models**: Download and run LLMs (Llama, Gemma, Qwen, GPT-oss, etc.) from Hugging Face
- **Cloud Integration**: Connect to GPT models via OpenAI, Claude models via Anthropic, as well as Mistral, Groq, and others
- **Custom Assistants**: Create specialized AI assistants for your tasks
- **OpenAI-Compatible API**: Local server at `localhost:1337` for other applications
- **Model Context Protocol**: MCP integration for enhanced capabilities
- **Model Context Protocol**: MCP integration for agentic capabilities
- **Privacy First**: Everything runs locally when you want it to
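Since the local server speaks the OpenAI wire format, any HTTP client can talk to it. A minimal sketch of building a chat-completions request against `localhost:1337` (the model name and prompt are illustrative assumptions, not a guaranteed install):

```python
import json

# Hypothetical sketch of calling Jan's OpenAI-compatible local server.
# The endpoint path and payload shape follow the OpenAI chat-completions
# convention; the model name below is an assumption.
JAN_API = "http://localhost:1337/v1/chat/completions"

def build_chat_request(prompt, model="llama3.2-3b-instruct"):
    """Return the JSON body for an OpenAI-style chat completion call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

payload = build_chat_request("Summarize this repo in one sentence.")
body = json.dumps(payload)  # POST this to JAN_API with urllib or requests
```

Because the shape matches OpenAI's API, existing OpenAI client libraries can usually be pointed at `JAN_API` unchanged.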
## Build from Source

View File

@@ -9,7 +9,7 @@
},
"desktop": {
"type": "page",
"title": "Jan Desktop & Mobile"
"title": "Jan Desktop"
},
"server": {
"type": "page",

View File

@@ -42,6 +42,5 @@
},
"settings": "Settings",
"data-folder": "Jan Data Folder",
"troubleshooting": "Troubleshooting",
"privacy": "Privacy"
"troubleshooting": "Troubleshooting"
}

View File

@@ -22,228 +22,52 @@ keywords:
import { Callout } from 'nextra/components'
import FAQBox from '@/components/FaqBox'
# Jan
![Jan's Cover Image](./_assets/jan-app-new.png)
## Jan's Goal
> We're working towards open superintelligence to make a viable open-source alternative to platforms like ChatGPT
and Claude that anyone can own and run.
## What is Jan Today
Jan is an open-source AI platform that runs on your hardware. We believe AI should be in the hands of many, not
controlled by a few tech giants.
Today, Jan is:
- **A desktop app** that runs AI models locally or connects to cloud providers
- **A model hub** making the latest open-source models accessible
- **A connector system** that lets AI interact with real-world tools via MCP
Tomorrow, Jan aims to be a complete ecosystem where open models rival or exceed closed alternatives.
# Overview
<Callout type="info">
We're building this with the open-source AI community, using the best available tools, and sharing everything
we learn along the way.
We're building [Open Superintelligence](https://jan.ai/handbook/open-superintelligence) together.
</Callout>
## The Jan Ecosystem
Jan is an open-source replacement for ChatGPT:
- AI Models: Use AI models with agentic capabilities
- [Open-source Models](/docs/desktop/manage-models): Run open-source locally
- [Cloud Models](/docs/desktop/remote-models/anthropic): Connect to remote models with API keys
- [Assistants](/docs/desktop/assistants): Create custom AI assistants
- [MCP Servers](/docs/desktop/mcp): Integrate MCP Servers to give agentic capabilities to AI models
- Jan Hub: Browse, install, and [manage models](/docs/desktop/manage-models)
- Local API Server: Expose an [OpenAI-compatible API](/docs/desktop/api-server) from your own machine or server
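MCP servers are typically registered through a JSON config entry. A hypothetical sketch of such an entry (the `mcpServers` shape follows the common MCP client convention; the server name, package, and env var are placeholder assumptions, not Jan's documented schema):

```python
import json

# Illustrative MCP server registration. Everything here is a placeholder:
# "web-search" is an invented server name and "some-mcp-search-server" is
# not a real package.
mcp_config = {
    "mcpServers": {
        "web-search": {
            "command": "npx",
            "args": ["-y", "some-mcp-search-server"],
            "env": {"API_KEY": "<your-key>"},
        }
    }
}

config_json = json.dumps(mcp_config, indent=2)  # shape to adapt for Jan's MCP settings
```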
### Jan Apps
**Available Now:**
- **Desktop**: Full-featured AI workstation for Windows, Mac, and Linux
## Product Suite
**Coming Late 2025:**
- **Mobile**: Jan on your phone
- **Web**: Browser-based access at jan.ai
- **Server**: Self-hosted for teams
- **Extensions**: Browser extension for Chrome-based browsers
Jan is a full [product suite](https://en.wikipedia.org/wiki/Software_suite) that offers an alternative to Big AI:
- [Jan Desktop](/docs/desktop/quickstart): macOS, Windows, and Linux apps with offline mode
- [Jan Web](https://chat.jan.ai): Jan on browser, a direct alternative to chatgpt.com
- Jan Mobile: iOS and Android apps (Coming Soon)
- [Jan Server](/docs/server): deploy locally, in your cloud, or on-prem
- [Jan Models](/docs/models): Open-source models optimized for deep research, tool use, and reasoning
### Jan Model Hub
Making open-source AI accessible to everyone:
- **Easy Downloads**: One-click model installation
- **Jan Models**: Our own models optimized for local use
- **Jan-v1**: 4B reasoning model specialized in web search
- **Research Models**
- **Jan-Nano (32k/128k)**: 4B model for web search with MCP tools
- **Lucy**: 1.7B mobile-optimized for web search
- **Community Models**: Any GGUF from Hugging Face works in Jan
- **Cloud Models**: Connect your API keys for OpenAI, Anthropic, Gemini, and more
### Extending Jan (Coming Soon)
Jan helps you customize and align Open Superintelligence:
- Jan Connectors: Extend Jan with integrations
- Jan Studio: Fine-tune, align, and guardrail
- Evals: Benchmark models across industries, regions, and alignment dimensions
## Principles
### Jan Connectors Hub
Connect AI to the tools you use daily via [Model Context Protocol](./mcp):
- [Open source](https://www.redhat.com/en/blog/open-source-culture-9-core-principles-and-values): [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) licensed, built in public.
- No [vendor lock-in](https://en.wikipedia.org/wiki/Vendor_lock-in): Switch freely between local and frontier models.
- [Right to Repair](https://en.wikipedia.org/wiki/Right_to_repair): Inspect, audit, and modify your AI stack.
**Creative & Design:**
- **Canva**: Generate and edit designs
**Data & Analysis:**
- **Jupyter**: Run Python notebooks
- **E2B**: Execute code in sandboxes
**Web & Search:**
- **Browserbase & Browser Use**: Browser automation
- **Exa, Serper, Perplexity**: Advanced web search
- **Octagon**: Deep research capabilities
**Productivity:**
- **Linear**: Project management
- **Todoist**: Task management
## Core Features
- **Run Models Locally**: Download any GGUF model from Hugging Face, use OpenAI's gpt-oss models,
or connect to cloud providers
- **OpenAI-Compatible API**: Local server at `localhost:1337` works with tools like
[Continue](./server-examples/continue-dev) and [Cline](https://cline.bot/)
- **Extend with MCP Tools**: Browser automation, web search, data analysis, and design tools, all
through natural language
- **Your Choice of Infrastructure**: Run on your laptop, self-host on your servers (soon), or use
cloud when you need it
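Swapping the base URL is all an OpenAI-compatible client needs to target Jan instead of a hosted provider. A sketch that prepares (but does not send) a request against the local `/v1/models` endpoint; the bearer token is a placeholder assumption, since a local server may not require one:

```python
import urllib.request

# Point any OpenAI-compatible client at Jan by swapping the base URL.
BASE_URL = "http://localhost:1337/v1"

req = urllib.request.Request(
    f"{BASE_URL}/models",
    headers={"Authorization": "Bearer not-needed-locally"},  # placeholder token
)
# urllib.request.urlopen(req) would return the JSON model list once
# Jan's local server is running.
```

Tools like Continue and Cline follow the same pattern: they accept a custom base URL, so no Jan-specific integration code is needed.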
## Philosophy
Jan is built to be user-owned:
- **Open Source**: Apache 2.0 license
- **Local First**: Your data stays on your device. Internet is optional
- **Privacy Focused**: We don't collect or sell user data. See our [Privacy Policy](./privacy)
- **No Lock-in**: Export your data anytime. Use any model. Switch between local and cloud
<Callout>
The best AI is the one you control. Not the one that others control for you.
</Callout>
## The Path Forward
### What Works Today
- Run powerful models locally on consumer hardware
- Connect to any cloud provider with your API keys
- Use MCP tools for real-world tasks
- Access transparent model evaluations
### What We're Building
- More specialized models that excel at specific tasks
- Expanded app ecosystem (mobile, web, extensions)
- Richer connector ecosystem
- An evaluation framework to build better models
### The Long-Term Vision
We're working towards open superintelligence where:
- Open models match or exceed closed alternatives
- Anyone can run powerful AI on their own hardware
- The community drives innovation, not corporations
- AI capabilities are owned by users, not rented
<Callout type="warning">
This is an ambitious goal without a guaranteed path. We're betting on the open-source community, improved
hardware, and better techniques, but we're honest that this is a journey, not a destination we've reached.
</Callout>
## Quick Start
1. [Download Jan](./quickstart) for your operating system
2. Choose a model - download locally or add cloud API keys
3. Start chatting or connect tools via MCP
4. Build with our [local API](./api-server)
Jan grows through contribution. It is shaped by many and belongs to everyone who uses it.
## Acknowledgements
Jan is built on the shoulders of giants:
- [Llama.cpp](https://github.com/ggerganov/llama.cpp) for inference
- [Model Context Protocol](https://modelcontextprotocol.io) for tool integration
- The open-source community that makes this possible
> Good artists copy, great artists steal.
## FAQs
Jan exists because we've borrowed, learned, and built on the work of others.
<FAQBox title="What is Jan?">
Jan is an open-source AI platform working towards a viable alternative to Big Tech AI. Today it's a desktop app that runs models locally or connects to cloud providers. Tomorrow it aims to be a complete ecosystem rivaling platforms like ChatGPT and Claude.
</FAQBox>
<FAQBox title="How is this different from other AI platforms?">
Other platforms are models behind APIs you rent. Jan is a complete AI ecosystem you own. Run any model, use real tools through MCP, keep your data private, and never pay subscriptions for local use.
</FAQBox>
<FAQBox title="What models can I use?">
**Jan Models:**
- Jan-Nano (32k/128k) - Research and analysis with MCP integration
- Lucy - Mobile-optimized search (1.7B)
- Jan-v1 - Reasoning and tool use (4B)
**Open Source:**
- OpenAI's gpt-oss models (120b and 20b)
- Any GGUF model from Hugging Face
**Cloud (with your API keys):**
- OpenAI, Anthropic, Mistral, Groq, and more
</FAQBox>
<FAQBox title="What are MCP tools?">
MCP (Model Context Protocol) lets AI interact with real applications. Instead of just generating text, your AI can create designs in Canva, analyze data in Jupyter, browse the web, and execute code - all through conversation.
</FAQBox>
<FAQBox title="Is Jan compatible with my system?">
**Supported OS**:
- [Windows 10+](/docs/desktop/install/windows#compatibility)
- [macOS 12+](/docs/desktop/install/mac#compatibility)
- [Linux (Ubuntu 20.04+)](/docs/desktop/install/linux)
**Hardware**:
- Minimum: 8GB RAM, 10GB storage
- Recommended: 16GB RAM, GPU (NVIDIA/AMD/Intel/Apple), 50GB storage
</FAQBox>
<FAQBox title="How realistic is 'open superintelligence'?">
Honestly? It's ambitious and uncertain. We believe the combination of rapidly improving open models, better consumer hardware, community innovation, and specialized models working together can eventually rival closed platforms. But this is a multi-year journey with no guarantees. What we can guarantee is that we'll keep building in the open, with the community, towards this goal.
</FAQBox>
<FAQBox title="What can Jan actually do today?">
Right now, Jan can:
- Run models like Llama, Mistral, and our own Jan models locally
- Connect to cloud providers if you want more power
- Use MCP tools to create designs, analyze data, browse the web, and more
- Work completely offline once models are downloaded
- Provide an OpenAI-compatible API for developers
</FAQBox>
<FAQBox title="Is Jan really free?">
**Local use**: Always free, no catches
**Cloud models**: You pay providers directly (we add no markup)
**Jan cloud**: Optional paid services coming 2025
The core platform will always be free and open source.
</FAQBox>
<FAQBox title="How does Jan protect privacy?">
- Runs 100% offline once models are downloaded
- All data stored locally in [Jan Data Folder](/docs/desktop/data-folder)
- No telemetry without explicit consent
- Open source code you can audit
<Callout type="warning">
When using cloud providers through Jan, their privacy policies apply.
</Callout>
</FAQBox>
<FAQBox title="Can I self-host Jan?">
Yes. Download directly or build from [source](https://github.com/menloresearch/jan). Jan Server for production deployments coming late 2025.
</FAQBox>
<FAQBox title="When will mobile/web versions launch?">
- **Jan Web**: Beta late 2025
- **Jan Mobile**: Late 2025
- **Jan Server**: Late 2025
All versions will sync seamlessly.
</FAQBox>
<FAQBox title="How can I contribute?">
- Code: [GitHub](https://github.com/menloresearch/jan)
- Community: [Discord](https://discord.gg/FTk2MvZwJH)
- Testing: Help evaluate models and report bugs
- Documentation: Improve guides and tutorials
</FAQBox>
<FAQBox title="Are you hiring?">
Yes! We love hiring from our community. Check [Careers](https://menlo.bamboohr.com/careers).
</FAQBox>
- [llama.cpp](https://github.com/ggerganov/llama.cpp) and [GGML](https://github.com/ggerganov/ggml) for efficient inference
- [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/) for ideas, feedback, and debate
- [Model Context Protocol](https://modelcontextprotocol.io) for MCP integrations
- [PostHog](https://posthog.com/docs) for docs inspiration
- The open-source community for contributions, bug reports, and improvements

View File

@@ -1,6 +1,6 @@
---
title: Linux
description: Get started quickly with Jan, an AI chat application that runs 100% offline on your desktop & mobile (*coming soon*).
description: Install Jan to run AI models locally on Linux. Works offline with GPU acceleration on Ubuntu, Debian, and other distributions.
keywords:
[
Jan,
@@ -244,7 +244,7 @@ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
### Step 2: Enable GPU Acceleration
1. Navigate to **Settings** (<Settings width={16} height={16} style={{display:"inline"}}/>) > **Local Engine** > **Llama.cpp**
2. Select appropriate backend in **llama-cpp Backend**. Details in our [guide](/docs/desktop/local-engines/llama-cpp).
2. Select appropriate backend in **llama-cpp Backend**. Details in our [llama.cpp guide](/docs/desktop/llama-cpp).
<Callout type="info">
CUDA offers better performance than Vulkan.

View File

@@ -1,6 +1,6 @@
---
title: Windows
description: Run AI models locally on your Windows machine with Jan. Quick setup guide for local inference and chat.
description: Install Jan to run AI models locally on Windows. Works offline with GPU acceleration on Windows 10 and 11.
keywords:
[
Jan,

View File

@@ -59,7 +59,7 @@ The model and its different model variants are fully supported by Jan.
## Using Jan-Nano-32k
**Step 1**
Download Jan from [here](https://jan.ai/docs/desktop/).
Download Jan from [here](https://jan.ai/download/).
**Step 2**
Go to the Hub tab, search for Jan-Nano-Gguf, and click the download button for the model size that best fits your system.
@@ -118,8 +118,8 @@ Here are some example queries to showcase Jan-Nano's web search capabilities:
- 4xA6000 for vllm server (inferencing)
- What frontend should I use?
- Jan Beta (recommended) - Minimalistic and polished interface
- Download link: https://jan.ai/docs/desktop/beta
- Jan (recommended)
- Download link: https://jan.ai/download
- Getting Jinja errors in LM Studio?
- Use Qwen3 template from other LM Studio compatible models

View File

@@ -108,7 +108,7 @@ You can help improve Jan by sharing anonymous usage data:
2. You can change this setting at any time
<Callout type="info">
Read more about what we collect from opt-in users at [Privacy](/docs/desktop/privacy).
Read more about what we collect from opt-in users at [Privacy](/privacy).
</Callout>
<br/>
@@ -141,7 +141,7 @@ This action cannot be undone.
### Jan Data Folder
Jan stores your data locally in your own filesystem in a universal file format. See the detailed [Jan Folder Structure](docs/data-folder#folder-structure).
Jan stores your data locally in your own filesystem in a universal file format. See the detailed [Jan Folder Structure](/docs/desktop/data-folder#directory-structure).
**1. Open Jan Data Folder**

View File

@@ -328,14 +328,14 @@ This command ensures that the necessary permissions are granted for Jan's instal
When you start a chat with a model and encounter a **Failed to Fetch** or **Something's Amiss** error, here are some possible solutions to resolve it:
**1. Check System & Hardware Requirements**
- Hardware dependencies: Ensure your device meets all [hardware requirements](docs/desktop/troubleshooting#step-1-verify-hardware-and-system-requirements)
- OS: Ensure your operating system meets the minimum requirements ([Mac](/docs/desktop/install/mac#minimum-requirements), [Windows](/docs/desktop/install/windows#compatibility), [Linux](/docs/desktop/install/linux#compatibility))
- Hardware dependencies: Ensure your device meets all [hardware requirements](troubleshooting#step-1-verify-hardware-and-system-requirements)
- OS: Ensure your operating system meets the minimum requirements ([Mac](https://www.jan.ai/docs/desktop/install/mac#minimum-requirements), [Windows](/docs/desktop/install/windows#compatibility), [Linux](/docs/desktop/install/linux#compatibility))
- RAM: Choose models that use less than 80% of your available RAM
- For 8GB systems: Use models under 6GB
- For 16GB systems: Use models under 13GB
**2. Check Model Parameters**
- In **Engine Settings** in right sidebar, check your `ngl` ([number of GPU layers](/docs/desktop/models/model-parameters#engine-parameters)) setting to see if it's too high
- In **Engine Settings** in right sidebar, check your `ngl` ([number of GPU layers](/docs/desktop/model-parameters)) setting to see if it's too high
- Start with a lower NGL value and increase gradually based on your GPU memory
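The sizing rule of thumb above (keep a model under roughly 80% of available RAM) can be sketched as a tiny helper. The function and threshold are illustrative, not part of Jan:

```python
# Illustrative check encoding the "use less than 80% of your available RAM"
# guidance from the troubleshooting steps above.
def fits_in_ram(model_size_gb: float, ram_gb: float, headroom: float = 0.8) -> bool:
    """True if the model leaves the recommended ~20% RAM headroom."""
    return model_size_gb <= ram_gb * headroom

assert fits_in_ram(6, 8)        # 8GB system: a ~6GB model fits
assert not fits_in_ram(14, 16)  # 16GB system: stay under ~13GB
```

The same back-of-the-envelope reasoning applies when picking an `ngl` value: start low and raise it while GPU memory allows.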
**3. Port Conflicts**

View File

@@ -1,5 +1,4 @@
{
"index": "Overview",
"open-superintelligence": "Open Superintelligence",
"betting-on-open-source": "Betting on Open-Source"
"why": "Why does Jan exist?"
}

View File

@@ -18,31 +18,6 @@ Jan's Handbook is a [living document](https://en.wikipedia.org/wiki/Living_docum
## Why does Jan exist?
### [Open Superintelligence](/handbook/open-superintelligence)
Building superintelligence that belongs to everyone, not just a few tech giants. We believe the future of AI should be open, accessible, and owned by the people who use it.
### [Betting on Open-Source](/handbook/betting-on-open-source)
- [Open Superintelligence](/handbook/open-superintelligence) - Building superintelligence that belongs to everyone, not just a few tech giants. We believe the future of AI should be open, accessible, and owned by the people who use it.
- [Betting on Open-Source](/handbook/betting-on-open-source)
Why we're betting on open-source as the future of AI and technology. Open-source has consistently won in the long term, and AI will be no different.
---
## Quick Links
- **For the curious**: Start with [Open Superintelligence](/handbook/open-superintelligence)
- **For developers**: Learn about [Betting on Open-Source](/handbook/betting-on-open-source)
- **For contributors**: Check out our [GitHub](https://github.com/menloresearch/jan) and [Discord](https://discord.gg/FTk2MvZwJH)
## Our North Star
We're building superintelligence that:
- **Works anywhere**: From your laptop to your data center
- **Belongs to you**: Download it, own it, modify it
- **Scales infinitely**: One person or ten thousand, same platform
- **Improves constantly**: Community-driven development
This isn't just about making AI accessible. It's about ensuring the most transformative technology in human history can be owned by those who use it.
---
_"The future of AI isn't about choosing between local or cloud. It's about having both, and everything in between, working perfectly together."_

View File

@@ -0,0 +1,4 @@
{
"open-superintelligence": "Why Jan exists",
"betting-on-open-source": "Why we're betting on open-source"
}

View File

@@ -1,11 +1,11 @@
---
title: "Why Open-Source"
title: "Why Jan is betting on Open-Source"
description: "Why we're betting on open-source."
---
# Why Open-Source
AI today is concentrated in the hands of a few companies. They ask for trust, while keeping the levers of control hidden. We think that's a mistake.
AI today is concentrated in the hands of [a few companies](https://stratechery.com/2025/tech-philosophy-and-ai-opportunity/). They ask for trust, while keeping the levers of control hidden. We think that's a mistake.
When you depend on one vendor, your future is tied to their roadmap, their politics, their survival. If they get acquired, pivot, or shut down; you're stuck.
@@ -16,9 +16,9 @@ Depending on a closed vendor means giving up more than flexibility:
AI has become critical infrastructure. Nations, enterprises, even small teams rely on it to think and decide. And yet, control sits with a few vendors who decide the terms of access. We believe that's not control. That's dependency dressed up as convenience. One of the most powerful inventions is being steered by a handful of executives. Their values shape what billions can say, build, or ask.
*This cannot stand. It must be changed.*
This can't stand. It must be changed.
## Jan's Bet
## How we see it
We don't believe the future of AI should be dictated by a few firms in San Francisco, Beijing, or anywhere else.
@@ -30,4 +30,4 @@ That's why we're building Jan, a full product suite:
- Jan Server
- Hub, Store, evals, guardrails, the ecosystem around it
The goal is to be the open-source replacement for ChatGPT and other BigAI products, with models and tools you can run, own, and trust.
The goal is to be the [open-source replacement for ChatGPT](https://jan.ai/) and other BigAI products, with models and tools you can run, own, and trust.

View File

@@ -5,9 +5,13 @@ description: "Short answer: Open Superintelligence."
# Why does Jan exist?
> Short answer: Open Superintelligence.
import { Callout } from 'nextra/components'
In 1879, Edison lit a single street in [Menlo Park](https://en.wikipedia.org/wiki/Menlo_Park,_California). What mattered wasn't the bulb. It was that power could reach homes, schools, and factories.
<Callout type="info">
Short answer: Open Superintelligence.
</Callout>
In 1879, [Edison](https://en.wikipedia.org/wiki/Thomas_Edison) lit a single street in [Menlo Park](https://en.wikipedia.org/wiki/Menlo_Park,_California). What mattered wasn't the bulb. It was that power could reach homes, schools, and factories.
Electricity changed the world only when it became universal. Standard plugs, cheap generation, lines everywhere. People stopped talking about electricity and started using light, cold chains, and machines.
@@ -19,13 +23,13 @@ Jan exists to push intelligence toward the first path: Open Superintelligence yo
> The world is made, and can be remade.
Every industrial wave redefined critical aspects of our daily lives:
- Factories introduced shift clocks and wage rhythms
- Steam gave way to electricity and standardized parts
- Rail, telegraph, and later networks changed how decisions travel
- Each wave pulled new bargains into being: skills, schools, safety nets, labor law
Every industrial wave redefined new defaults of our daily lives:
- [Factories](https://en.wikipedia.org/wiki/Factory) created the modern job
- [Electricity](https://en.wikipedia.org/wiki/Electricity) created the modern home
- [Railroads](https://en.wikipedia.org/wiki/Rail_transport#History) and [telegraphs](https://en.wikipedia.org/wiki/Telegraphy#History) created the modern nation
- [The Internet](https://en.wikipedia.org/wiki/Internet) created the modern world
So what we're interested in is who is going to write the new defaults and share in the gains.
Open Superintelligence will create what comes next. What we're interested in is who is going to write the new defaults and share in the gains.
Technology doesn't choose its path, people do. Power accrues to whoever designs, deploys, and profits from the system:
- If intelligence is closed and centralized, the gains concentrate

View File

@@ -17,7 +17,7 @@ Jan now supports [NVIDIA TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) i
We've been excited for TensorRT-LLM for a while, and [had a lot of fun implementing it](https://github.com/menloresearch/nitro-tensorrt-llm). As part of the process, we've run some benchmarks to see how TensorRT-LLM fares on consumer hardware (e.g. [4090s](https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/), [3090s](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/)) we commonly see in [Jan's hardware community](https://discord.com/channels/1107178041848909847/1201834752206974996).
<Callout type="info" >
**Give it a try!** Jan's [TensorRT-LLM extension](/docs/desktop/built-in/tensorrt-llm) is available in Jan v0.4.9 and up ([see more](/docs/desktop/built-in/tensorrt-llm)). We precompiled some TensorRT-LLM models for you to try: `Mistral 7b`, `TinyLlama-1.1b`, `TinyJensen-1.1b` 😂
**Give it a try!** Jan's TensorRT-LLM extension is available in Jan v0.4.9 and up ([see more](/docs/built-in/tensorrt-llm)). We precompiled some TensorRT-LLM models for you to try: `Mistral 7b`, `TinyLlama-1.1b`, `TinyJensen-1.1b` 😂
Bugs or feedback? Let us know on [GitHub](https://github.com/menloresearch/jan) or via [Discord](https://discord.com/channels/1107178041848909847/1201832734704795688).
</Callout>

View File

@@ -125,8 +125,8 @@
**The Key: Assistants + Tools**
Running deep research in Jan can be accomplished by combining [custom assistants](https://jan.ai/docs/assistants)
with [MCP search tools](https://jan.ai/docs/desktop/mcp-examples/search/exa). This pairing allows any model—local or
Running deep research in Jan can be accomplished by combining [custom assistants](https://jan.ai/docs/desktop/assistants)
with [MCP search tools](https://jan.ai/docs/mcp-examples/search/exa). This pairing allows any model—local or
cloud—to follow a systematic research workflow, to create a report similar to that of other providers, with some
visible limitations (for now).