vision update

This commit is contained in:
Ramon Perez 2025-07-30 11:18:11 +10:00
parent 04353a289c
commit 9e43f61366
14 changed files with 181 additions and 192 deletions

Binary image file added (745 KiB).

@ -13,112 +13,81 @@ import { Aside, Card, CardGrid } from '@astrojs/starlight/components';
We know it's hard, but we believe this will be possible in the next decade through a combination of
models, applications, and tools. For this, we are...
> **building a solution that ties all of these together seamlessly so that users, regardless of their technical
background, are able to use Jan the same way they use other applications, while owning them.**
> **building Jan as the ecosystem that ties all of these together seamlessly so that users, regardless of their technical
background, can add intelligence to their day-to-day lives with the tools they already use, only better.**
## What We're Building
![Jan Vision](../../assets/jan-vision.png)
**Jan Ecosystem** = Jan Models + Jan Application + Jan Tools
Unlike other AI assistants that tackle specific tasks with a single model, or spread many models across a myriad of solutions, Jan provides:
- Its own specialised models, optimised for specific tasks like web search, creative writing, and translation
- Applications that work across all of your devices in an integrated way
- Tools to help you get things done
## Core Principles
## Two Modes
### 1) Build the Full Stack
### Local Mode
Run AI models entirely on your device, giving you complete privacy with no internet required.
Models alone aren't enough. Neither are applications. Superintelligence requires models that
understand your needs, tools that extend capabilities, and applications that tie it all
together. We're building all three, openly.
![Jan Desktop](../../assets/jan_desktop.png)
### 2) You Choose Who Runs It
Run Jan on your laptop. Self-host it on your servers. Use our cloud. The same superintelligence
works everywhere. Your data, your compute, your choice.
### Cloud Mode
Connect to more powerful models when needed - either self-hosted or via jan.ai.
### 3) Start Simple, Scale Infinitely
Open Jan and start chatting. No setup required. When you need more - better models, advanced tools,
team deployment - everything's there. The complexity scales with your ambition, not our architecture.
![Jan Everywhere](../../assets/jan_everywhere.png)
## The Path to Superintelligence
<Aside type="note">
Users shouldn't need to understand models, APIs, or technical details. Just choose Local for privacy or Cloud for power.
</Aside>
### Today we have the Foundation
## Our Product Principles
- **A desktop app** that works with both local and cloud-based models
- **Jan models** small enough to run on any laptop and powerful enough to scale on any server
- **Basic tools** enabled through MCP: search, file parsing, and simple workflows
### 1) It Just Works
1. Open Jan, start chatting
2. Onboarding is fully available but optional
3. Setting up an API key is optional
4. Selecting a local model is optional
5. Become a power user at your own pace, if you want to
### Next 12 Months: Ecosystem
- **The Jan v1 models** are a specialized series of models with general capabilities but optimised
for specific tasks like search, analysis, creative writing, and more
- **The Jan server** works as self-hosted AI infrastructure for teams
- **Advanced tools** like browser use, deep research, and long-term memory work across devices, excel
across different day-to-day use cases, and scale with the needs of large teams
- **Cross-device sync** allows you to take your AI everywhere
We handle the complexity.
### End State: Open Superintelligence
Not one massive model, but an ecosystem of specialized models, tools, and applications working
together. Built in public. Owned by whoever runs it.
### 2) Cloud When Needed
Start completely locally and own your AI models. Add cloud capabilities only when you choose to.
## Why This Matters: The Status Quo
### 3) Solve Problems, Not Settings
We help users get to answers quickly and keep configuration optional. Power users can dig deeper, but it's never required.
Every other AI company wants to be your AI provider. We want you to own your AI.
## Available on Every Device
- **OpenAI/Anthropic**: Their models, their servers, their rules
- **Open Source Models**: Powerful but fragmented - no cohesive experience
- **Jan**: Complete ecosystem you can own, modify, and deploy however you want
<CardGrid>
<Card title="Jan Desktop" icon="laptop">
**Available Now**
## Watch Us Build
Your personal AI workstation that helps with your use cases and powers other devices. Run models locally right away or bring an API key to connect to your favorite cloud-based models.
### Live Model Training
We train our models in public. Check the [models page](./models/jan-v1) to see:
- Real-time training progress
- Failed runs and what went wrong
- Models in testing before release
**[Learn more &rarr;](./platforms/desktop)**
</Card>
<Card title="Jan Web" icon="up-arrow">
**Beta Launch Soon**
No black box. No "trust us, it's good." Watch the entire process from dataset to deployment.
A web-based version of Jan with no setup required - the same experience, defaulting to cloud mode, for mobile and desktop users.
### Help Evaluate Our Models
Every model needs real-world testing. Join our open evaluation platform where you can:
- Compare model outputs side-by-side
- Test specific capabilities you care about
- Vote on which responses actually help
- Suggest improvements based on your use cases
**[Learn more &rarr;](./platforms/jan-ai)**
</Card>
<Card title="Jan Mobile" icon="phone">
**Coming Q4 2025**
Think LMArena, but you can see all the data, run your own evals, and directly influence what we train next.
Connect to your Desktop/Server, or run local models like Jan Nano. The same experience, everywhere.
[Test/evaluate our models here](link)
**[Learn more &rarr;](./platforms/mobile)**
</Card>
<Card title="Jan Server" icon="bars">
**Coming Q3 2025**
## Get Involved
A self-hosted solution for teams and enterprises, or connect to Jan via API.
We build in public. Everything from our model training to our product roadmap is open.
**[Learn more &rarr;](./platforms/server)**
</Card>
</CardGrid>
## What Makes Jan Different
| Feature | Other AI Assistants | Jan |
| :--- | :--- | :--- |
| **Models** | Wrapper around Claude/GPT | Our own models + You can own them |
| **Privacy** | Your data on their servers | Your data stays yours |
| **Deployment** | Cloud only | Local, self-hosted, or cloud |
| **Cost** | Subscription forever | Free locally, pay for cloud |
## Development Timeline
Jan is actively developed with regular releases. Our development follows these key milestones:
### Current Focus
- **Jan Desktop**: Continuous improvements and model support
- **Jan Web**: Beta launch preparation
- **Model Development**: Jan Nano optimization and v1 launch
### Next 6 Months
- Jan Web public beta
- Mobile app development
- Server deployment tools
### Future Vision
- Complete full-stack AI solution
- Advanced tool integration
- Enterprise features
<Aside type="tip">
We're building AI that respects your choices. Run it locally and power other apps, connect to the cloud for more power, or self-host to get both.
</Aside>
- [GitHub](link) - Contribute code
- [Handbook](link) - See how we train models
- [Discord](link) - Join the discussion

@ -1,29 +1,51 @@
---
title: Jan V1
description: Our upcoming family of foundational models, built to compete with the best.
title: Jan Models
description: Specialized AI models trained in public for real tasks.
sidebar:
order: 1
banner:
content: 'In Development: Jan V1 models are currently being trained and are not yet available.'
content: 'Watch live training progress below. First models releasing Q1 2025.'
---
import { Aside } from '@astrojs/starlight/components';
## Our Foundational Model Family
## Not Just Another Model Family
Jan V1 is our in-house family of models, still in training, designed to compete directly with leading models. We're building powerful, general-purpose models from the ground up to solve real-world problems with a focus on efficiency and privacy.
Jan Models aren't general-purpose chatbots. Each model is trained for specific tasks that matter in daily work: search, analysis, creative writing, coding, research. They work together, each handling what it does best.
### Planned Model Lineup
### Current Training Status
| Model | Target Size | Intended Use Case | Availability |
|:------------|:------------|:-----------------------------|:--------------|
| Jan V1-7B | 4-8GB | Fast, efficient daily tasks | Coming Soon |
| Jan V1-13B | 8-16GB | Balanced power and performance | Coming Soon |
| Jan V1-70B | 40-64GB | Deep analysis, professional work | Coming Soon |
| Jan V1-2350B | 100GB+ | Frontier research, complex tasks | Planned 2026 |
| Model | Specialization | Size | Training Progress | Status |
|:------|:--------------|:-----|:------------------|:-------|
| Jan-Search | Web search + synthesis | 7B | ████████░░ 82% | Testing phase |
| Jan-Write | Creative + technical writing | 13B | ████░░░░░░ 41% | Active training |
| Jan-Analyze | Data analysis + reasoning | 13B | ██░░░░░░░░ 23% | Dataset prep |
| Jan-Code | Code generation + debugging | 7B | ░░░░░░░░░░ 0% | Starting Jan 2025 |
### What to Expect
- **Competitive Performance**: Aiming for results on par with leading closed-source models.
- **Optimized for Local Use**: Efficient quantized versions for running on your own hardware.
- **Privacy-Centric**: Trainable and runnable in your own environment, ensuring your data stays yours.
- **Seamless Integration**: Designed to work perfectly within the Jan ecosystem.
- **Fine-tuning support**: Easy to adapt to specific tasks or domains.
<Aside type="note">
Training runs live at [train.jan.ai](). See datasets, metrics, and even failed runs.
</Aside>
### Why Specialized Models?
One model can't excel at everything. GPT-4o, o3, o4, and Claude 4 Sonnet use the same weights to write poetry
as they do to solve math, which can be inefficient and expensive.
Our approach:
- **Jan-Search** knows how to query, crawl, and synthesize
- **Jan-Write** understands tone, structure, and creativity
- **Jan-Analyze** excels at reasoning and data interpretation
- Models work together through the Jan orchestration layer
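To make the hand-off idea concrete, here is a minimal sketch of routing a prompt to a specialist model. The `routeTask` helper, the task-to-model mapping, and the lowercase model IDs (`jan-search`, `jan-write`, `jan-analyze`) are hypothetical illustrations, not a documented Jan API; the sketch only assumes the OpenAI-compatible endpoint at `localhost:1337` described in the Desktop docs.

```javascript
import OpenAI from "openai";

// Hypothetical task-to-model routing table; the real orchestration lives inside Jan.
const SPECIALISTS = {
  search: "jan-search",
  write: "jan-write",
  analyze: "jan-analyze",
};

const jan = new OpenAI({
  apiKey: "not-needed",
  baseURL: "http://localhost:1337/v1",
});

// Route a prompt to the specialist best suited for the task.
async function routeTask(task, prompt) {
  const model = SPECIALISTS[task] ?? "jan-nano"; // fall back to a general model
  const completion = await jan.chat.completions.create({
    model,
    messages: [{ role: "user", content: prompt }],
  });
  return completion.choices[0].message.content;
}

// Example: search findings handed off to the writing specialist.
const findings = await routeTask("search", "Latest local-LLM quantization methods");
const summary = await routeTask("write", `Summarize for a blog post:\n${findings}`);
console.log(summary);
```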
### Built for the Ecosystem
These aren't standalone models. They're designed to:
- Run efficiently on local hardware (quantized to 4-8GB)
- Work with Jan Tools (browser automation, file parsing, memory)
- Scale from laptop to server without code changes
- Share context and hand off tasks to each other
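As a rough illustration of the "no code changes" point, the sketch below points the same OpenAI-compatible client at either a laptop or a self-hosted Jan Server purely through an environment variable. The `JAN_BASE_URL` and `JAN_API_KEY` names are assumptions for this example, not a Jan convention.

```javascript
import OpenAI from "openai";

// Same client, same code: only the endpoint changes between laptop and server.
// JAN_BASE_URL and JAN_API_KEY are assumed variable names for this sketch.
const jan = new OpenAI({
  apiKey: process.env.JAN_API_KEY ?? "not-needed", // local mode needs no key
  baseURL: process.env.JAN_BASE_URL ?? "http://localhost:1337/v1",
});

const reply = await jan.chat.completions.create({
  model: "jan-nano",
  messages: [{ role: "user", content: "Summarize today's notes" }],
});
console.log(reply.choices[0].message.content);
```

Running the same script with `JAN_BASE_URL` pointed at a team server would move the workload off the laptop without touching the code.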
### Help Us Improve
Models are only as good as their real-world performance. [Test our models](link) against your actual use cases and vote on what works.
We train on your feedback, not just benchmarks.

@ -1,113 +1,111 @@
---
title: Jan Desktop
description: AI that runs on your computer, not someone else's. Your personal AI workstation.
description: The foundation of your AI ecosystem. Runs on your hardware.
sidebar:
order: 2
---
import { Aside, Card, CardGrid, Tabs, TabItem } from '@astrojs/starlight/components';
This is how Jan started and it has been available since day 1.
Jan Desktop is your local AI workstation. Download it, run your own models, or connect to cloud providers. Your computer, your choice.
Jan Desktop strives to be:
## How It Works
> Your personal AI workstation that helps with your use cases and powers other devices. Run models locally right away or bring an API key to connect to your favorite cloud-based models.
### Default: Local Mode
Open Jan. Start chatting with Jan Nano. No internet, no account, no API keys. Your conversations never leave your machine.
Jan Desktop is where it all starts. Download it, open it, and start chatting. Your AI runs on your computer with zero setup required.
### Optional: Cloud Mode
Need more power? Connect to:
- Your own Jan Server
- jan.ai (coming soon)
- Any OpenAI-compatible API
## Two Modes, Zero Complexity
### Local Mode (Default)
Your conversations stay on your computer. No internet needed. Complete privacy.
### Cloud Mode
Connect to more powerful models when you need them. Your choice of provider.
<Aside type="note">
As of today, when you first open Jan you do have to download a model or connect to a cloud provider, but that is about to change soon.
<Aside type="caution">
**Current limitation**: You need to download a model first (2-4GB). We're embedding Jan Nano in the app to fix this.
</Aside>
## What You Get
## Why Desktop First
Your desktop has the GPU, storage, and memory to run real AI models. Not toy versions. Not demos. The same models that power ChatGPT-scale applications.
More importantly: it becomes the hub for your other devices. Your phone connects to your desktop. Your team connects to your desktop. Everything stays in your control.
## Specifications
<CardGrid>
<Card title="Works Offline" icon="wifi">
Download once, use forever. Internet is optional.
<Card title="Storage" icon="folder">
Everything in `~/jan`. Your data, your models, your configuration. Back it up, move it, delete it - it's just files.
</Card>
<Card title="Your Data Stays Yours" icon="shield">
Everything is stored in `~/.local/share/jan`. No cloud backups unless you want them.
<Card title="API Server" icon="code">
OpenAI-compatible API at `localhost:1337`. Any tool that works with OpenAI works with Jan. No code changes.
</Card>
<Card title="Powers Other Devices" icon="devices">
Your desktop becomes an AI server for your phone and other computers.
<Card title="GPU Support" icon="rocket">
NVIDIA CUDA acceleration out of the box. Automatically detects and uses available GPUs. CPU fallback always works.
</Card>
<Card title="Developer Friendly" icon="code">
Local API at `localhost:1337`. Works with any OpenAI-compatible tool.
</Card>
<Card title="GPU Acceleration" icon="rocket">
Automatically detects and uses NVIDIA GPUs for faster performance.
</Card>
<Card title="Cross-Platform" icon="laptop">
Windows, macOS, and Linux support with native performance.
<Card title="Model Flexibility" icon="puzzle">
Run any GGUF model from Hugging Face. Or our models. Or your fine-tuned models. If it's GGUF, it runs.
</Card>
</CardGrid>
## System Requirements
### Minimum Requirements
- **RAM:** 8GB (models use less than 80% of available memory)
- **Storage:** 10GB+ free space
- **OS:** Windows 10, macOS 12, Ubuntu 20.04 or newer
**Minimum**: 8GB RAM, 10GB storage, any 64-bit OS from the last 5 years
### Recommended
- **RAM:** 16GB+ for larger models
- **Storage:** 20GB+ for multiple models
- **GPU:** NVIDIA GPU with 6GB+ VRAM for acceleration
- **OS:** Latest versions for best performance
**Recommended**: 16GB RAM, NVIDIA GPU, 50GB storage for multiple models
## Getting Started
**Runs on**: Windows 10+, macOS 12+, Ubuntu 20.04+
1. **Download Jan** from [jan.ai/download](https://jan.ai/download)
2. **Open the app** - it loads with everything ready
3. **Start chatting** - that's it
## Installation

```bash
# macOS/Linux
curl -sSL https://jan.ai/install.sh | bash

# Windows
# Download from jan.ai/download
```

## Local Mode Features

- **Select your favorite model:** Jan allows you to download any GGUF model from the Hugging Face Hub.
- **Smart Defaults:** Automatically uses your GPU if available and adjusts to your system's capabilities.
- **Complete Privacy:** No telemetry by default, no account required, and no data leaves your machine.
## Cloud Mode (Optional)
Connect to external AI providers when you need more power:
<Tabs>
<TabItem label="jan.ai">
Our cloud service (coming soon). One click to enable.
</TabItem>
<TabItem label="OpenAI">
Use your OpenAI API key for GPT-4 access.
</TabItem>
<TabItem label="Self-Hosted">
Connect to your own Jan Server.
</TabItem>
</Tabs>
## Desktop as Your AI Hub
Your desktop can power AI across all your devices by automatically becoming a local server.
- **Network Sharing:** Mobile apps connect over WiFi, and other computers can access your models.
- **API:** Available at `localhost:1337` for any OpenAI-compatible application.
- **Offline Access:** No internet required for local network connections.
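As a rough sketch of what network sharing looks like from another device, the snippet below calls the desktop's API over the local network. The `192.168.1.20` address is a placeholder for your desktop's LAN IP, and the example assumes the API server is reachable from other machines on your network.

```javascript
// From a laptop or phone app on the same WiFi network as your desktop.
// Replace 192.168.1.20 with your desktop's actual LAN address.
const response = await fetch("http://192.168.1.20:1337/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "jan-nano",
    messages: [{ role: "user", content: "Hello from my phone" }],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
```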
<Aside type="tip">
First run downloads Jan Nano (4GB). After that, everything works offline.
</Aside>
## For Developers
### Local API Server
```bash
# Always running at localhost:1337
curl http://localhost:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gemma3:3b", "messages": [{"role": "user", "content": "Hello"}]}'
```

### Use Jan as an OpenAI Drop-in

```javascript
import OpenAI from "openai";

// Your existing OpenAI code
const openai = new OpenAI({
  apiKey: "not-needed",
  baseURL: "http://localhost:1337/v1"
});

// Works exactly the same
const completion = await openai.chat.completions.create({
  model: "jan-nano",
  messages: [{ role: "user", content: "Hello" }]
});
```
## The Bottom Line
### Available Endpoints
- `/v1/chat/completions` - Chat with any loaded model
- `/v1/models` - List available models
- `/v1/embeddings` - Generate embeddings
- `/routes` - See all available routes
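For example, a quick way to see which models are currently available is to hit `/v1/models`. The response shape noted in the comment assumes the standard OpenAI-compatible list format.

```javascript
// List models currently available on the local server.
const res = await fetch("http://localhost:1337/v1/models");
const { data } = await res.json(); // OpenAI-style: { object: "list", data: [{ id, ... }] }

for (const model of data) {
  console.log(model.id);
}
```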
Jan Desktop is AI that respects that your computer is YOUR computer, not a terminal to someone else's server. Just software that works for you.
## The Foundation
Jan Desktop isn't just an app. It's the foundation of your personal AI infrastructure:
1. **Today**: Run models locally, connect to cloud APIs
2. **Soon**: Your phone connects to your desktop
3. **Next**: Your desktop serves your team
4. **Future**: Your personal AI that knows you, runs everywhere
No subscriptions. No lock-in. Just software that's yours.
---
**Next steps**:
- [Download Jan Desktop](https://jan.ai/download)
- [Try Jan Models →](../models/jan-v1)
- [Explore Tools →](../tools/search)

@ -27,8 +27,8 @@ import { Content } from '../../content/products/index.mdx';
{
label: 'Models',
items: [
{ label: 'Overview', link: '/products/models/jan-v1/' },
{ label: 'Jan Nano', link: '/products/models/jan-nano/' },
{ label: 'Jan v1', link: '/products/models/jan-v1/' },
]
},
{

@ -26,8 +26,8 @@ import { Content } from '../../../content/products/models/jan-nano.mdx';
{
label: 'Models',
items: [
{ label: 'Overview', link: '/products/models/jan-v1/' },
{ label: 'Jan Nano', link: '/products/models/jan-nano/' },
{ label: 'Jan v1', link: '/products/models/jan-v1/' },
]
},
{

@ -5,7 +5,7 @@ import { Content } from '../../../content/products/models/jan-v1.mdx';
<StarlightPage
frontmatter={{
title: 'Jan v1',
title: 'Models Overview',
description: 'Full-featured AI model for desktop and server deployment',
tableOfContents: false
}}
@ -26,8 +26,8 @@ import { Content } from '../../../content/products/models/jan-v1.mdx';
{
label: 'Models',
items: [
{ label: 'Overview', link: '/products/models/jan-v1/' },
{ label: 'Jan Nano', link: '/products/models/jan-nano/' },
{ label: 'Jan v1', link: '/products/models/jan-v1/' },
]
},
{

@ -26,8 +26,8 @@ import { Content } from '../../../content/products/platforms/desktop.mdx';
{
label: 'Models',
items: [
{ label: 'Overview', link: '/products/models/jan-v1/' },
{ label: 'Jan Nano', link: '/products/models/jan-nano/' },
{ label: 'Jan v1', link: '/products/models/jan-v1/' },
]
},
{

@ -26,8 +26,8 @@ import { Content } from '../../../content/products/platforms/jan-ai.mdx';
{
label: 'Models',
items: [
{ label: 'Overview', link: '/products/models/jan-v1/' },
{ label: 'Jan Nano', link: '/products/models/jan-nano/' },
{ label: 'Jan v1', link: '/products/models/jan-v1/' },
]
},
{

@ -26,8 +26,8 @@ import { Content } from '../../../content/products/platforms/mobile.mdx';
{
label: 'Models',
items: [
{ label: 'Overview', link: '/products/models/jan-v1/' },
{ label: 'Jan Nano', link: '/products/models/jan-nano/' },
{ label: 'Jan v1', link: '/products/models/jan-v1/' },
]
},
{

@ -26,8 +26,8 @@ import { Content } from '../../../content/products/platforms/server.mdx';
{
label: 'Models',
items: [
{ label: 'Overview', link: '/products/models/jan-v1/' },
{ label: 'Jan Nano', link: '/products/models/jan-nano/' },
{ label: 'Jan v1', link: '/products/models/jan-v1/' },
]
},
{

@ -26,8 +26,8 @@ import { Content } from '../../../content/products/tools/browseruse.mdx';
{
label: 'Models',
items: [
{ label: 'Overview', link: '/products/models/jan-v1/' },
{ label: 'Jan Nano', link: '/products/models/jan-nano/' },
{ label: 'Jan v1', link: '/products/models/jan-v1/' },
]
},
{

@ -26,8 +26,8 @@ import { Content } from '../../../content/products/tools/deepresearch.mdx';
{
label: 'Models',
items: [
{ label: 'Overview', link: '/products/models/jan-v1/' },
{ label: 'Jan Nano', link: '/products/models/jan-nano/' },
{ label: 'Jan v1', link: '/products/models/jan-v1/' },
]
},
{

@ -26,8 +26,8 @@ import { Content } from '../../../content/products/tools/search.mdx';
{
label: 'Models',
items: [
{ label: 'Overview', link: '/products/models/jan-v1/' },
{ label: 'Jan Nano', link: '/products/models/jan-nano/' },
{ label: 'Jan v1', link: '/products/models/jan-v1/' },
]
},
{