vision update

Ramon Perez 2025-07-30 12:00:31 +10:00
parent 9e43f61366
commit a2aac5a63a
2 changed files with 36 additions and 10 deletions


@@ -71,7 +71,7 @@ We train our models in public. Check the [models page](./models/jan-v1) to see:
- Failed runs and what went wrong
- Models in testing before release
No black box. No "trust us, it's good." Watch the entire process from dataset to deployment.
No "trust us, it's good." Watch the entire process from dataset to deployment.
### Help Evaluate Our Models
Every model needs real-world testing. Join our open evaluation platform where you can:
@@ -84,6 +84,25 @@ Think LMArena, but you can see all the data, run your own evals, and directly in
[Test/evaluate our models here](link)
### Our Models Training Right Now
We don't just talk about open development. Here's what's actually happening:
| Model | Progress | Status | Details |
|:------|:---------|:-------|:--------|
| **Jan-Search-7B** | ████████░░ 82% | Testing | [View run](/) • 2.1M steps • ETA 3 days |
| **Jan-Write-13B** | ████░░░░░░ 41% | Training | [View run](/) • 980K steps • On track |
| **Jan-Analyze-13B** | ████████░░ ~~67%~~ | Failed | [View logs](/) • OOM at step 1.5M • Restarting |
These are our actual models training on our hardware in our Singapore office. Click any run to see:
- Live loss curves
- Training datasets
- Evaluation metrics
- Even our failures
[Watch live training →](/train)
## Get Involved
We build in public. Everything from our model training to our product roadmap is open.


@@ -6,12 +6,14 @@ sidebar:
---
import { Aside, Card, CardGrid, Tabs, TabItem } from '@astrojs/starlight/components';
Jan Desktop is your local AI workstation. Download it, run your own models, or connect to cloud providers. Your computer, your choice.
Jan Desktop is your local AI workstation. Download it, run your own models, or connect to
cloud providers. Your computer, your choice.
## How It Works
### Default: Local Mode
Open Jan. Start chatting with Jan Nano. No internet, no account, no API keys. Your conversations never leave your machine.
Open Jan. Start chatting with Jan Nano. No internet, no account, no API keys. Your conversations
never leave your machine.
### Optional: Cloud Mode
Need more power? Connect to:
@@ -20,23 +22,28 @@ Need more power? Connect to:
- Any OpenAI-compatible API
<Aside type="caution">
**Current limitation**: You need to download a model first (2-4GB). We're embedding Jan Nano in the app to fix this.
**Current limitation**: You need to download a model first (2-4GB). We're embedding Jan Nano
in the app to fix this.
</Aside>
## Why Desktop First
Your desktop has the GPU, storage, and memory to run real AI models. Not toy versions. Not demos. The same models that power ChatGPT-scale applications.
Your desktop has the GPU, storage, and memory to run real AI models. Not toy versions. Not
demos. The same models that power ChatGPT-scale applications.
More importantly: it becomes the hub for your other devices. Your phone connects to your desktop. Your team connects to your desktop. Everything stays in your control.
More importantly: it becomes the hub for your other devices. Your phone connects to your
desktop. Your team connects to your desktop. Everything stays in your control.
## Specifications
<CardGrid>
<Card title="Storage" icon="folder">
Everything in `~/jan`. Your data, your models, your configuration. Back it up, move it, delete it - it's just files.
<Card title="Storage" icon="list-format">
Everything in `~/.local/share/jan`. Your data, your models, your configuration. Back it up, move it, delete
it - it's just files.
</Card>
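Because the data directory is plain files, "back it up" really is just an archive operation. A minimal sketch, assuming the Linux path mentioned above (the location may differ on macOS and Windows):

```python
import shutil
from pathlib import Path

# Archive the whole Jan data directory (models, conversations, configuration).
# Assumes the Linux default location; adjust the path for your platform.
jan_dir = Path.home() / ".local" / "share" / "jan"
archive = shutil.make_archive("jan-backup", "zip", root_dir=jan_dir)
print(f"Backed up {jan_dir} to {archive}")
```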
<Card title="API Server" icon="code">
OpenAI-compatible API at `localhost:1337`. Any tool that works with OpenAI works with Jan. No code changes.
<Card title="API Server" icon="forward-slash">
OpenAI-compatible API at `localhost:1337`. Any tool that works with OpenAI works with Jan. No
code changes.
</Card>
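To make "no code changes" concrete, here is a minimal sketch using the official `openai` Python package pointed at the local server. The `/v1` path, the placeholder API key, and the `jan-nano` model id are assumptions for illustration; substitute whatever model you have installed.

```python
from openai import OpenAI

# Reuse the standard OpenAI client, swapping only the base URL for Jan's local server.
# Assumptions: the server exposes the usual /v1 routes and no real API key is required.
client = OpenAI(base_url="http://localhost:1337/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="jan-nano",  # hypothetical id; check client.models.list() for what's installed
    messages=[{"role": "user", "content": "Say hello from my desktop."}],
)
print(response.choices[0].message.content)
```

The same base-URL swap applies to any OpenAI SDK or tool that lets you override the endpoint.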
<Card title="GPU Support" icon="rocket">
NVIDIA CUDA acceleration out of the box. Automatically detects and uses available GPUs. CPU fallback always works.