Added new model provider and updated main repo readme

This commit is contained in:
Ramon Perez 2025-08-06 13:14:28 +10:00 committed by GitHub
commit 1739958664
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
50 changed files with 1354 additions and 94 deletions

View File

@@ -1,6 +1,6 @@
# Jan - Local AI Assistant
![Jan AI](docs/src/pages/docs/_assets/jan-app.png)
<p align="center">
<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
@@ -12,62 +12,50 @@
</p>
<p align="center">
<a href="https://jan.ai/docs/quickstart">Getting Started</a>
- <a href="https://jan.ai/docs">Docs</a>
- <a href="https://jan.ai/changelog">Changelog</a>
- <a href="https://github.com/menloresearch/jan/issues">Bug reports</a>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
Jan is an AI assistant that can run 100% offline on your device. Download and run LLMs with
**full control** and **privacy**.
## Installation
The easiest way to get started is by downloading one of the following versions for your operating system:
<table>
<tr>
<td><b>Platform</b></td>
<td><b>Stable</b></td>
<td><b>Beta</b></td>
<td><b>Nightly</b></td>
</tr>
<tr>
<td><b>Windows</b></td>
<td><a href='https://app.jan.ai/download/latest/win-x64'>jan.exe</a></td>
<td><a href='https://app.jan.ai/download/beta/win-x64'>jan.exe</a></td>
<td><a href='https://app.jan.ai/download/nightly/win-x64'>jan.exe</a></td>
</tr>
<tr>
<td><b>macOS</b></td>
<td><a href='https://app.jan.ai/download/latest/mac-universal'>jan.dmg</a></td>
<td><a href='https://app.jan.ai/download/beta/mac-universal'>jan.dmg</a></td>
<td><a href='https://app.jan.ai/download/nightly/mac-universal'>jan.dmg</a></td>
</tr>
<tr>
<td><b>Linux (deb)</b></td>
<td><a href='https://app.jan.ai/download/latest/linux-amd64-deb'>jan.deb</a></td>
<td><a href='https://app.jan.ai/download/beta/linux-amd64-deb'>jan.deb</a></td>
<td><a href='https://app.jan.ai/download/nightly/linux-amd64-deb'>jan.deb</a></td>
</tr>
<tr>
<td><b>Linux (AppImage)</b></td>
<td><a href='https://app.jan.ai/download/latest/linux-amd64-appimage'>jan.AppImage</a></td>
<td><a href='https://app.jan.ai/download/beta/linux-amd64-appimage'>jan.AppImage</a></td>
<td><a href='https://app.jan.ai/download/nightly/linux-amd64-appimage'>jan.AppImage</a></td>
</tr>
</table>
Download from [jan.ai](https://jan.ai/) or [GitHub Releases](https://github.com/menloresearch/jan/releases).
## Demo
<video width="100%" controls>
<source src="./docs/public/assets/videos/enable-tool-call-for-models.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
## Features
@@ -149,13 +137,12 @@ For detailed compatibility, check our [installation guides](https://jan.ai/docs/
## Troubleshooting
If things go sideways:
1. Check our [troubleshooting docs](https://jan.ai/docs/troubleshooting)
2. Copy your error logs and system specs
3. Ask for help in our [Discord](https://discord.gg/FTk2MvZwJH) `#🆘|jan-help` channel
We keep logs for 24 hours, so don't procrastinate on reporting issues.
## Contributing
@@ -175,15 +162,6 @@ Contributions welcome. See [CONTRIBUTING.md](CONTRIBUTING.md) for the full spiel
- **Jobs**: hr@jan.ai
- **General Discussion**: [Discord](https://discord.gg/FTk2MvZwJH)
## Trust & Safety
**Friendly reminder**: We're not trying to scam you.
- We won't ask for personal information
- Jan is completely free (no premium version exists)
- We don't have a cryptocurrency or ICO
- We're bootstrapped and not seeking your investment (yet)
## License
Apache 2.0 - Because sharing is caring.


View File

@@ -26,5 +26,9 @@
"openrouter": {
"title": "OpenRouter",
"href": "/docs/remote-models/openrouter"
},
"huggingface": {
"title": "Hugging Face",
"href": "/docs/remote-models/huggingface"
}
}

View File

@@ -0,0 +1,152 @@
---
title: Hugging Face
description: Learn how to integrate Hugging Face models with Jan using the Router or Inference Endpoints.
keywords:
[
Hugging Face,
Jan,
Jan AI,
Hugging Face Router,
Hugging Face Inference Endpoints,
Hugging Face API,
Hugging Face Integration,
Hugging Face API Integration
]
---
import { Callout, Steps } from 'nextra/components'
import { Settings, Plus } from 'lucide-react'
# Hugging Face
Jan supports Hugging Face models through two methods: the new **HF Router** (recommended) and **Inference Endpoints**. Both methods require a Hugging Face token and **billing to be set up**.
![HuggingFace Inference Providers](../_assets/hf_providers.png)
## Option 1: HF Router (Recommended)
The HF Router provides access to models from multiple providers (Replicate, Together AI, SambaNova, Fireworks, Cohere, and more) through a single endpoint.
<Steps>
### Step 1: Get Your HF Token
Visit [Hugging Face Settings > Access Tokens](https://huggingface.co/settings/tokens) and create a token. Make sure you have billing set up on your account.
### Step 2: Configure Jan
1. Go to **Settings** > **Model Providers** > **HuggingFace**
2. Enter your HF token
3. Use this URL: `https://router.huggingface.co/v1`
![Jan HF Setup](../_assets/hf_jan_setup.png)
You can find out more about the HF Router [here](https://huggingface.co/docs/inference-providers/index).
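As a quick sanity check of this configuration, here is a minimal sketch of an OpenAI-compatible chat request to the Router built with Python's standard library. The model ID and token are placeholders, and actually sending the request requires a valid token with billing enabled:

```python
import json
import urllib.request

HF_ROUTER_URL = "https://router.huggingface.co/v1/chat/completions"

def build_router_request(token: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-compatible chat request for the HF Router."""
    payload = {
        "model": model,  # placeholder; pick a model ID from your provider list
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        HF_ROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# urllib.request.urlopen(build_router_request(...)) would perform the actual call.
req = build_router_request("hf_your_token", "your-model-id", "Hello!")
```

Jan sends requests of this same shape on your behalf once the token and URL are configured.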
### Step 3: Start Using Models
Jan comes with three HF Router models pre-configured. Select one and start chatting immediately.
</Steps>
<Callout type='info'>
The HF Router automatically routes your requests to the best available provider for each model, giving you access to a wide variety of models without managing individual endpoints.
</Callout>
## Option 2: HF Inference Endpoints
For more control over specific models and deployment configurations, you can use Hugging Face Inference Endpoints.
<Steps>
### Step 1: Navigate to the HuggingFace Model Hub
Visit the [Hugging Face Model Hub](https://huggingface.co/models) (make sure you are logged in) and pick the model you want to use.
![HuggingFace Model Hub](../_assets/hf_hub.png)
### Step 2: Configure HF Inference Endpoint and Deploy
After you have selected the model you want to use, click the **Deploy** button and select a deployment method. We will use HF Inference Endpoints for this example.
![HuggingFace Deployment](../_assets/hf_jan_nano.png)
<br/>
This will take you to the deployment setup page. For this example, we will leave the default settings under the GPU tab as they are and click **Create Endpoint**.
![HuggingFace Deployment](../_assets/hf_jan_nano_2.png)
<br/>
Once your endpoint is ready, test that it works on the **Test your endpoint** tab.
![HuggingFace Deployment](../_assets/hf_jan_nano_3.png)
<br/>
If you get a response, you can click on **Copy** to copy the endpoint URL and API key.
<Callout type='info'>
You will need to be logged into HuggingFace Inference Endpoints and have a credit card on file to deploy a model.
</Callout>
### Step 3: Configure Jan
If you do not have an API key, you can create one under **Settings** > **Access Tokens** [here](https://huggingface.co/settings/tokens). Once you finish, copy the token and add it to Jan alongside your endpoint URL at **Settings** > **Model Providers** > **HuggingFace**.
**3.1 HF Token**
![Get Token](../_assets/hf_jan_nano_5.png)
<br/>
**3.2 HF Endpoint URL**
![Endpoint URL](../_assets/hf_jan_nano_4.png)
<br/>
**3.3 Jan Settings**
![Jan Settings](../_assets/hf_jan_nano_6.png)
<Callout type='warning'>
Make sure to add `/v1/` to the end of your endpoint URL. This is required for OpenAI-compatible API requests.
</Callout>
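The rule above can be made mechanical with a small helper; this is an illustrative sketch (the function name is ours, not part of Jan):

```python
def normalize_endpoint_url(url: str) -> str:
    """Ensure an endpoint URL carries the trailing /v1/ that Jan expects."""
    url = url.rstrip("/")
    if not url.endswith("/v1"):
        url += "/v1"
    return url + "/"

# Example (hypothetical endpoint hostname):
# normalize_endpoint_url("https://my-endpoint.endpoints.huggingface.cloud")
# → "https://my-endpoint.endpoints.huggingface.cloud/v1/"
```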
**3.4 Add Model Details**
![Add Model Details](../_assets/hf_jan_nano_7.png)
### Step 4: Start Using the Model
Now you can start using the model in any chat.
![Start Using the Model](../_assets/hf_jan_nano_8.png)
If you want to learn how to use Jan Nano with MCP, check out [the guide here](../jan-models/jan-nano-32).
<br/>
</Steps>
## Available Hugging Face Models
**Option 1 (HF Router):** Access to models from multiple providers as shown in the providers image above.
**Option 2 (Inference Endpoints):** You can follow the steps above with a wide range of models on Hugging Face and bring them into Jan. Check out other models in the [Hugging Face Model Hub](https://huggingface.co/models).
## Troubleshooting
Common issues and solutions:
**1. Started a chat but the model is not responding**
- Verify your API_KEY/HF_TOKEN is correct and not expired
- Ensure you have billing set up on your HF account
- For Inference Endpoints: Ensure the endpoint is running; endpoints go idle after a period of inactivity so that you are not charged while they are unused
![Model Running](../_assets/hf_jan_nano_9.png)
**2. Connection Problems**
- Check your internet connection
- Verify Hugging Face's system status
- Look for error messages in [Jan's logs](/docs/troubleshooting#how-to-get-error-logs)
**3. Model Unavailable**
- Confirm your API key has access to the model
- Check if you're using the correct model ID
- Verify your Hugging Face account has the necessary permissions
Need more help? Join our [Discord community](https://discord.gg/FTk2MvZwJH) or check the
[Hugging Face documentation](https://huggingface.co/docs/inference-endpoints/index).

View File

@@ -210,3 +210,4 @@ pub fn is_library_available(library: &str) -> bool {
}
}
}

View File

@@ -81,10 +81,22 @@ export default defineConfig({
label: 'MCP Examples',
collapsed: true,
items: [
{
label: 'Browser Control (Browserbase)',
slug: 'jan/mcp-examples/browser/browserbase',
},
{
label: 'Code Sandbox (E2B)',
slug: 'jan/mcp-examples/data-analysis/e2b',
},
{
label: 'Design Creation (Canva)',
slug: 'jan/mcp-examples/design/canva',
},
{
label: 'Deep Research (Octagon)',
slug: 'jan/mcp-examples/deepresearch/octagon',
},
{
label: 'Web Search with Exa',
slug: 'jan/mcp-examples/search/exa',
@@ -107,6 +119,10 @@ export default defineConfig({
label: 'Llama.cpp Server',
slug: 'local-server/llama-cpp',
},
{
label: 'Server Troubleshooting',
slug: 'local-server/troubleshooting',
},
{
label: 'Integrations',
collapsed: true,


View File

@@ -16,8 +16,9 @@ keywords:
parameters,
]
---
import { Aside } from '@astrojs/starlight/components';
# Model Parameters
Model parameters control how your AI thinks and responds. Think of them as the AI's personality settings and performance controls.
@@ -32,7 +33,7 @@ Model parameters control how your AI thinks and responds. Think of them as the A
**For model capabilities:**
- Click the **edit button** next to a model to enable features like vision or tools
## Performance Settings (Gear Icon)
These settings control how the model thinks and performs:
@@ -51,7 +52,7 @@ These settings control how the model thinks and performs:
![Model Parameters](../../../../assets/model-parameters.png)
## Model Capabilities (Edit Button)
These toggle switches enable special features:

View File

@@ -0,0 +1,273 @@
---
title: Browserbase MCP
description: Control browsers with natural language through Browserbase's cloud infrastructure.
keywords:
[
Jan,
MCP,
Model Context Protocol,
Browserbase,
browser automation,
web scraping,
Stagehand,
headless browser,
tool calling,
]
---
import { Aside, Steps } from '@astrojs/starlight/components'
[Browserbase MCP](https://docs.browserbase.com/integrations/mcp/introduction) gives AI models actual browser control through cloud infrastructure. Built on Stagehand, it lets you navigate websites, extract data, and interact with web pages using natural language commands.
The integration provides real browser sessions that AI can control, enabling tasks that go beyond simple web search APIs.
## Available Tools
<Aside type="note">
Browserbase's MCP tools evolve over time. This list reflects current capabilities but may change.
</Aside>
### Multi-Session Tools
- `multi_browserbase_stagehand_session_create`: Create parallel browser sessions
- `multi_browserbase_stagehand_session_list`: Track active sessions
- `multi_browserbase_stagehand_session_close`: Clean up sessions
- `multi_browserbase_stagehand_navigate_session`: Navigate in specific session
### Core Browser Actions
- `browserbase_stagehand_navigate`: Navigate to URLs
- `browserbase_stagehand_act`: Perform actions ("click the login button")
- `browserbase_stagehand_extract`: Extract text content
- `browserbase_stagehand_observe`: Find page elements
- `browserbase_screenshot`: Capture screenshots
### Session Management
- `browserbase_session_create`: Create or reuse sessions
- `browserbase_session_close`: Close active sessions
## Prerequisites
- Jan with MCP enabled
- Browserbase account (includes 60 minutes free usage)
- Model with strong tool calling support
- Node.js installed
<Aside type="caution">
Currently, only the latest Anthropic models handle multiple tools reliably. Other models may struggle with the full tool set.
</Aside>
## Setup
### Enable MCP
1. Go to **Settings** > **MCP Servers**
2. Toggle **Allow All MCP Tool Permission** ON
![MCP settings page with toggle enabled](../../../../../assets/mcp-on.png)
### Get Browserbase Credentials
1. Sign up at [browserbase.com](https://browserbase.com)
- Email verification required
- Phone number authentication
- Thorough security process
2. Access your dashboard and copy:
- **API Key**
- **Project ID**
![Browserbase dashboard showing API key and project ID](../../../../../assets/browserbase.png)
### Configure MCP Server
Click `+` in MCP Servers section:
**NPM Package Configuration:**
- **Server Name**: `browserbase`
- **Command**: `npx`
- **Arguments**: `@browserbasehq/mcp-server-browserbase`
- **Environment Variables**:
- Key: `BROWSERBASE_API_KEY`, Value: `your-api-key`
- Key: `BROWSERBASE_PROJECT_ID`, Value: `your-project-id`
![Jan MCP server configuration with Browserbase settings](../../../../../assets/browserbase3.png)
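If you manage servers by editing Jan's MCP configuration file instead of the UI, the same settings map to the common `mcpServers` JSON shape. This is a sketch assuming the standard MCP config convention; substitute your real credentials:

```json
{
  "mcpServers": {
    "browserbase": {
      "command": "npx",
      "args": ["@browserbasehq/mcp-server-browserbase"],
      "env": {
        "BROWSERBASE_API_KEY": "your-api-key",
        "BROWSERBASE_PROJECT_ID": "your-project-id"
      }
    }
  }
}
```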
### Verify Setup
Check the tools bubble in chat to confirm Browserbase tools are available:
![Chat interface showing available Browserbase tools](../../../../../assets/browserbase2.png)
## Real Usage Example
### Live Information Query
```
Which sports matches are happening right now in Australia (irrespective of the sport)?
```
This simple query demonstrates browser automation in action:
1. **Tool Activation**
- Model creates browser session
- Navigates to sports websites
- Extracts current match data
![Model using browser tools to search for information](../../../../../assets/browserbase5.png)
2. **Results Delivery**
- Real-time match information
- Multiple sports covered
- Current scores and timings
![Final response with Australian sports matches](../../../../../assets/browserbase6.png)
The AI successfully found:
- AFL matches with live scores
- NRL games in progress
- Upcoming Rugby Union fixtures
## Common Issues
### Tool Call Failures
Sometimes tool calls fail due to parsing issues:
![Tool call error showing parsing problem](../../../../../assets/browserbase7.png)
**Solutions:**
- Try rephrasing your prompt
- Disable unnecessary tools
- Use simpler, more direct requests
- Switch to Claude 3.5+ Sonnet if using another model
### Model Limitations
Most models struggle with multiple tools. If experiencing issues:
- Start with single-purpose requests
- Build complexity gradually
- Consider which tools are actually needed
- Expect some trial and error initially
## Usage Limits
**Free Tier:**
- 60 minutes of browser time included
- Sessions auto-terminate after 5 minutes of inactivity
- Can adjust timeout in Browserbase dashboard
- Usage visible in dashboard analytics
**Session Management:**
- Each browser session counts against time
- Close sessions when done to conserve minutes
- Multi-session operations consume time faster
## Practical Use Cases
### Real-Time Data Collection
```
Check current prices for MacBook Pro M4 at major Australian retailers and create a comparison table.
```
### Form Testing
```
Navigate to myservice.gov.au and walk through the Medicare claim process, documenting each required field.
```
### Content Monitoring
```
Visit ABC News Australia and extract the top 5 breaking news headlines with their timestamps.
```
### Multi-Site Analysis
```
Compare flight prices from Sydney to Tokyo next week across Qantas, Jetstar, and Virgin Australia.
```
### Automated Verification
```
Check if our company is listed correctly on Google Maps, Yelp, and Yellow Pages, noting any discrepancies.
```
## Advanced Techniques
### Session Reuse
```
Create a browser session, log into LinkedIn, then search for "AI engineers in Melbourne" and extract the first 10 profiles.
```
### Parallel Operations
```
Create three browser sessions: monitor stock prices on ASX, check crypto on CoinSpot, and track forex on XE simultaneously.
```
### Sequential Workflows
```
Go to seek.com.au, search for "data scientist" jobs in Sydney, apply filters for $150k+, then extract job titles and companies.
```
## Optimization Tips
**Prompt Engineering:**
- Be specific about what to extract
- Name exact websites when possible
- Break complex tasks into steps
- Specify output format clearly
**Tool Selection:**
- Use multi-session only when needed
- Close sessions promptly
- Choose observe before act when possible
- Screenshot sparingly to save time
**Error Recovery:**
- Have fallback prompts ready
- Start simple, add complexity
- Watch for timeout warnings
- Monitor usage in dashboard
## Troubleshooting
**Connection Issues:**
- Verify API key and Project ID
- Check Browserbase service status
- Ensure NPX can download packages
- Restart Jan after configuration
**Browser Failures:**
- Some sites block automation
- Try different navigation paths
- Check if site requires login
- Verify target site is accessible
**Performance Problems:**
- Reduce concurrent sessions
- Simplify extraction requests
- Check remaining time quota
- Consider upgrading plan
**Model Struggles:**
- Too many tools overwhelm most models
- Claude 3.5+ Sonnet most reliable
- Reduce available tools if needed
- Use focused, clear instructions
<Aside type="note">
Browser automation is complex. Expect occasional failures and be prepared to adjust your approach.
</Aside>
## Browserbase vs Browser Use
| Feature | Browserbase | Browser Use |
|---------|-------------|-------------|
| **Infrastructure** | Cloud browsers | Local browser |
| **Setup Complexity** | API key only | Python environment |
| **Performance** | Consistent | System dependent |
| **Cost** | Usage-based | Free (local resources) |
| **Reliability** | High | Variable |
| **Privacy** | Cloud-based | Fully local |
## Next Steps
Browserbase MCP provides genuine browser automation capabilities, not just web search. This enables complex workflows like form filling, multi-site monitoring, and data extraction that would be impossible with traditional APIs.
The cloud infrastructure handles browser complexity while Jan maintains conversational privacy. Just remember: with great browser power comes occasional parsing errors.

View File

@@ -0,0 +1,259 @@
---
title: Octagon Deep Research MCP
description: Finance-focused deep research with AI-powered analysis through Octagon's MCP integration.
keywords:
[
Jan,
MCP,
Model Context Protocol,
Octagon,
deep research,
financial research,
private equity,
market analysis,
technical research,
tool calling,
]
---
import { Aside, Steps } from '@astrojs/starlight/components'
[Octagon Deep Research MCP](https://docs.octagonagents.com/guide/deep-research-mcp.html) provides specialized AI research capabilities with a strong focus on financial markets and business intelligence. Unlike general research tools, Octagon excels at complex financial analysis, market dynamics, and investment research.
The integration delivers comprehensive reports that combine multiple data sources, cross-verification, and actionable insights - particularly useful for understanding market structures, investment strategies, and business models.
## Available Tools
### octagon-agent
Orchestrates comprehensive market intelligence research, particularly strong in:
- Financial market analysis
- Private equity and M&A research
- Corporate structure investigations
- Investment strategy evaluation
### octagon-scraper-agent
Specialized web scraping for public and private market data:
- SEC filings and regulatory documents
- Company financials and metrics
- Market transaction data
- Industry reports and analysis
### octagon-deep-research-agent
Comprehensive research synthesis combining:
- Multi-source data aggregation
- Cross-verification of claims
- Historical trend analysis
- Actionable insights generation
## Prerequisites
- Jan with MCP enabled
- Octagon account (includes 2-week Pro trial)
- Model with tool calling support
- Node.js installed
<Aside type="note">
Octagon offers a 2-week Pro trial upon signup, providing full access to their financial research capabilities.
</Aside>
## Setup
### Enable MCP
1. Go to **Settings** > **MCP Servers**
2. Toggle **Allow All MCP Tool Permission** ON
![MCP settings page with toggle enabled](../../../../../assets/mcp-on.png)
### Get Octagon API Key
1. Sign up at [Octagon signup page](https://app.octagonai.co/signup/?redirectToAfterSignup=https://app.octagonai.co/api-keys)
2. Navigate to the API playground
3. Copy your API key from the dashboard
![Octagon API playground showing API key location](../../../../../assets/octagon2.png)
### Configure MCP Server
Click `+` in MCP Servers section:
**NPM Package Configuration:**
- **Server Name**: `octagon-mcp-server`
- **Command**: `npx`
- **Arguments**: `-y octagon-mcp@latest`
- **Environment Variables**:
- Key: `OCTAGON_API_KEY`, Value: `your-api-key`
![Jan MCP server configuration with Octagon settings](../../../../../assets/octagon3.png)
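The equivalent file-based configuration, sketched under the same `mcpServers` convention used by other MCP servers (substitute your real key):

```json
{
  "mcpServers": {
    "octagon-mcp-server": {
      "command": "npx",
      "args": ["-y", "octagon-mcp@latest"],
      "env": {
        "OCTAGON_API_KEY": "your-api-key"
      }
    }
  }
}
```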
### Verify Setup
Check the tools bubble in chat to confirm Octagon tools are available:
![Chat interface showing available Octagon tools with moonshotai/kimi-k2 model](../../../../../assets/octagon4.png)
## Real-World Example: Private Equity Analysis
Here's an actual deep research query demonstrating Octagon's financial analysis capabilities:
### The Prompt
```
Break apart the private equity paradox: How did an industry that promises to "unlock value" become synonymous with gutting companies, yet still attracts the world's smartest money?
Start with the mechanics—how PE firms use other people's money to buy companies with borrowed cash, then charge fees for the privilege. Trace the evolution from corporate raiders of the 1980s to today's trillion-dollar titans like Blackstone, KKR, and Apollo. Use SEC filings, M&A databases, and bankruptcy records to map their empires.
Dig into specific deals that illustrate the dual nature: companies genuinely transformed versus those stripped and flipped. Compare Toys "R" Us's death to Hilton's resurrection. Examine how PE-owned companies fare during economic downturns—do they really have "patient capital" or do they bleed portfolio companies dry through dividend recaps?
Investigate the fee structure that makes partners billionaires regardless of performance. Calculate the real returns after the 2-and-20 (or worse) fee structures. Why do pension funds and endowments keep pouring money in despite academic studies showing they'd do better in index funds?
Explore the revolving door between PE, government, and central banks. How many Fed officials and Treasury secretaries came from or went to PE? Map the political donations and lobbying expenditures that keep carried interest taxed as capital gains.
Address the human cost through labor statistics and case studies—what happens to employees when PE takes over? But also examine when PE genuinely saves failing companies and preserves jobs.
Write this as if explaining to a skeptical but curious friend over drinks—clear language, no jargon without explanation, and enough dry humor to make the absurdities apparent. Think Michael Lewis meets Matt Levine. Keep it under 3,000 words but pack it with hard data and real examples. The goal: help readers understand why PE is simultaneously capitalism's most sophisticated expression and its most primitive.
```
![Prompt entered in Jan UI](../../../../../assets/octagon5.png)
### Research Process
The AI engages multiple Octagon tools to gather comprehensive data:
![Kimi model using Octagon tools for research](../../../../../assets/octagon6.png)
### The Results
Octagon delivers a detailed analysis covering:
**Part 1: The Mechanics Explained**
![First part of the research report](../../../../../assets/octagon7.png)
**Part 2: Historical Analysis and Case Studies**
![Second part showing PE evolution and specific deals](../../../../../assets/octagon8.png)
**Part 3: Financial Engineering and Human Impact**
![Final section on fee structures and consequences](../../../../../assets/octagon9.png)
The report demonstrates Octagon's ability to:
- Access and analyze SEC filings
- Compare multiple deal outcomes
- Calculate real returns after fees
- Track political connections
- Assess human impact with data
## Finance-Focused Use Cases
### Investment Research
```
Analyze Tesla's vertical integration strategy vs traditional automakers. Include supply chain dependencies, margin analysis, and capital efficiency metrics from the last 5 years.
```
### Market Structure Analysis
```
Map the concentration of market makers in US equities. Who controls order flow, what are their profit margins, and how has this changed since zero-commission trading?
```
### Corporate Governance
```
Investigate executive compensation at the 10 largest US banks post-2008. Compare pay ratios, stock buybacks vs R&D spending, and correlation with shareholder returns.
```
### Private Market Intelligence
```
Track Series B+ funding rounds in AI/ML companies in 2024. Identify valuation trends, investor concentration, and compare to public market multiples.
```
### Regulatory Analysis
```
Examine how Basel III implementation differs across major markets. Which banks gained competitive advantages and why?
```
### M&A Strategy
```
Analyze Microsoft's acquisition strategy under Nadella. Calculate actual vs projected synergies, integration success rates, and impact on market position.
```
## Technical Research Capabilities
While finance-focused, Octagon also handles technical research:
### Framework Evaluation
```
Compare Kubernetes alternatives for edge computing. Consider resource usage, latency, reliability, and operational complexity with real deployment data.
```
### API Economics
```
Analyze the unit economics of major AI API providers. Include pricing history, usage patterns, and margin estimates based on reported compute costs.
```
### Open Source Sustainability
```
Research funding models for critical open source infrastructure. Which projects are at risk, and where are the economic incentives misaligned?
```
## Research Quality
Octagon's reports typically include:
- **Primary Sources**: SEC filings, earnings calls, regulatory documents
- **Quantitative Analysis**: Financial metrics, ratios, trend analysis
- **Comparative Studies**: Peer benchmarking, historical context
- **Narrative Clarity**: Complex topics explained accessibly
- **Actionable Insights**: Not just data, but implications
## Troubleshooting
**Authentication Issues:**
- Verify API key from Octagon dashboard
- Check trial status hasn't expired
- Ensure correct API key format
- Contact Octagon support if needed
**Research Failures:**
- Some queries may exceed scope (try narrowing)
- Financial data may have access restrictions
- Break complex queries into parts
- Allow time for comprehensive research
**Tool Calling Problems:**
- Not all models handle multiple tools well
- Kimi-k2 via OpenRouter works reliably
- Claude 3.5+ Sonnet also recommended
- Enable tool calling in model settings
**Performance Considerations:**
- Deep research takes time (be patient)
- Complex financial analysis may take minutes
- Monitor API usage in dashboard
- Consider query complexity vs urgency
<Aside type="caution">
Octagon specializes in financial and business research. While capable of technical analysis, it's optimized for market intelligence and investment research.
</Aside>
## Pricing After Trial
After the 2-week Pro trial:
- Check current pricing at octagonagents.com
- Usage-based pricing for API access
- Different tiers for research depth
- Educational discounts may be available
## Octagon vs Other Research Tools
| Feature | Octagon | ChatGPT Deep Research | Perplexity |
|---------|---------|----------------------|------------|
| **Finance Focus** | Specialized | General | General |
| **Data Sources** | Financial databases | Web-wide | Web-wide |
| **SEC Integration** | Native | Limited | Limited |
| **Market Data** | Comprehensive | Basic | Basic |
| **Research Depth** | Very Deep | Deep | Moderate |
| **Speed** | Moderate | Slow | Fast |
## Next Steps
Octagon Deep Research MCP excels at complex financial analysis that would typically require a team of analysts. The integration provides institutional-quality research capabilities within Jan's conversational interface.
Whether analyzing market structures, evaluating investments, or understanding business models, Octagon delivers the depth and accuracy that financial professionals expect, while maintaining readability for broader audiences.

@@ -0,0 +1,279 @@
---
title: Canva MCP
description: Create and manage designs through natural language commands with Canva's official MCP server.
keywords:
[
Jan,
MCP,
Model Context Protocol,
Canva,
design automation,
graphic design,
presentations,
templates,
tool calling,
]
---
import { Aside, Steps } from '@astrojs/starlight/components'
[Canva MCP](https://www.canva.com/newsroom/news/deep-research-integration-mcp-server/) gives AI models the ability to create, search, and manage designs directly within Canva. As the first design platform with native MCP integration, it lets you generate presentations, logos, and marketing materials through conversation rather than clicking through design interfaces.
The integration provides comprehensive design capabilities without leaving your chat, though actual editing still happens in Canva's interface.
## Available Tools
<Aside type="note">
Canva's MCP tools may change over time as the integration evolves. This list reflects current capabilities.
</Aside>
### Design Operations
- **generate-design**: Create new designs using AI prompts
- **search-designs**: Search docs, presentations, videos, whiteboards
- **get-design**: Get detailed information about a Canva design
- **get-design-pages**: List pages in multi-page designs
- **get-design-content**: Extract content from designs
- **resize-design**: Adapt designs to different dimensions
- **get-design-resize-status**: Check resize operation status
- **get-design-generation-job**: Track AI generation progress
### Import/Export
- **import-design-from-url**: Import files from URLs as new designs
- **get-design-import-from-url**: Check import status
- **export-design**: Export designs in various formats
- **get-export-formats**: List available export options
- **get-design-export-status**: Track export progress
### Organization
- **create-folder**: Create folders in Canva
- **move-item-to-folder**: Organize designs and assets
- **list-folder-items**: Browse folder contents
### Collaboration
- **comment-on-design**: Add comments to designs
- **list-comments**: View design comments
- **list-replies**: See comment threads
- **reply-to-comment**: Respond to feedback
### Legacy Tools
- **search**: ChatGPT connector (use search-designs instead)
- **fetch**: Content retrieval for ChatGPT
## Prerequisites
- Jan with MCP enabled
- Canva account (free or paid)
- Model with tool calling support
- Node.js installed
- Internet connection for Canva API access
## Setup
### Enable MCP
1. Go to **Settings** > **MCP Servers**
2. Toggle **Allow All MCP Tool Permission** ON
![MCP settings page with toggle enabled](../../../../../assets/mcp-on.png)
### Configure Canva MCP Server
Click `+` in MCP Servers section:
**Configuration:**
- **Server Name**: `Canva`
- **Command**: `npx`
- **Arguments**: `-y mcp-remote@latest https://mcp.canva.com/mcp`
- **Environment Variables**: Leave empty (authentication handled via OAuth)
![Canva MCP server configuration in Jan](../../../../../assets/canva.png)
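If you prefer to edit configuration by hand, the UI fields above map onto a standard MCP server entry. A sketch in the JSON convention most MCP clients use (Jan's exact file layout may differ):

```json
{
  "mcpServers": {
    "Canva": {
      "command": "npx",
      "args": ["-y", "mcp-remote@latest", "https://mcp.canva.com/mcp"]
    }
  }
}
```

`mcp-remote` bridges the local stdio-based MCP client to Canva's hosted server and triggers the OAuth flow in your browser.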
### Authentication Process
When you first use Canva tools:
1. **Browser Opens Automatically**
- Canva authentication page appears in your default browser
- Log in with your Canva account
![Canva authentication page](../../../../../assets/canva2.png)
2. **Team Selection & Permissions**
- Select your team (if you have multiple)
- Review permissions the AI will have
- Click **Allow** to grant access
![Canva team selection and permissions](../../../../../assets/canva3.png)
The permissions include:
- Reading your profile and designs
- Creating new designs
- Managing folders and content
- Accessing team brand templates
- Commenting on designs
### Model Configuration
Use a tool-enabled model:
- **Anthropic Claude 3.5+ Sonnet**
- **OpenAI GPT-4o**
- **Google Gemini Pro**
## Real-World Usage Example
Here's an actual workflow creating a company logo:
### Initial Setup Confirmation
```
Are you able to access my projects?
```
The AI explains available capabilities:
![AI response about available actions](../../../../../assets/canva4.png)
### Design Creation Request
```
Create new designs with AI. Call it "VibeBusiness" and have it be a company focused on superintelligence for the benefit of humanity.
```
The AI initiates design generation:
![AI generating design with tool call visible](../../../../../assets/canva5.png)
### Design Options
The AI creates multiple logo variations:
**First Option:**
![First logo design option](../../../../../assets/canva6.png)
**Selected Design:**
![Selected logo design](../../../../../assets/canva7.png)
### Final Result
After selection, the AI confirms:
![Final response with design ready](../../../../../assets/canva8.png)
Clicking the design link opens it directly in Canva:
![Design opened in Canva browser tab](../../../../../assets/canva9.png)
## Practical Use Cases
### Marketing Campaign Development
```
Create a social media campaign for our new product launch. Generate Instagram posts, Facebook covers, and LinkedIn banners with consistent branding.
```
### Presentation Automation
```
Search for our Q4 sales presentation and create a simplified 5-slide version for the board meeting.
```
### Brand Asset Management
```
List all designs in our "2025 Marketing" folder and export the approved ones as PDFs.
```
### Design Iteration
```
Find our company logo designs from last month and resize them for business cards, letterheads, and email signatures.
```
### Content Extraction
```
Extract all text from our employee handbook presentation so I can update it in our documentation.
```
### Collaborative Review
```
Add a comment to the new website mockup asking the design team about the color scheme choices.
```
## Workflow Tips
### Effective Design Generation
- **Be specific**: "Create a minimalist tech company logo with blue and silver colors"
- **Specify format**: "Generate an Instagram story template for product announcements"
- **Include context**: "Design a professional LinkedIn banner for an AI research company"
- **Request variations**: Ask for multiple options to choose from
### Organization Best Practices
- Create folders before generating multiple designs
- Use descriptive names for easy searching later
- Move designs to appropriate folders immediately
- Export important designs for backup
### Integration Patterns
- Generate designs → Review options → Select preferred → Open in Canva for fine-tuning
- Search existing designs → Extract content → Generate new versions
- Create templates → Resize for multiple platforms → Export all variants
## Limitations and Considerations
**Design Editing**: While the MCP can create and manage designs, actual editing requires opening Canva's interface.
**Project Access**: The integration may not access all historical projects immediately, focusing on designs created or modified after connection.
**Generation Time**: AI design generation takes a few moments. The tool provides job IDs to track progress.
**Team Permissions**: Access depends on your Canva team settings and subscription level.
## Troubleshooting
**Authentication Issues:**
- Clear browser cookies for Canva
- Try logging out and back into Canva
- Ensure pop-ups aren't blocked for OAuth flow
- Check team admin permissions if applicable
**Design Generation Failures:**
- Verify you have creation rights in selected team
- Check Canva subscription limits
- Try simpler design prompts first
- Ensure stable internet connection
**Tool Availability:**
- Some tools require specific Canva plans
- Team features need appropriate permissions
- Verify MCP server is showing as active
- Restart Jan after authentication
**Search Problems:**
- Use search-designs (not the legacy search tool)
- Be specific with design types and names
- Check folder permissions for team content
- Allow time for new designs to index
<Aside type="caution">
Design generation uses Canva's AI capabilities and may be subject to usage limits based on your account type.
</Aside>
## Advanced Workflows
### Batch Operations
```
Create 5 variations of our product announcement banner, then resize all of them for Twitter, LinkedIn, and Facebook.
```
### Content Migration
```
Import all designs from [URLs], organize them into a "2025 Campaign" folder, and add review comments for the team.
```
### Automated Reporting
```
Search for all presentation designs created this month, extract their content, and summarize the key themes.
```
## Next Steps
Canva MCP bridges the gap between conversational AI and visual design. Instead of describing what you want and then manually creating it, you can generate professional designs directly through natural language commands.
The real power emerges when combining multiple tools - searching existing assets, generating new variations, organizing content, and collaborating with teams, all within a single conversation flow.

@@ -17,33 +17,22 @@ keywords:
API key
]
---
import { Aside, Steps } from '@astrojs/starlight/components';
import { Aside, Steps } from '@astrojs/starlight/components'
Jan provides a built-in, OpenAI-compatible API server that runs entirely on your computer,
powered by `llama.cpp`. Use it as a drop-in replacement for cloud APIs to build private,
offline-capable AI applications.
Jan provides a built-in, OpenAI-compatible API server that runs entirely on your computer, powered by `llama.cpp`. Use it as a drop-in replacement for cloud APIs to build private, offline-capable AI applications.
![Jan's Local API Server Settings UI](../../../assets/api-server-ui.png)
## Quick Start
### 1. Start the Server
### Start the Server
1. Navigate to **Settings** > **Local API Server**.
2. Enter a custom **API Key** (e.g., `secret-key-123`). This is required for all requests.
3. Click **Start Server**.
The server is ready when the logs show `JAN API listening at http://12.0.0.1:1337`.
The server is ready when the logs show `JAN API listening at http://127.0.0.1:1337`.
### 2. Load a model with cURL
```sh
curl http://127.0.0.1:1337/v1/models/start -H "Content-Type: application/json" \
-H "Authorization: Bearer secret-key-123" \
-d '{"model": "gemma3:12b"}'
```
### 3. Test with cURL
### Test with cURL
Open a terminal and make a request. Replace `YOUR_MODEL_ID` with the ID of an available model in Jan.
```bash
@@ -95,7 +84,7 @@ A comma-separated list of hostnames allowed to access the server. This provides
## Troubleshooting
<Aside type="note">
<Aside>
Ensure **Verbose Server Logs** are enabled to get detailed error messages in the "Server Logs" view.
</Aside>

@@ -1,5 +1,5 @@
---
title: llama.cpp Server
title: llama.cpp Engine
description: Configure Jan's local AI engine for optimal performance.
keywords:
[
@@ -15,14 +15,19 @@ keywords:
]
---
import { Aside, Tabs, TabItem } from '@astrojs/starlight/components';
import { Aside, Tabs, TabItem } from '@astrojs/starlight/components'
llama.cpp is the engine that runs AI models locally on your computer. It's what makes Jan work without
needing internet or cloud services.
`llama.cpp` is the core **inference engine** Jan uses to run AI models locally on your computer. This section
covers the settings for the engine itself, which control *how* a model processes information on your hardware.
<Aside>
Looking for API server settings (like port, host, CORS)? They have been moved to the dedicated
[**Local API Server**](/docs/local-server/api-server) page.
</Aside>
## Accessing Engine Settings
Find llama.cpp settings at **Settings** (⚙️) > **Local Engine** > **llama.cpp**:
Find llama.cpp settings at **Settings** > **Local Engine** > **llama.cpp**:
![llama.cpp](../../../assets/llama.cpp-01-updated.png)
@@ -49,7 +54,8 @@ You might need to modify these settings if:
Different backends are optimized for different hardware. Pick the one that matches your computer:
<Tabs>
<Tabs items={['Windows', 'Linux', 'macOS']}>
<TabItem label="Windows">
### NVIDIA Graphics Cards (Fastest)
@@ -69,6 +75,7 @@ Different backends are optimized for different hardware. Pick the one that match
- `llama.cpp-vulkan` (AMD, Intel Arc)
</TabItem>
<TabItem label="Linux">
### NVIDIA Graphics Cards
@@ -83,6 +90,7 @@ Different backends are optimized for different hardware. Pick the one that match
- `llama.cpp-vulkan` (AMD, Intel graphics)
</TabItem>
<TabItem label="macOS">
### Apple Silicon (M1/M2/M3/M4)
@@ -96,6 +104,7 @@ Apple Silicon automatically uses GPU acceleration through Metal.
</Aside>
</TabItem>
</Tabs>
## Performance Settings

@@ -14,11 +14,11 @@ keywords:
]
---
import { Tabs, TabItem } from '@astrojs/starlight/components';
import { Steps } from '@astrojs/starlight/components';
import { Aside } from '@astrojs/starlight/components';
import { Aside, Steps } from '@astrojs/starlight/components'
Access Jan's settings by clicking the ⚙️ icon in the bottom left corner.
# Settings
Access Jan's settings by clicking the Settings icon in the bottom left corner.
## Managing AI Models
@@ -163,41 +163,17 @@ Jan stores everything locally on your computer in standard file formats.
This duplicates your data to the new location - your original files stay safe.
</Aside>
## Network Settings
## Local API Server
### HTTPS Proxy Setup
All settings for running Jan as a local, OpenAI-compatible server have been moved to their own dedicated page for clarity.
If you need to connect through a corporate network or want enhanced privacy:
This includes configuration for:
- Server Host and Port
- API Keys
- CORS (Cross-Origin Resource Sharing)
- Verbose Logging
1. **Enable** the proxy toggle
2. Enter your proxy details:
```
http://<username>:<password>@<server>:<port>
```
**Example:**
```
http://user:pass@proxy.company.com:8080
```
![HTTPS Proxy](../../../assets/settings-13.png)
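Before entering proxy details in Jan, it can save time to confirm the proxy actually accepts connections. A sketch using `curl` (the host, port, and credentials here are placeholders; substitute your own):

```shell
# Probe the proxy with a quick HTTPS request (placeholder credentials/host).
curl -sI --max-time 5 -x "http://user:pass@proxy.company.com:8080" https://jan.ai >/dev/null \
  && echo "proxy reachable" || echo "proxy unreachable"
```

If this reports unreachable, fix the proxy details before configuring Jan, since the app will hit the same failure when downloading models.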
<Aside type="note">
Proxy connections may slow down model downloads but don't affect local model performance.
</Aside>
### SSL Certificate Handling
**Ignore SSL Certificates:** Only enable this for:
- Corporate networks with internal certificates
- Development/testing environments
- Trusted network setups
![Ignore SSL Certificates](../../../assets/settings-14.png)
<Aside type="caution">
Only enable if you trust your network environment completely.
</Aside>
[**Go to Local API Server Settings &rarr;**](/docs/local-server/api-server)
## Emergency Options
@@ -218,7 +194,7 @@ Only enable if you trust your network environment completely.
![Reset Confirmation](../../../assets/settings-18.png)
<Aside type="caution">
<Aside type="danger">
**This cannot be undone.** All chat history, downloaded models, and settings will be permanently deleted.
</Aside>

@@ -0,0 +1,323 @@
---
title: Troubleshooting
description: Fix common issues and optimize Jan's performance with this comprehensive guide.
keywords:
[
Jan,
troubleshooting,
error fixes,
performance issues,
GPU problems,
installation issues,
common errors,
local AI,
technical support,
]
---
import { Aside, Steps, Tabs, TabItem } from '@astrojs/starlight/components'
## Getting Help: Error Logs
When Jan isn't working properly, error logs help identify the problem. Here's how to get them:
### Quick Access to Logs
**In Jan Interface:**
1. Look for **System Monitor** in the footer
2. Click **App Log**
![App log](../../../assets/trouble-shooting-02.png)
**Via Terminal:**
```bash
# macOS/Linux
tail -n 50 ~/Library/Application\ Support/Jan/data/logs/app.log
# Windows
type %APPDATA%\Jan\data\logs\app.log
```
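With long sessions the log gets noisy; filtering for error lines first usually narrows the problem. A macOS/Linux sketch (adjust the path for Windows):

```shell
# Show only the most recent error lines from Jan's app log.
# The pipeline exits cleanly even if the log is missing or has no errors.
grep -i "error" ~/Library/Application\ Support/Jan/data/logs/app.log 2>/dev/null | tail -n 20
```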
<Aside type="caution">
Remove any personal information before sharing logs. We only keep logs for 24 hours.
</Aside>
## Common Issues & Solutions
### Jan Won't Start (Broken Installation)
If Jan gets stuck after installation or won't start properly:
<Tabs>
<TabItem label="macOS">
**Clean Reinstall Steps:**
1. **Uninstall Jan** from Applications folder
2. **Delete all Jan data:**
```bash
rm -rf ~/Library/Application\ Support/Jan
```
3. **Kill any background processes** (for versions before 0.4.2):
```bash
ps aux | grep nitro
# Find process IDs and kill them:
kill -9 <PID>
```
4. **Download fresh copy** from [jan.ai](/download)
</TabItem>
<TabItem label="Windows">
**Clean Reinstall Steps:**
1. **Uninstall Jan** via Control Panel
2. **Delete application data:**
```cmd
cd C:\Users\%USERNAME%\AppData\Roaming
rmdir /S Jan
```
3. **Kill background processes** (for versions before 0.4.2):
```cmd
# Find nitro processes
tasklist | findstr "nitro"
# Kill them by PID
taskkill /F /PID <PID>
```
4. **Download fresh copy** from [jan.ai](/download)
</TabItem>
<TabItem label="Linux">
**Clean Reinstall Steps:**
1. **Uninstall Jan:**
```bash
# For Debian/Ubuntu
sudo apt-get remove jan
# For AppImage - just delete the file
```
2. **Delete application data:**
```bash
# Default location
rm -rf ~/.config/Jan
# Or custom location
rm -rf $XDG_CONFIG_HOME/Jan
```
3. **Kill background processes** (for versions before 0.4.2):
```bash
ps aux | grep nitro
kill -9 <PID>
```
4. **Download fresh copy** from [jan.ai](/download)
</TabItem>
</Tabs>
<Aside type="note">
Make sure Jan is completely removed from all user accounts before reinstalling.
</Aside>
### NVIDIA GPU Not Working
If Jan isn't using your NVIDIA graphics card for acceleration:
### Step 1: Check Your Hardware Setup
**Verify GPU Detection:**
*Windows:* Right-click desktop → NVIDIA Control Panel, or check Device Manager → Display Adapters
*Linux:* Run `lspci | grep -i nvidia`
**Install Required Software:**
**NVIDIA Driver (470.63.01 or newer):**
1. Download from [nvidia.com/drivers](https://www.nvidia.com/drivers/)
2. Test: Run `nvidia-smi` in terminal
**CUDA Toolkit (11.7 or newer):**
1. Download from [CUDA Downloads](https://developer.nvidia.com/cuda-downloads)
2. Test: Run `nvcc --version`
**Linux Additional Requirements:**
```bash
# Install required packages
sudo apt update && sudo apt install gcc-11 g++-11 cpp-11
# Set CUDA environment
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
```
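The `export` above only lasts for the current terminal session. To persist it, append the same line to your shell profile (bash sketch; adjust the path if CUDA is installed elsewhere):

```shell
# Persist the CUDA library path for future shells.
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64' >> ~/.bashrc
```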
### Step 2: Enable GPU Acceleration in Jan
1. Open **Settings** > **Hardware**
2. Turn on **GPU Acceleration**
3. Check **System Monitor** (footer) to verify GPU is detected
![Hardware](../../../assets/trouble-shooting-01.png)
### Step 3: Verify Configuration
1. Go to **Settings** > **Advanced Settings** > **Data Folder**
2. Open `settings.json` file
3. Check these settings:
```json
{
"run_mode": "gpu", // Should be "gpu"
"nvidia_driver": {
"exist": true, // Should be true
"version": "531.18"
},
"cuda": {
"exist": true, // Should be true
"version": "12"
},
"gpus": [
{
"id": "0",
"vram": "12282" // Your GPU memory in MB
}
]
}
```
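To spot-check these values from a terminal without opening an editor, a plain `grep` is enough (the path below assumes the default macOS data folder; substitute the location shown under **Advanced Settings**):

```shell
# Print the GPU-related fields from settings.json.
# No output means the file wasn't found at this path.
SETTINGS="$HOME/Library/Application Support/Jan/settings.json"
grep -E '"run_mode"|"exist"|"vram"' "$SETTINGS" 2>/dev/null || true
```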
### Step 4: Restart Jan
Close and restart Jan to apply changes.
#### Tested Working Configurations
**Desktop Systems:**
- Windows 11 + RTX 4070Ti + CUDA 12.2 + Driver 531.18
- Ubuntu 22.04 + RTX 4070Ti + CUDA 12.2 + Driver 545
**Virtual Machines:**
- Ubuntu on Proxmox + GTX 1660Ti + CUDA 12.1 + Driver 535
<Aside type="note">
Desktop installations perform better than virtual machines. VMs need proper GPU passthrough setup.
</Aside>
### "Failed to Fetch" or "Something's Amiss" Errors
When models won't respond or show these errors:
**1. Check System Requirements**
- **RAM:** Use models under 80% of available memory
- 8GB system: Use models under 6GB
- 16GB system: Use models under 13GB
- **Hardware:** Verify your system meets [minimum requirements](/docs/troubleshooting#step-1-verify-hardware-and-system-requirements)
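The 80% rule above is easy to compute for any machine. A quick sketch:

```shell
# Rough model-size budget: ~80% of system RAM.
# Matches the guidance above: 8 GB -> ~6 GB, 16 GB -> ~13 GB.
ram_gb=16
awk -v r="$ram_gb" 'BEGIN { printf "keep models under ~%.0f GB\n", r * 0.8 }'
```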
**2. Adjust Model Settings**
- Open model settings in the chat sidebar
- Lower the **GPU Layers (ngl)** setting
- Start low and increase gradually
**3. Check Port Conflicts**
If logs show "Bind address failed":
```bash
# Check if ports are in use
# macOS/Linux
netstat -an | grep 1337
# Windows
netstat -ano | find "1337"
```
**Default Jan ports:**
- API Server: `1337`
- Documentation: `3001`
**4. Try Factory Reset**
1. **Settings** > **Advanced Settings**
2. Click **Reset** under "Reset To Factory Settings"
<Aside type="caution">
This deletes all chat history, models, and settings.
</Aside>
**5. Clean Reinstall**
If problems persist, do a complete clean installation (see "Jan Won't Start" section above).
### Permission Denied Errors
If you see permission errors during installation:
```bash
# Fix npm permissions (macOS/Linux)
sudo chown -R $(whoami) ~/.npm
# Windows - run as administrator
```
### OpenAI API Issues ("Unexpected Token")
For OpenAI connection problems:
**1. Verify API Key**
- Get valid key from [OpenAI Platform](https://platform.openai.com/)
- Ensure sufficient credits and permissions
**2. Check Regional Access**
- Some regions have API restrictions
- Try using a VPN from a supported region
- Test network connectivity to OpenAI endpoints
### Performance Issues
**Models Running Slowly:**
- Enable GPU acceleration (see NVIDIA section)
- Use appropriate model size for your hardware
- Close other memory-intensive applications
- Check Task Manager/Activity Monitor for resource usage
**High Memory Usage:**
- Switch to smaller model variants
- Reduce context length in model settings
- Enable model offloading in engine settings
**Frequent Crashes:**
- Update graphics drivers
- Check system temperature
- Reduce GPU layers if using GPU acceleration
- Verify adequate power supply (desktop systems)
## Need More Help?
If these solutions don't work:
**1. Gather Information:**
- Copy your error logs (see top of this page)
- Note your system specifications
- Describe what you were trying to do when the problem occurred
**2. Get Community Support:**
- Join our [Discord](https://discord.com/invite/FTk2MvZwJH)
- Post in the **#🆘|jan-help** channel
- Include your logs and system info
**3. Check Resources:**
- [System requirements](/docs/troubleshooting#step-1-verify-hardware-and-system-requirements)
- [Model compatibility guides](/docs/manage-models)
- [Hardware setup guides](/docs/desktop/)
<Aside type="note">
When sharing logs, remove personal information first. We only keep logs for 24 hours, so report issues promptly.
</Aside>