blog: improve local AI guide for beginners (#4610)

* docs: add DeepSeek R1 local installation guide

- Add comprehensive guide for running DeepSeek R1 locally
- Include step-by-step instructions with screenshots
- Add VRAM requirements and model selection guide
- Include system prompt setup instructions

* docs: add comprehensive guide on running AI models locally

* docs: address PR feedback for DeepSeek R1 and local AI guides

- Improve language and terminology throughout
- Add Linux support information
- Enhance technical explanations
- Update introduction for better flow
- Fix parameters section in run-ai-models-locally.mdx

* docs: improve local AI guides content and linking

- Update titles and introductions for better SEO
- Add opinionated guidance section for beginners
- Link DeepSeek guide with general local AI guide
- Fix typos and improve readability

* fix: remove git conflict markers from deepseek guide frontmatter

* docs: improve local AI guide for beginners

Key improvements:
- Add detailed explanation of GGUF and why it's needed
- Improve content structure and readability
- Add visual guides with SEO-friendly images
- Enhance llama.cpp explanation with GitHub link
- Fix heading hierarchy for better navigation
- Add practical examples and common questions
- Update image paths and captions for better SEO

Technical details:
- Add proper image alt text and captions
- Link to llama.cpp GitHub repository
- Clarify model size requirements
- Simplify hardware requirements section
- Improve heading structure (h1-h5)
- Add step-by-step model installation guide

* docs: add offline ChatGPT alternative guide with Jan

- Add comprehensive guide on using Jan as offline ChatGPT alternative
- Include step-by-step instructions for setup
- Add images for document chat feature
- Optimize content for SEO with relevant keywords

* docs: update description to emphasize computer-local aspect

---------

Co-authored-by: Louis <louis@jan.ai>
Emre Can Kartal 2025-02-10 10:43:07 +08:00 committed by GitHub
parent 2ef3e8b691
commit 60447289bd
13 changed files with 297 additions and 134 deletions

Binary files not shown: 10 new images added (1.4 MiB, 163 KiB, 1.0 MiB, 725 KiB, 1.7 MiB, 586 KiB, 1.2 MiB, 899 KiB, 1.7 MiB, 673 KiB).
View File

@@ -1,25 +1,32 @@
---
title: "Run DeepSeek R1 locally on your device (Beginner-Friendly Guide)"
description: "A straightforward guide to running DeepSeek R1 locally for enhanced privacy, regardless of your background."
description: "A straightforward guide to running DeepSeek R1 locally regardless of your background."
tags: DeepSeek, R1, local AI, Jan, GGUF, Qwen, Llama
categories: guides
date: 2025-01-31
ogImage: assets/deepseek-r1-locally-jan.jpg
twitter:
card: summary_large_image
site: "@jandotai"
title: "Run DeepSeek R1 locally on your device (Beginner-Friendly Guide)"
description: "A straightforward guide to running DeepSeek R1 locally regardless of your background."
image: assets/deepseek-r1-locally-jan.jpg
---
import { Callout } from 'nextra/components'
import CTABlog from '@/components/Blog/CTA'
# Run DeepSeek R1 locally on your device (Beginner-Friendly Guide)
![DeepSeek R1 running locally in Jan AI interface, showing the chat interface and model settings](./_assets/deepseek-r1-locally-jan.jpg)
DeepSeek R1 is one of the best open-source models on the market right now, and you can run it on your own computer!
<Callout type="info">
New to running AI models locally? Check out the [guide on running AI models locally](/post/run-ai-models-locally) first. It covers essential concepts that will help you better understand this DeepSeek R1 guide.
</Callout>
DeepSeek R1 requires data-center-level hardware to run at its full potential, so we'll use a smaller version that works great on regular computers.
Why use an optimized version?
- Efficient performance on standard hardware
- Faster download and initialization
@@ -28,35 +35,46 @@ Why use an optimized version?
## Quick Steps at a Glance
1. Download [Jan](https://jan.ai/)
2. Select a model version
3. Choose settings
4. Set up the prompt template & start using DeepSeek R1
Let's walk through each step with detailed instructions.
## Step 1: Download Jan
[Jan](https://jan.ai/) is an open-source application that enables you to run AI models locally. It's available for Windows, Mac, and Linux. For beginners, Jan is the best choice to get started.
![Jan AI interface, showing the download button](./_assets/download-jan.jpg)
1. Visit [jan.ai](https://jan.ai)
2. Download the appropriate version for your operating system
3. Install the app
## Step 2: Choose Your DeepSeek R1 Version
To run AI models like DeepSeek R1 on your computer, you'll need something called VRAM (Video Memory). Think of VRAM as your computer's special memory for handling complex tasks like gaming or, in our case, running AI models. It's different from regular RAM - VRAM is part of your graphics card (GPU).
<Callout type="info">
Running AI models locally is like running a very sophisticated video game - it needs dedicated memory to process all the AI's "thinking." The more VRAM you have, the larger and more capable AI models you can run.
</Callout>
Let's first check how much VRAM your computer has. Don't worry if it's not much - DeepSeek R1 has versions for all kinds of computers!
Finding your VRAM is simple:
- On Windows: Press `Windows + R`, type `dxdiag`, hit Enter, and look under the "Display" tab
- On Mac: Click the Apple menu, select "About This Mac", then "More Info", and check under "Graphics/Displays"
- On Linux: Open Terminal and type `nvidia-smi` for NVIDIA GPUs, or `lspci -v | grep -i vga` for other graphics cards
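If you prefer a script, here's a tiny optional sketch that asks `nvidia-smi` for your total GPU memory. It assumes an NVIDIA GPU with `nvidia-smi` on your PATH - Jan itself doesn't need any of this:
```python
import subprocess

# Ask nvidia-smi for the total GPU memory (NVIDIA GPUs only).
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=memory.total", "--format=csv,noheader"],
    capture_output=True,
    text=True,
    check=True,
)
print("VRAM:", result.stdout.strip())  # e.g. "8192 MiB"
```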
<Callout>
**No dedicated graphics card?** That's okay! You can still run the smaller versions of DeepSeek R1. They're specially optimized to work on computers with basic graphics capabilities.
</Callout>
Once you know your VRAM, here's what version of DeepSeek R1 will work best for you. If you have:
- 6GB VRAM: Go for the 1.5B version - it's fast and efficient
- 8GB VRAM: You can run the 7B or 8B versions, which offer great capabilities
- 16GB or more VRAM: You have access to the larger models with enhanced features
Available versions and basic requirements for DeepSeek R1 distills:
| Version | Model Link | Required VRAM |
|---------|------------|---------------|
@ -67,22 +85,15 @@ Understanding the versions:
| Qwen 32B | [DeepSeek-R1-Distill-Qwen-32B-GGUF](https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-32B-GGUF) | 16GB+ |
| Llama 70B | [DeepSeek-R1-Distill-Llama-70B-GGUF](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B-GGUF) | 48GB+ |
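To make the table concrete, here's a small illustrative sketch that checks which distills fit your hardware, using the rough VRAM figures above (the numbers are approximate):
```python
# Rough VRAM needs (GB) taken from the table and tips above.
requirements = {"1.5B": 6, "7B": 8, "8B": 8, "Qwen 32B": 16, "Llama 70B": 48}

my_vram_gb = 8  # replace with your GPU's VRAM

for version, needed in requirements.items():
    verdict = "fits" if my_vram_gb >= needed else "too big"
    print(f"{version:>9}: needs ~{needed} GB -> {verdict}")
```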
<Callout type="info">
Recommendations based on your hardware:
- 6GB VRAM: The 1.5B version offers efficient performance
- 8GB VRAM: 7B or 8B versions provide a balanced experience
- 16GB+ VRAM: Access to larger models for enhanced capabilities
</Callout>
To download your chosen model:
1. Launch Jan and navigate to Jan Hub using the sidebar:
![Jan AI interface, showing the model library](./_assets/jan-library-deepseek-r1.jpg)
2. Input the model link in this field:
![Jan AI interface, showing the model link input field](./_assets/jan-hub-deepseek-r1.jpg)
## Step 3: Configure Model Settings
When configuring your model, you'll encounter quantization options:
@@ -106,11 +117,11 @@ Final configuration step:
```
</Callout>
This template ensures proper communication between you and the model.
You're now ready to interact with DeepSeek R1:
![Jan interface, showing DeepSeek R1 running locally](./_assets/jan-runs-deepseek-r1-distills.jpg)
## Need Assistance?

View File

@@ -0,0 +1,118 @@
---
title: "Offline ChatGPT: You can't run ChatGPT offline, do this instead"
description: "Learn how to use AI offline with Jan - a free, open-source alternative to ChatGPT that works 100% offline on your computer."
tags: AI, ChatGPT alternative, offline AI, Jan, local AI, privacy
categories: guides
date: 2025-02-08
ogImage: _assets/offline-chatgpt-alternatives-jan.jpg
twitter:
card: summary_large_image
site: "@jandotai"
title: "Offline ChatGPT: You can't run ChatGPT offline, do this instead"
description: "Want to use ChatGPT offline? Learn how to run AI models locally with Jan - free, open-source, and works without internet."
image: _assets/offline-chatgpt-alternatives-jan.jpg
---
import { Callout } from 'nextra/components'
import CTABlog from '@/components/Blog/CTA'
# Offline ChatGPT: You can't run ChatGPT offline, do this instead
ChatGPT is a cloud-based service that requires internet access. However, it's not the only way to use AI. You can run AI models offline on your device with [Jan](https://jan.ai/). It's completely free, open-source, and gives you 100% offline capability. You can even use AI on a plane!
<Callout>
**Quick Summary:**
- ChatGPT always needs internet - it can't run offline
- Jan lets you run AI models 100% offline on your computer
- It's free and open-source
- Works on Mac, Windows, and Linux
</Callout>
## Jan as an offline ChatGPT alternative
![Use Jan to chat with AI models without internet access](./_assets/offline-chatgpt-alternative-ai-without-internet.jpg)
*Jan lets you use AI offline - no internet connection needed*
Here's how to get started with offline AI in 3 simple steps:
### 1. Download Jan
Go to [jan.ai](https://jan.ai) and download the version for your computer (Mac, Windows, or Linux). It's completely free.
![Download Jan for offline AI use](./_assets/jan.ai.jpg "Get Jan for free and start using AI offline")
### 2. Download an AI model
You'll need an AI model to use AI offline, so download one in Jan. Once it's on your computer, you don't need the internet anymore.
![Choose an AI model that works offline](./_assets/jan-model-selection.jpg "Find the perfect AI model for offline use")
*Select an AI model that matches your needs and computer capabilities*
<Callout>
**Which model should you choose?**
- For most computers: Try Mistral 7B or DeepSeek - they're similar to ChatGPT 3.5
- For older computers: Use smaller 3B models
- For gaming PCs: You can try larger 13B models
Don't worry about choosing - Jan will automatically recommend models that work well on your computer.
</Callout>
### 3. Start using AI offline
![Chat with AI offline using Jan's interface](./_assets/run-ai-locally-with-jan.jpg "Experience ChatGPT-like interactions without internet")
*Use Jan's clean interface to chat with AI - no internet required*
Once downloaded, you can use AI anywhere, anytime:
- Chat like you do with ChatGPT
- Work on documents offline
- Get coding help without internet
- Keep your conversations private
- Use AI even when servers are down
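Jan handles all of this through its interface. If you're curious what offline inference looks like under the hood, here's a rough sketch using the llama-cpp-python bindings; the model path is hypothetical, and once the GGUF file is on disk, no internet connection is involved:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical path - any chat-tuned GGUF model you've downloaded works.
llm = Llama(model_path="./models/qwen2.5-7b-instruct-q4_k_m.gguf", n_ctx=4096)

# This runs entirely on your machine, even with Wi-Fi switched off.
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Draft a short thank-you note."}]
)
print(reply["choices"][0]["message"]["content"])
```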
## How to chat with your docs in Jan?
To chat with your docs in Jan, you need to activate experimental mode.
![Activate experimental mode in Jan's settings](./_assets/chat-with-your-docs-offline-ai.jpg "Enable experimental features to chat with your documents")
*Turn on experimental mode in settings to chat with your docs*
After activating experimental mode, simply add your files and ask questions about them.
![Chat with your documents using Jan](./_assets/chat-with-docs-prompt.jpg "Ask questions about your documents offline")
*Chat with your documents privately - no internet needed*
I did this for you and got a reply from a 7B parameter model. If you'd like to learn what "7B" means and understand other local AI terms, check our [guide on running AI models locally](/post/run-ai-models-locally).
A response from the AI (Qwen2.5 7B Instruct, Q4):
`This document appears to be about the benefits and advantages of running artificial intelligence (AI) models locally on your device rather than using cloud-based or remote AI services. The key points it highlights include data privacy, offline functionality, freedom from paywalls and restrictions, and giving users full control over their AI models. Additionally, the text mentions that local AI is becoming a new trend and provides a link to a guide for beginners who want to learn more about this topic.`
Local AI makes offline AI use possible, and Jan is a great first step to get started.
## Why choose Jan over ChatGPT?
1. **True Offline Use:** Unlike ChatGPT, Jan works without internet
2. **100% Private:** Your data never leaves your computer
3. **Free Forever:** No subscriptions or API costs
4. **No Server Issues:** No more "ChatGPT is at capacity"
5. **Your Choice of Models:** Use newer models as they come out
**"Is it really free? What's the catch?"**
Yes, it's completely free and open source. Jan is built by developers who believe in making AI accessible to everyone.
**"How does it compare to ChatGPT?"**
Modern open-source models like DeepSeek and Mistral are very capable. While they might not match GPT-4, they're perfect for most tasks and getting better every month.
**"Do I need a powerful computer?"**
If your computer is from the last 5 years, it will likely work fine. You need about 8GB of RAM and 10GB of free space for comfortable usage.
**"What about my privacy?"**
Everything stays on your computer. Your conversations, documents, and data never leave your device unless you choose to share them.
Want to learn more about the technical side? Check our detailed [guide on running AI models locally](/post/run-ai-models-locally). It's not required to [use AI offline](https://jan.ai/) but helps understand how it all works.
## Need help?
<Callout type="info">
[Join our Discord community](https://discord.gg/Exe46xPMbK) for support and tips on using Jan as your offline ChatGPT alternative.
</Callout>

View File

@@ -3,8 +3,14 @@ title: "How to run AI models locally as a beginner?"
description: "A straightforward guide to running AI models locally on your computer, regardless of your background."
tags: AI, local models, Jan, GGUF, privacy, local AI
categories: guides
date: 2025-01-31
ogImage: assets/run-ai-locally-with-jan.jpg
twitter:
card: summary_large_image
site: "@jandotai"
title: "How to run AI models locally as a beginner?"
description: "Learn how to run AI models locally on your computer for enhanced privacy and control. Perfect for beginners!"
image: assets/run-ai-locally-with-jan.jpg
---
import { Callout } from 'nextra/components'
@@ -12,120 +18,147 @@ import CTABlog from '@/components/Blog/CTA'
# How to run AI models locally as a beginner?
Most people think running AI models locally is complicated. It's not. Anyone can run powerful AI models like DeepSeek, Llama, and Mistral on their own computer. This guide will show you how, even if you've never written a line of code.
## Quick steps:
### 1. Download [Jan](https://jan.ai)
<Callout type="info">
Benefits of running AI locally:
- **Privacy:** Your data stays on your device
- **No subscription:** Pay once for hardware
- **Speed:** No internet latency
- **Reliability:** Works offline
- **Full control:** Choose which AI models to use
![Jan AI's official website showing the download options](./_assets/jan.ai.jpg "Download Jan from the official website - it's free and open source")
*Download Jan from [jan.ai](https://jan.ai) - it's free and open source.*
### 2. Choose a model that fits your hardware
![Jan's model selection interface showing various AI models](./_assets/jan-model-selection.jpg "Jan helps you pick the right AI model for your computer")
*Jan helps you pick the right AI model for your computer.*
### 3. Start using AI locally
That's all it takes to run your first AI model locally!
![Jan's simple and clean chat interface for local AI](./_assets/run-ai-locally-with-jan.jpg "Jan's easy-to-use chat interface after installation")
*Jan's easy-to-use chat interface after installation.*
Keep reading to learn the key terms of local AI and what you should know before running AI models locally.
## How Local AI Works
Before diving into the details, let's understand how AI runs on your computer:
<Callout>
**Why do we need special tools for local AI?**
Think of AI models like compressed files - they need to be "unpacked" to work on your computer. Tools like llama.cpp do this job:
- They make AI models run efficiently on regular computers
- Convert complex AI math into something your computer understands
- Help run large AI models even with limited resources
</Callout>
![llama.cpp GitHub repository showing its popularity and wide adoption](./_assets/ai-locally-llama.cpp.jpg "llama.cpp is widely used and trusted in the AI community")
*llama.cpp helps millions of people run AI locally on their computers.*
<Callout>
**What is GGUF and why do we need it?**
<Callout type="info">
What you can do with Jan:
- Download Jan
- Find models that work on your computer
Original AI models are huge and complex - like trying to read a book in a language your computer doesn't understand. Here's where GGUF comes in:
1. **Problem it solves:**
- Original AI models are too big (100s of GB)
- They're designed for specialized AI computers
- They use too much memory
2. **How GGUF helps:**
- Converts models to a smaller size
- Makes them work on regular computers
- Keeps the AI smart while using less memory
When browsing models, you'll see "GGUF" in the name (like "DeepSeek-R1-GGUF"). Don't worry about finding them - Jan automatically shows you the right GGUF versions for your computer.
</Callout>
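You never have to touch code to use GGUF in Jan, but it may help to see how simple the format makes things. As a rough sketch (assuming the llama-cpp-python bindings and a downloaded GGUF file - the path here is hypothetical), loading and prompting a model is just a few lines:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# A GGUF file is self-contained: weights, tokenizer, and metadata in one file.
llm = Llama(model_path="./models/mistral-7b-instruct-q4_k_m.gguf")

output = llm("Explain GGUF in one sentence.", max_tokens=64)
print(output["choices"][0]["text"])
```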
Before diving deeper, let's be clear: this guide is opinionated. Instead of overwhelming you with every possible option, we'll focus on what actually works for beginners. You'll learn essential local AI terms, and more importantly, get clear recommendations on what to do. No "it depends" answers here - just straightforward guidance based on real experience.
## Understanding Local AI models
Think of AI models like engines powering applications - some are compact and efficient, while others are more powerful but require more resources. Let's understand two important terms you'll see often: parameters and quantization.
### What's a "Parameter"?
When looking at AI models, you'll see names like "Llama-2-7B" or "Mistral-7B". Here's what that means:
- The "B" means "billion parameters" (like brain cells)
- More parameters = smarter AI but needs a faster computer
- Fewer parameters = simpler AI but works on most computers
<Callout type="info">
- Smaller models (1-7B): Work great on most computers
- Bigger models (13B+): Need more powerful computers but can do more complex tasks
</Callout>
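A rough rule of thumb ties parameters to memory: each parameter takes a couple of bytes as originally trained, and about half a byte once quantized (more on quantization under "Model Versions" below). This back-of-the-envelope sketch uses approximate numbers, just for intuition:
```python
params = 7e9           # a "7B" model has about 7 billion parameters

bytes_original = 2     # ~16 bits per weight, as originally trained
bytes_quantized = 0.5  # ~4 bits per weight after Q4 quantization

print(f"Original:  ~{params * bytes_original / 1e9:.0f} GB")   # ~14 GB
print(f"Quantized: ~{params * bytes_quantized / 1e9:.1f} GB")  # ~3.5 GB
```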
![Jan Hub interface showing model sizes and types](./_assets/jan-hub-for-ai-models.jpg "Jan Hub makes it easy to understand different model sizes and versions")
*Jan Hub makes it easy to understand different model sizes and versions*
**Good news:** Jan helps you pick the right model size for your computer automatically! You don't need to worry about the technical details - just choose a model that matches what Jan recommends for your computer.
## What You Can Do with Local AI
<Callout type="info">
Running AI locally gives you:
- Complete privacy - your data stays on your computer
- No internet needed - works offline
- Full control - you decide what models to use
- Free to use - no subscription fees
</Callout>
## Hardware Requirements
Before downloading an AI model, consider checking if your computer can run it. Here's a basic guide:
**The basics your computer needs:**
- A decent processor (CPU) - most computers from the last 5 years will work fine
- At least 8GB of RAM - 16GB or more is better
- Some free storage space - at least 5GB recommended
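If you'd like to check these basics quickly, here's an optional Python sketch (it assumes the third-party `psutil` package; Jan doesn't require any of this):
```python
import shutil
import psutil  # pip install psutil

ram_gb = psutil.virtual_memory().total / 1e9
free_disk_gb = shutil.disk_usage(".").free / 1e9

print(f"RAM: {ram_gb:.0f} GB (8+ recommended)")
print(f"Free disk: {free_disk_gb:.0f} GB (5+ recommended)")
```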
### What Models Can Your Computer Run?
| Your computer | Models you can run | What they can do |
|---|---|---|
| Regular Laptop | 3B-7B models | Good for chatting and writing. Like having a helpful assistant |
| Gaming Laptop | 7B-13B models | More capable. Better at complex tasks like coding and analysis |
| Powerful Desktop | 13B+ models | Better performance. Great for professional work and advanced tasks |
<Callout type="info">
**Not Sure About Your Computer?**
Start with a smaller model (3B-7B) - Jan will help you choose one that works well on your system.
</Callout>
## Getting Started with Models
### Model Versions
When browsing models in Jan, you'll see terms like "Q4", "Q6", or "Q8". Here's what that means in simple terms:
<Callout>
These are different versions of the same AI model, just packaged differently to work better on different computers:
- **Q4 versions**: Like a "lite" version of an app - runs fast and works on most computers
- **Q6 versions**: The "standard" version - good balance of speed and quality
- **Q8 versions**: The "premium" version - highest quality but needs a more powerful computer
</Callout>
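For intuition, here's the approximate math behind those versions for a 7B model. The effective bits per weight are rough averages, so treat the results as ballpark figures:
```python
params = 7e9  # a 7B model

# Approximate effective bits per weight for common GGUF quantization levels.
bits_per_weight = {"Q4": 4.5, "Q6": 6.5, "Q8": 8.5}

for name, bits in bits_per_weight.items():
    size_gb = params * bits / 8 / 1e9
    print(f"{name}: ~{size_gb:.1f} GB")  # Q4 ~3.9, Q6 ~5.7, Q8 ~7.4
```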
**Pro tip**: Start with Q4 versions - they work great for most people and run smoothly on regular computers!
## Setting up your local AI
### Getting Models from Hugging Face
You'll often see links to "Hugging Face" when downloading AI models. Think of Hugging Face as the "GitHub for AI" - it's where the AI community shares their models. This sounds technical, but Jan makes it super easy to use:
1. Jan has a built-in connection to Hugging Face
2. You can download models right from Jan's interface
3. No need to visit the Hugging Face website unless you want to explore more options
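If you ever want to fetch a GGUF file yourself (outside Jan), the `huggingface_hub` package can do it in a few lines. This is a sketch with an example repository and filename - swap in whatever model you actually find on Hugging Face:
```python
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Example repo and filename - any GGUF model on Hugging Face works the same way.
path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
    filename="mistral-7b-instruct-v0.2.Q4_K_M.gguf",
)
print("Saved to:", path)
```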
<Callout>
**What powers local AI?**
Jan uses [llama.cpp](https://github.com/ggerganov/llama.cpp), an inference engine that makes AI models run efficiently on regular computers. It's like a translator that helps AI models speak your computer's language, making them run faster and use less memory.
</Callout>
### 1. Get Started
Download Jan from [jan.ai](https://jan.ai) - it sets everything up for you.
@@ -134,58 +167,59 @@ Download Jan from [jan.ai](https://jan.ai) - it sets everything up for you.
You can get models two ways:
#### 1. Use Jan Hub (Recommended):
- Click "Download Model" in Jan
- Pick a recommended model
- Choose one that fits your computer
![Downloading a model from Jan Hub](./_assets/jan-model-download.jpg "Jan Hub makes it easy to download AI models")
*Use Jan Hub to download AI models*
#### 2. Use Hugging Face:
<Callout type="warning">
Important: Only GGUF models will work with Jan. Make sure to use models that have "GGUF" in their name.
</Callout>
##### Step 1: Get the model link
Find and copy a GGUF model link from [Hugging Face](https://huggingface.co)
![Finding a GGUF model on Hugging Face](./_assets/hugging-face-jan-model-download.jpg "Find GGUF models on Hugging Face")
*Look for models with "GGUF" in their name*
##### Step 2: Open Jan
Launch Jan and go to the Models tab
![Opening Jan's model section](./_assets/jan-library-deepseek-r1.jpg "Navigate to the Models section in Jan")
*Navigate to the Models section in Jan*
##### Step 3: Add the model
Paste your Hugging Face link into Jan
![Adding a model from Hugging Face](./_assets/jan-hub-deepseek-r1.jpg "Paste your GGUF model link here")
*Paste your GGUF model link here*
##### Step 4: Download
Select your quantization and start the download
![Downloading the model](./_assets/jan-hf-model-download.jpg "Choose your preferred model size and download")
*Choose your preferred model size and download*
### Common Questions
<Callout type="info">
**"My computer doesn't have a graphics card - can I still use AI?"**
Yes! It will run slower but still work. Start with 7B models.
**"Which model should I start with?"**
Try a 7B model first - it's the best balance of smart and fast.
**"Will it slow down my computer?"**
Only while you're using the AI. Close other big programs for better speed.
</Callout>
## Need help?
<Callout type="info">
[Join our Discord community](https://discord.gg/Exe46xPMbK) for support.
</Callout>