docs: add DeepSeek R1 local installation guide
- Add comprehensive guide for running DeepSeek R1 locally
- Include step-by-step instructions with screenshots
- Add VRAM requirements and model selection guide
- Include system prompt setup instructions
New binary assets under docs/src/pages/post/_assets/:
download-jan.jpg (130 KiB), jan-hub-deepseek-r1.jpg (87 KiB), jan-hub-download-deepseek-r1-2.jpg (90 KiB), jan-hub-download-deepseek-r1.jpg (87 KiB), jan-library-deepseek-r1.jpg (37 KiB), jan-runs-deepseek-r1-distills.jpg (62 KiB), jan-system-prompt-deepseek-r1.jpg (55 KiB), run-deepseek-r1-locally-in-jan.jpg (374 KiB)
docs/src/pages/post/deepseek-r1-locally.mdx (new file, +109 lines)
---
title: "Beginner's Guide: Run DeepSeek R1 Locally (Private)"
description: "Quick steps on how to run DeepSeek R1 locally for full privacy. Perfect for beginners—no coding required."
tags: DeepSeek, R1, local AI, Jan, GGUF, Qwen, Llama
categories: guides
date: 2025-01-31
ogImage: assets/run-deepseek-r1-locally-in-jan.jpg
---

import { Callout } from 'nextra/components'
import CTABlog from '@/components/Blog/CTA'

# Beginner’s Guide: Run DeepSeek R1 Locally

![Run DeepSeek R1 locally in Jan](./_assets/run-deepseek-r1-locally-in-jan.jpg)

You can run DeepSeek R1 on your own computer! The full model needs data-center-grade hardware, so we'll use one of its smaller distilled versions, which run great on regular computers.

Why use a smaller version?
- Works smoothly on most modern computers
- Downloads much faster
- Uses less storage space on your computer

## Quick Steps at a Glance

1. Download and install [Jan](https://jan.ai/) (just like any other app!)
2. Pick a version that fits your computer
3. Choose the best settings
4. Set up a quick template & start chatting!

Keep reading for a step-by-step guide with pictures.

## Step 1: Download Jan

[Jan](https://jan.ai/) is a free app that helps you run AI models on your computer. It works on Windows, Mac, and Linux, and it's super easy to use - no coding needed!

![Download Jan](./_assets/download-jan.jpg)

- Get Jan from [jan.ai](https://jan.ai)
- Install it like you would any other app
- That's it! Jan takes care of all the technical stuff for you

## Step 2: Choose Your DeepSeek R1 Version

DeepSeek R1 comes in different sizes. Let's help you pick the right one for your computer.

<Callout type="info">
💡 Not sure how much VRAM your computer has?
- Windows: Press Windows + R, type "dxdiag", press Enter, and click the "Display" tab
- Mac: Click Apple menu > About This Mac > More Info > Graphics/Displays
</Callout>
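
If you have an NVIDIA GPU and are comfortable with a terminal, you can also query VRAM directly. A minimal sketch, assuming the `nvidia-smi` tool that ships with NVIDIA's drivers is on your PATH:

```python
import subprocess

# Ask the NVIDIA driver for each GPU's name and total memory.
# This only works on machines with an NVIDIA GPU and drivers installed.
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.strip().splitlines():
    print(line)  # e.g. "NVIDIA GeForce RTX 3060, 12288 MiB"
```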

Below is a detailed table showing which version you can run based on your computer's VRAM:

| Version | Link to Paste into Jan Hub | Required VRAM for smooth performance |
|---------|----------------------------|--------------------------------------|
| Qwen 1.5B | [https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-1.5B-GGUF](https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-1.5B-GGUF) | 6GB+ VRAM |
| Qwen 7B | [https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-7B-GGUF](https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-7B-GGUF) | 8GB+ VRAM |
| Llama 8B | [https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF) | 8GB+ VRAM |
| Qwen 14B | [https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-14B-GGUF](https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-14B-GGUF) | 16GB+ VRAM |
| Qwen 32B | [https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-32B-GGUF](https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-32B-GGUF) | 24GB+ VRAM |
| Llama 70B | [https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B-GGUF](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B-GGUF) | 48GB+ VRAM |

<Callout type="info">
Quick Guide:
- 6GB VRAM? Start with the 1.5B version - it's fast and works great!
- 8GB VRAM? Try the 7B or 8B versions - good balance of speed and smarts
- 16GB+ VRAM? You can run the 14B version, and with 24GB+ you can step up to 32B for even better results
</Callout>

Ready to download? Here's how:

1. Open Jan and click the button in the left sidebar to open Jan Hub
2. Find the "Add Model" section (shown below)

![Jan Hub with DeepSeek R1](./_assets/jan-hub-deepseek-r1.jpg)

3. Copy the link for your chosen version and paste it here:

![Download DeepSeek R1 in Jan Hub](./_assets/jan-hub-download-deepseek-r1.jpg)

## Step 3: Choose Model Settings

When adding your model, you'll see two quantization options:

<Callout type="info">
- **Q4:** Perfect for most users - fast and works great! ✨ (Recommended)
- **Q8:** Slightly more accurate but needs more powerful hardware
</Callout>
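
Curious where the VRAM numbers in the table come from? A rough rule of thumb: a GGUF file weighs about parameters × bytes per weight, and the whole file plus some working room has to fit in VRAM. Here's a back-of-the-envelope sketch; the 0.56 and 1.06 bytes-per-weight figures are approximations for common Q4/Q8 quantizations, not exact sizes, and the table's recommendations are more conservative to leave room for longer chats:

```python
# Rough GGUF size estimate: parameters x bytes-per-weight, plus headroom
# for the KV cache and runtime buffers. Ballpark figures, not exact sizes.
BYTES_PER_WEIGHT = {"Q4": 0.56, "Q8": 1.06}  # approx. for Q4_K_M / Q8_0

def estimated_vram_gb(params_billion: float, quant: str = "Q4") -> float:
    file_gb = params_billion * BYTES_PER_WEIGHT[quant]
    return file_gb * 1.2  # ~20% headroom for context and buffers

for size in (1.5, 7, 8, 14, 32, 70):
    print(f"{size:>4}B at Q4 -> ~{estimated_vram_gb(size):.1f} GB VRAM")
```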

## Step 4: Set Up & Start Chatting

Almost done! Just one quick setup:

1. Click Model Settings in the sidebar
2. Find the Prompt Template box
3. Copy and paste this exactly:

<Callout type="warning">
```
<|User|>{prompt}<|Assistant|>
```
</Callout>

This helps DeepSeek understand when you're talking and when it should respond.
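
To see what the template does, here's an illustrative Python snippet filling the `{prompt}` placeholder roughly the way Jan does for each message (you don't need to run this; Jan handles it for you):

```python
template = "<|User|>{prompt}<|Assistant|>"
# Your message is substituted for {prompt} before being sent to the model:
print(template.format(prompt="What's the capital of France?"))
# -> <|User|>What's the capital of France?<|Assistant|>
```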

Now you're ready to start chatting!

![Jan runs DeepSeek R1 distills](./_assets/jan-runs-deepseek-r1-distills.jpg)

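Bonus for the code-curious: Jan can also expose your model through an OpenAI-compatible local server (look for Local API Server in Jan's settings; the address and port below are assumptions, so check your version). A minimal sketch using the `openai` Python package, with a hypothetical model id you should replace with the exact id from Jan's model list:

```python
from openai import OpenAI

# Point the client at Jan's local server instead of OpenAI's cloud.
# The base_url and model id below are assumptions - check them in Jan.
client = OpenAI(base_url="http://localhost:1337/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="deepseek-r1-distill-qwen-7b",  # hypothetical id; copy yours from Jan
    messages=[{"role": "user", "content": "Explain VRAM in one sentence."}],
)
print(response.choices[0].message.content)
```
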
## Need help?

<Callout type="info">
Having trouble? We're here to help! [Join our Discord community](https://discord.gg/Exe46xPMbK) for support.
</Callout>