docs: improve local AI guides content and linking (#4600)

* docs: add DeepSeek R1 local installation guide

- Add comprehensive guide for running DeepSeek R1 locally
- Include step-by-step instructions with screenshots
- Add VRAM requirements and model selection guide
- Include system prompt setup instructions

* docs: add comprehensive guide on running AI models locally

* docs: address PR feedback for DeepSeek R1 and local AI guides

- Improve language and terminology throughout
- Add Linux support information
- Enhance technical explanations
- Update introduction for better flow
- Fix parameters section in run-ai-models-locally.mdx

* docs: improve local AI guides content and linking

- Update titles and introductions for better SEO
- Add opinionated guidance section for beginners
- Link DeepSeek guide with general local AI guide
- Fix typos and improve readability

* fix: remove git conflict markers from deepseek guide frontmatter

---------

Co-authored-by: Louis <louis@jan.ai>
Author: Emre Can Kartal, 2025-02-08 00:44:36 +08:00 (committed via GitHub)
Parent: 5ca310384a
Commit: 404c3f096e
GPG Key ID: B5690EEEBB952194
4 changed files with 80 additions and 26 deletions


@@ -0,0 +1,23 @@
---
title: "A few key issues have been solved!"
version: 0.5.13
description: "Jan v0.5.13 is here: A few key issues have been solved."
date: 2025-01-06
ogImage: "/assets/images/changelog/jan-v0-5-13.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title="Jan v0.5.13: A few key issues have been solved!" date="2025-01-06" ogImage="/assets/images/changelog/jan-v0-5-13.gif" />
👋 Jan v0.5.13 is here: A few key issues have been solved!
### Highlights 🎉
- Resolved model loading issues on macOS (Intel)
- Fixed app resetting max_tokens to 8192 on new threads - now uses model settings
- Fixed Vulkan settings visibility for some users
Update Jan or download the latest version: https://jan.ai
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.13).


@@ -0,0 +1,36 @@
---
title: "Run DeepSeek R1 Distills error-free!"
version: 0.5.14
description: "Jan v0.5.14 is out: Run DeepSeek R1 Distills error-free!"
date: 2025-01-23
ogImage: "/assets/images/changelog/jan-v0-5-14-deepseek-r1.gif"
---
import ChangelogHeader from "@/components/Changelog/ChangelogHeader"
<ChangelogHeader title="Jan v0.5.14: Run DeepSeek R1 Distills error-free!" date="2025-01-23" ogImage="/assets/images/changelog/jan-v0-5-14-deepseek-r1.gif" />
👋 Jan v0.5.14 is out: Run DeepSeek R1 Distills error-free!
You can run DeepSeek R1 distills in Jan error-free. Follow our [step-by-step guide to run DeepSeek R1 locally](/post/deepseek-r1-locally) and get this AI model running on your device in minutes.
The llama.cpp version has been updated via Cortex - thanks to Georgi Gerganov (GG) and the llama.cpp community!
- Paste GGUF links into Jan Hub to download
- Already downloaded the model but facing issues? Update Jan.
Models:
**Qwen**
- DeepSeek-R1-Distill-Qwen-1.5B-GGUF: https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-1.5B-GGUF
- DeepSeek-R1-Distill-Qwen-7B-GGUF: https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-7B-GGUF
- DeepSeek-R1-Distill-Qwen-14B-GGUF: https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-14B-GGUF
- DeepSeek-R1-Distill-Qwen-32B-GGUF: https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-32B-GGUF
**Llama**
- DeepSeek-R1-Distill-Llama-8B-GGUF: https://huggingface.co/bartowski/DeepSeek-R1-Distill-Llama-8B-GGUF
- DeepSeek-R1-Distill-Llama-70B-GGUF: https://huggingface.co/bartowski/DeepSeek-R1-Distill-Llama-70B-GGUF
Update your Jan or [download the latest](https://jan.ai/).
For more details, see the [GitHub release notes](https://github.com/janhq/jan/releases/tag/v0.5.14).


@@ -1,5 +1,5 @@
---
title: "Beginner's Guide: Run DeepSeek R1 Locally"
title: "Run DeepSeek R1 locally on your device (Beginner-Friendly Guide)"
description: "A straightforward guide to running DeepSeek R1 locally for enhanced privacy, regardless of your background."
tags: DeepSeek, R1, local AI, Jan, GGUF, Qwen, Llama
categories: guides
@@ -10,11 +10,15 @@ ogImage: assets/run-deepseek-r1-locally-in-jan.jpg
import { Callout } from 'nextra/components'
import CTABlog from '@/components/Blog/CTA'
# Beginner's Guide: Run DeepSeek R1 Locally
# Run DeepSeek R1 locally on your device (Beginner-Friendly Guide)
![image](./_assets/run-deepseek-r1-locally-in-jan.jpg)
DeepSeek R1 brings state-of-the-art AI capabilities to your local machine. With optimized versions available for different hardware configurations, you can run this powerful model directly on your laptop or desktop computer. This guide will show you how to run open-source AI models like DeepSeek, Llama, or Mistral locally on your computer, regardless of your background.
DeepSeek R1 is one of the best open-source models on the market right now, and you can run DeepSeek R1 on your own computer! While the full model needs very powerful hardware, we'll use a smaller version that works great on regular computers.
<Callout type="info">
New to running AI models locally? Check out our [comprehensive guide on running AI models locally](/post/run-ai-models-locally) first. It covers essential concepts that will help you better understand this DeepSeek R1 guide.
</Callout>
Why use an optimized version?
- Efficient performance on standard hardware


@@ -1,5 +1,5 @@
---
title: "How to Run AI Models Locally: A Beginner's Guide"
title: "How to run AI models locally as a beginner?"
description: "A straightforward guide to running AI models locally on your computer, regardless of your background."
tags: AI, local models, Jan, GGUF, privacy, local AI
categories: guides
@@ -10,39 +10,36 @@ ogImage: assets/jan-local-ai.jpg
import { Callout } from 'nextra/components'
import CTABlog from '@/components/Blog/CTA'
# How to Run AI Models Locally: A Beginner's Guide
# How to run AI models locally as a beginner?
DeepSeek R1 is one of the best open-source models in the market right now, and the best part is that we can run different versions of it on our laptop. This guide will show you how to run open-source AI models like DeepSeek, Llama, or Mistral locally on your computer, regardless of your background.
Most people think running AI models locally is complicated. It's not. The real complexity lies in believing you need cloud services to use AI. In 2025, anyone can run powerful AI models like DeepSeek, Llama, and Mistral on their own computer. The advantages are significant: complete privacy, no subscription fees, and full control over your AI interactions. This guide will show you how, even if you've never written a line of code.
## Quick steps:
1. Download [Jan](https://jan.ai)
2. Pick a recommended model
3. Start chatting
Read [Quickstart](https://jan.ai/docs/quickstart) to get started. For more details, keep reading.
![Run AI models locally with Jan](./_assets/jan-local-ai.jpg)
*Jan is for running AI models locally. Download [Jan](https://jan.ai)*
2. Choose a model that fits your hardware
3. Start using AI locally!
<Callout type="info">
Benefits of running AI locally:
- **Privacy:** Your data stays on your computer
- **No internet needed:** Use AI even offline
- **No limits:** Chat as much as you want
- **Privacy:** Your data stays on your device
- **No subscription:** Pay once for hardware
- **Speed:** No internet latency
- **Reliability:** Works offline
- **Full control:** Choose which AI models to use
</Callout>
## How to run AI models locally as a beginner
## How to run AI models locally as a beginner?
[Jan](https://jan.ai) makes it straightforward to run AI models. Download Jan and you're ready to go - the setup process is streamlined and automated.
<Callout type="tip">
<Callout type="info">
What you can do with Jan:
- Download AI models with one click
- Everything is set up automatically
- Download Jan
- Find models that work on your computer
</Callout>
Before diving deeper, let's be clear: this guide is opinionated. Instead of overwhelming you with every possible option, we'll focus on what actually works for beginners. You'll learn essential local AI terms, and more importantly, get clear recommendations on what to do. No "it depends" answers here - just straightforward guidance based on real experience.
## Understanding Local AI models
Think of AI models like engines powering applications - some are compact and efficient, while others are more powerful but require more resources. Let's understand two important terms you'll see often: parameters and quantization.
@@ -51,9 +48,6 @@ Think of AI models like engines powering applications - some are compact and eff
When looking at AI models, you'll see names like "Llama-2-7B" or "Mistral-7B". Here's what that means:
![AI model parameters explained](./_assets/local-ai-model-parameters.jpg)
*Model sizes: Bigger models = Better results + More resources*
- The "B" means "billion parameters" (like brain cells)
- More parameters = smarter AI but needs a faster computer
- Fewer parameters = simpler AI but works on most computers
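The rule of thumb above can be sketched as a quick size estimate. This is a hedged back-of-the-envelope calculation, not Jan's own sizing logic: the 2-bytes-per-parameter figure assumes unquantized FP16 weights, and real memory use is higher once the KV cache and runtime overhead are included.

```python
def estimated_weight_size_gb(params_billion: float, bytes_per_param: float = 2.0) -> float:
    """Rough size of a model's weights in GB.

    Assumes FP16 storage (~2 bytes per parameter), so the math
    simplifies to: billions of parameters x bytes per parameter.
    """
    return params_billion * bytes_per_param

# e.g. a 7B model at FP16 needs roughly 14 GB for its weights alone
print(f"7B @ FP16: ~{estimated_weight_size_gb(7):.0f} GB")
```

This is why the "B" number matters so much: weight size grows linearly with parameter count, so a 70B model needs about ten times the memory of a 7B one.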
@@ -69,9 +63,6 @@ Which size to choose?
Quantization is a technique that optimizes AI models to run efficiently on your computer. Think of it like an engine tuning process that balances performance with resource usage:
![AI model quantization explained](./_assets/open-source-ai-quantization.jpg)
*Quantization: Balance between size and quality*
Simple guide:
- **Q4:** Most efficient choice - good balance of speed and quality
- **Q6:** Enhanced quality with moderate resource usage
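To see why Q4 is the efficient default, here is a small sketch of the same weight-size estimate with quantization factored in. The bits-per-weight values are rough approximations for common GGUF quantization levels (e.g. Q4_K_M is around 4.8 bits per weight), not exact figures for any specific file:

```python
# Approximate bits per weight for common GGUF quantization levels (assumed values)
BITS_PER_WEIGHT = {"Q4": 4.8, "Q6": 6.6, "Q8": 8.5, "FP16": 16.0}

def quantized_size_gb(params_billion: float, quant: str) -> float:
    """Estimated weight size in GB for a model at a given quantization."""
    return params_billion * BITS_PER_WEIGHT[quant] / 8

for q in ("Q4", "Q6", "FP16"):
    print(f"7B @ {q}: ~{quantized_size_gb(7, q):.1f} GB")
```

Running this shows a 7B model shrinking from roughly 14 GB at FP16 to under 5 GB at Q4 - the difference between needing a workstation GPU and fitting comfortably on a typical laptop.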