docs: add comprehensive guide on running AI models locally
parent 705116c109
commit 8f620b9146

docs/src/pages/post/run-ai-models-locally.mdx (new file)
---
title: "How to run AI models locally: A Complete Guide for Beginners"
description: "A simple guide to running AI models locally on your computer. It's for beginners - no technical knowledge needed."
tags: AI, local models, Jan, GGUF, privacy, local AI
categories: guides
date: 2024-01-31
ogImage: assets/jan-local-ai.jpg
---

import { Callout } from 'nextra/components'
import CTABlog from '@/components/Blog/CTA'

# How to run AI models locally: A Complete Guide for Beginners

Running AI models locally means installing them on your computer instead of using cloud services. This guide shows you how to run open-source AI models like Llama, Mistral, or DeepSeek on your computer - even if you're not technical.

## Quick steps:

1. Download [Jan](https://jan.ai)
2. Pick a recommended model
3. Start chatting

Read the [Quickstart](https://jan.ai/docs/quickstart) to get started. For more details, keep reading.

![Jan](./_assets/jan-local-ai.jpg)

*Jan runs AI models locally. Download it at [jan.ai](https://jan.ai)*

<Callout type="info">
Benefits of running AI locally:
- **Privacy:** Your data stays on your computer
- **No internet needed:** Use AI even offline
- **No limits:** Chat as much as you want
- **Full control:** Choose which AI models to use
</Callout>

## How to run AI models locally as a beginner

[Jan](https://jan.ai) makes it easy to run AI models. Just download the app and you're ready to go - no complex setup needed.

<Callout type="tip">
What you can do with Jan:
- Download AI models with one click
- Everything is set up automatically
- Find models that work on your computer
</Callout>

## Understanding Local AI models

Think of AI models like apps - some are small and fast, others are bigger but smarter. Let's understand two important terms you'll see often: parameters and quantization.

### What's a "Parameter"?

When looking at AI models, you'll see names like "Llama-2-7B" or "Mistral-7B". Here's what that means:

![Model parameters](./_assets/local-ai-model-parameters.jpg)

*Model sizes: Bigger models = Better results + More resources*

- The "B" means "billion parameters" (like brain cells)
- More parameters = smarter AI, but it needs a more powerful computer
- Fewer parameters = simpler AI, but it works on most computers
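Parameter counts map directly to memory needs. As a rough rule of thumb (a sketch - exact numbers vary by model architecture), a model's unquantized size is its parameter count times two bytes, since each parameter is usually stored as a 16-bit number:

```python
# Rough rule of thumb: at 16-bit precision, each parameter takes 2 bytes.
# Real models vary slightly, so treat these as ballpark figures.
def fp16_size_gb(params_billions: float) -> float:
    bytes_total = params_billions * 1e9 * 2
    return bytes_total / 1e9  # gigabytes

for size in (7, 13, 70):
    print(f"{size}B model = roughly {fp16_size_gb(size):.0f} GB unquantized")
```

That ~14 GB figure for a 7B model is exactly why quantization matters so much on consumer hardware.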
<Callout type="info">
Which size to choose?
- **7B models:** Best for most people - works on most computers
- **13B models:** Smarter but needs a good graphics card
- **70B models:** Very smart but needs a powerful computer
</Callout>
### What's Quantization?

Quantization makes AI models smaller so they can run on your computer. Think of it like compressing a video to save space:

![Quantization](./_assets/open-source-ai-quantization.jpg)

*Quantization: Balance between size and quality*

Simple guide:
- **Q4:** Best choice for most people - runs fast and works well
- **Q6:** Better quality but runs slower
- **Q8:** Best quality but needs a powerful computer

Example: A 7B model with Q4 works well on most computers.
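The Q-numbers above are roughly bits per weight, so you can estimate a quantized model's download size the same way as before. A minimal sketch (real GGUF files store extra per-block scale data, so actual downloads run slightly larger):

```python
# Approximate download size: parameters x bits-per-weight / 8 bytes.
# GGUF quant formats add per-block overhead, so real files are a bit larger.
def quant_size_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for q in (4, 6, 8):
    print(f"7B at Q{q}: about {quant_size_gb(7, q):.1f} GB")
```

This is why a 7B Q4 model at roughly 3.5 GB fits comfortably where the 14 GB unquantized version would not.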

## Hardware Requirements

Before downloading an AI model, let's check if your computer can run it.

<Callout type="info">
The most important thing is VRAM:
- VRAM is your graphics card's memory
- More VRAM = ability to run bigger AI models
- Most computers have between 4GB and 16GB of VRAM
</Callout>

### How to check your VRAM:

**On Windows:**
1. Press Windows + R
2. Type "dxdiag" and press Enter
3. Click the "Display" tab
4. Look for "Display Memory"

**On Mac:**
1. Click the Apple menu
2. Select "About This Mac"
3. Click "More Info"
4. Look under "Graphics/Displays"
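If you have an NVIDIA graphics card, you can also query VRAM from a script. A minimal sketch using the `nvidia-smi` tool (it assumes NVIDIA drivers are installed and simply returns nothing on machines without them):

```python
# Query total VRAM via nvidia-smi (NVIDIA GPUs only).
# Returns None if the tool isn't installed or fails.
import subprocess

def nvidia_vram():
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=memory.total", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()  # e.g. "8192 MiB"
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None

print(nvidia_vram())
```

On Windows and Mac the dxdiag and About This Mac steps above remain the simplest option.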

### Which models can you run?

Here's a simple guide:

| Your VRAM | What You Can Run | What It Can Do |
|-----------|------------------|----------------|
| 4GB | Small models (1-3B) | Basic writing and questions |
| 6GB | Medium models (7B) | Good for most tasks |
| 8GB | Larger models (13B) | Better understanding |
| 16GB | Largest models (32B) | Best performance |
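The table above condenses into a small helper. A sketch that mirrors those rows (the thresholds are this guide's rough tiers, not hard limits):

```python
# Mirrors the VRAM table above; thresholds are rough tiers, not hard limits.
def suggest_model_size(vram_gb: float) -> str:
    if vram_gb >= 16:
        return "32B"
    if vram_gb >= 8:
        return "13B"
    if vram_gb >= 6:
        return "7B"
    return "1-3B"

print(suggest_model_size(6))  # 7B
```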

<Callout type="tip">
Start with smaller models:
- Try 7B models first - they work well for most people
- Test how they run on your computer
- Try larger models only if you need better results
</Callout>

## Setting Up Your Local AI

### 1. Get Started

Download Jan from [jan.ai](https://jan.ai) - it sets everything up for you.

### 2. Get an AI Model

You can get models in two ways:

### Option 1: Use Jan Hub (Recommended)

- Click "Download Model" in Jan
- Pick a recommended model
- Choose one that fits your computer

![Jan model download](./_assets/jan-model-download.jpg)

*Use Jan Hub to download AI models*

### Option 2: Use Hugging Face

<Callout type="warning">
Important: Only GGUF models work with Jan. Make sure to use models that have "GGUF" in their name.
</Callout>

#### Step 1: Get the model link

Find and copy a GGUF model link from [Hugging Face](https://huggingface.co).

![Hugging Face GGUF](./_assets/hugging-face-gguf.jpg)

*Look for models with "GGUF" in their name*
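If you'd rather script this step, the link you copy follows Hugging Face's `.../resolve/<revision>/<file>` URL layout, so it can be split into a repo id and a filename. A small sketch (the example URL is illustrative):

```python
# Split a Hugging Face file link into repo id and filename.
# Assumes the usual /<org>/<repo>/resolve/<revision>/<file> path layout.
from urllib.parse import urlparse

def parse_hf_link(url: str):
    parts = urlparse(url).path.strip("/").split("/")
    i = parts.index("resolve")          # raises ValueError if not a file link
    repo_id = "/".join(parts[:i])
    filename = "/".join(parts[i + 2:])  # skip the revision segment
    return repo_id, filename

repo, name = parse_hf_link(
    "https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF"
    "/resolve/main/mistral-7b-instruct-v0.2.Q4_K_M.gguf"
)
print(repo)  # TheBloke/Mistral-7B-Instruct-v0.2-GGUF
print(name)  # mistral-7b-instruct-v0.2.Q4_K_M.gguf
```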

#### Step 2: Open Jan

Launch Jan and go to the Models tab.

![Jan Models tab](./_assets/jan-hf-model-download.jpg)

*Navigate to the Models section in Jan*

#### Step 3: Add the model

Paste your Hugging Face link into Jan.

![Adding a model link](./_assets/hugging-face-jan-model-download.jpg)

*Paste your GGUF model link here*

#### Step 4: Download

Select your quantization and start the download.

![Model download](./_assets/jan-hub-model-download.jpg)

*Choose your preferred model size and download*

### Common Questions

<Callout type="info">
**"My computer doesn't have a graphics card - can I still use AI?"**
Yes! It will run slower but still work. Start with 7B models.

**"Which model should I start with?"**
Try a 7B model first - it's the best balance of smart and fast.

**"Will it slow down my computer?"**
Only while you're using the AI. Close other big programs for better speed.
</Callout>

## Need help?

<Callout type="info">
Having trouble? We're here to help! [Join our Discord community](https://discord.gg/Exe46xPMbK) for support.
</Callout>