diff --git a/docs/src/pages/post/_assets/hugging-face-jan-model-download.jpg b/docs/src/pages/post/_assets/hugging-face-jan-model-download.jpg
new file mode 100644
index 000000000..c6cfa8ea5
Binary files /dev/null and b/docs/src/pages/post/_assets/hugging-face-jan-model-download.jpg differ
diff --git a/docs/src/pages/post/_assets/jan-hf-model-download.jpg b/docs/src/pages/post/_assets/jan-hf-model-download.jpg
new file mode 100644
index 000000000..929acf2ff
Binary files /dev/null and b/docs/src/pages/post/_assets/jan-hf-model-download.jpg differ
diff --git a/docs/src/pages/post/_assets/jan-local-ai.jpg b/docs/src/pages/post/_assets/jan-local-ai.jpg
new file mode 100644
index 000000000..2c8c145ff
Binary files /dev/null and b/docs/src/pages/post/_assets/jan-local-ai.jpg differ
diff --git a/docs/src/pages/post/_assets/jan-model-download.jpg b/docs/src/pages/post/_assets/jan-model-download.jpg
new file mode 100644
index 000000000..7e949403d
Binary files /dev/null and b/docs/src/pages/post/_assets/jan-model-download.jpg differ
diff --git a/docs/src/pages/post/_assets/local-ai-model-parameters.jpg b/docs/src/pages/post/_assets/local-ai-model-parameters.jpg
new file mode 100644
index 000000000..1d26fc4a5
Binary files /dev/null and b/docs/src/pages/post/_assets/local-ai-model-parameters.jpg differ
diff --git a/docs/src/pages/post/_assets/open-source-ai-quantization.jpg b/docs/src/pages/post/_assets/open-source-ai-quantization.jpg
new file mode 100644
index 000000000..fe605c3cd
Binary files /dev/null and b/docs/src/pages/post/_assets/open-source-ai-quantization.jpg differ
diff --git a/docs/src/pages/post/run-ai-models-locally.mdx b/docs/src/pages/post/run-ai-models-locally.mdx
new file mode 100644
index 000000000..bdf15fec7
--- /dev/null
+++ b/docs/src/pages/post/run-ai-models-locally.mdx
@@ -0,0 +1,188 @@
+---
+title: "How to Run AI Models Locally: A Complete Guide for Beginners"
+description: "A simple guide to running AI models locally on your computer. It's for beginners - no technical knowledge needed."
+tags: AI, local models, Jan, GGUF, privacy, local AI
+categories: guides
+date: 2024-01-31
+ogImage: _assets/jan-local-ai.jpg
+---
+
+import { Callout } from 'nextra/components'
+import CTABlog from '@/components/Blog/CTA'
+
+# How to Run AI Models Locally: A Complete Guide for Beginners
+
+Running AI models locally means installing them on your computer instead of using cloud services. This guide shows you how to run open-source AI models like Llama, Mistral, or DeepSeek on your computer - even if you're not technical.
+
+## Quick steps:
+1. Download [Jan](https://jan.ai)
+2. Pick a recommended model
+3. Start chatting
+
+Read [Quickstart](https://jan.ai/docs/quickstart) to get started. For more details, keep reading.
+
+
+*Jan runs AI models locally on your computer. Download [Jan](https://jan.ai) to get started.*
+
+
+Benefits of running AI locally:
+- **Privacy:** Your data stays on your computer
+- **No internet needed:** Use AI even offline
+- **No limits:** Chat as much as you want
+- **Full control:** Choose which AI models to use
+
+
+## How to run AI models locally as a beginner
+
+[Jan](https://jan.ai) makes it easy to run AI models. Just download the app and you're ready to go - no complex setup needed.
+
+
+What you can do with Jan:
+- Download AI models with one click
+- Everything is set up automatically
+- Find models that work on your computer
+
+
+## Understanding Local AI models
+
+Think of AI models like apps - some are small and fast, others are bigger but smarter. Let's understand two important terms you'll see often: parameters and quantization.
+
+### What's a "Parameter"?
+
+When looking at AI models, you'll see names like "Llama-2-7B" or "Mistral-7B". Here's what that means:
+
+
+*Model sizes: Bigger models = Better results + More resources*
+
+- The "B" means "billion parameters" (like brain cells)
+- More parameters = smarter AI but needs a faster computer
+- Fewer parameters = simpler AI but works on most computers
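+
+The naming convention above can be sketched in a few lines. This is just an illustration of how to read model names (the regex is a simplification, not part of Jan):
+
+```python
+import re
+
+def param_count(model_name):
+    """Pull the parameter count out of a name like 'Llama-2-7B'."""
+    match = re.search(r"(\d+(?:\.\d+)?)[bB]\b", model_name)
+    return float(match.group(1)) if match else None
+
+print(param_count("Llama-2-7B"))   # 7.0 - a 7-billion-parameter model
+print(param_count("Mistral-7B"))   # 7.0
+```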
+
+
+Which size to choose?
+- **7B models:** Best for most people - they run on most computers
+- **13B models:** Smarter, but they need a good graphics card
+- **70B models:** Very smart, but they need a powerful computer
+
+
+### What's Quantization?
+
+Quantization makes AI models smaller so they can run on your computer. Think of it like compressing a video to save space:
+
+
+*Quantization: Balance between size and quality*
+
+Simple guide:
+- **Q4:** 4-bit - the best choice for most people; runs fast and works well
+- **Q6:** 6-bit - better quality, but runs slower
+- **Q8:** 8-bit - best quality, but needs a powerful computer
+
+Example: A 7B model with Q4 works well on most computers.
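+
+As a rough rule of thumb, a model's download size is its parameter count times the bits per weight (Q4 ≈ 4 bits, Q8 ≈ 8 bits), divided by 8 to get bytes. A quick back-of-the-envelope sketch (real GGUF files run a little larger because of metadata and mixed-precision layers):
+
+```python
+def approx_size_gb(params_billions, bits_per_weight):
+    """Rough download size in gigabytes: parameters x bits per weight / 8."""
+    return params_billions * bits_per_weight / 8
+
+print(approx_size_gb(7, 4))   # 3.5 - a 7B model at Q4 is about 3.5 GB
+print(approx_size_gb(7, 8))   # 7.0 - the same model at Q8 is twice as big
+```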
+
+## Hardware Requirements
+
+Before downloading an AI model, let's check if your computer can run it.
+
+
+The most important thing is VRAM:
+- VRAM is your graphics card's memory
+- More VRAM = ability to run bigger AI models
+- Most computers have between 4GB and 16GB of VRAM
+
+
+### How to check your VRAM:
+
+**On Windows:**
+1. Press Windows + R
+2. Type "dxdiag" and press Enter
+3. Click "Display" tab
+4. Look for "Display Memory"
+
+**On Mac:**
+1. Click Apple menu
+2. Select "About This Mac"
+3. Click "More Info"
+4. Look under "Graphics/Displays"
+
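+If you have an NVIDIA graphics card, you can also check VRAM from a script using the `nvidia-smi` tool that ships with the NVIDIA driver. A small sketch (it returns None on machines without the driver, where the manual steps above still apply):
+
+```python
+import subprocess
+
+def nvidia_vram():
+    """Ask nvidia-smi for total VRAM; None if no NVIDIA driver is present."""
+    try:
+        out = subprocess.run(
+            ["nvidia-smi", "--query-gpu=memory.total", "--format=csv,noheader"],
+            capture_output=True, text=True, check=True,
+        )
+        return out.stdout.strip()  # e.g. "8192 MiB"
+    except (FileNotFoundError, subprocess.CalledProcessError):
+        return None
+
+print(nvidia_vram())
+```
+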
+### Which models can you run?
+
+Here's a simple guide:
+
+| Your VRAM | What You Can Run | What It Can Do |
+|-----------|-----------------|----------------|
+| 4GB | Small models (1-3B) | Basic writing and questions |
+| 6GB | Medium models (7B) | Good for most tasks |
+| 8GB | Larger models (13B) | Better understanding |
+| 16GB | Largest models (32B) | Best performance |
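+
+The table above can be read as a simple lookup. A sketch of that rule of thumb (the thresholds come straight from the table and are guidelines, not hard limits):
+
+```python
+def recommended_model_size(vram_gb):
+    """Map available VRAM (in GB) to the model sizes in the table above."""
+    if vram_gb >= 16:
+        return "32B"
+    if vram_gb >= 8:
+        return "13B"
+    if vram_gb >= 6:
+        return "7B"
+    return "1-3B"
+
+print(recommended_model_size(6))   # 7B
+print(recommended_model_size(4))   # 1-3B
+```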
+
+
+Start with smaller models:
+- Try 7B models first - they work well for most people
+- Test how they run on your computer
+- Try larger models only if you need better results
+
+
+## Setting Up Your Local AI
+
+### 1. Get Started
+Download Jan from [jan.ai](https://jan.ai) - it sets everything up for you.
+
+### 2. Get an AI Model
+
+You can get models in two ways:
+
+#### Option 1: Use Jan Hub (Recommended)
+ - Click "Download Model" in Jan
+ - Pick a recommended model
+ - Choose one that fits your computer
+
+
+*Use Jan Hub to download AI models*
+
+#### Option 2: Use Hugging Face
+
+
+<Callout type="warning">
+Only GGUF models work with Jan. Make sure to use models that have "GGUF" in their name.
+</Callout>
+
+
+#### Step 1: Get the model link
+Find and copy a GGUF model link from [Hugging Face](https://huggingface.co)
+
+
+*Look for models with "GGUF" in their name*
+
+#### Step 2: Open Jan
+Launch Jan and go to the Models tab
+
+
+*Navigate to the Models section in Jan*
+
+#### Step 3: Add the model
+Paste your Hugging Face link into Jan
+
+
+*Paste your GGUF model link here*
+
+#### Step 4: Download
+Select your quantization and start the download
+
+
+*Choose your preferred model size and download*
+
+### Common Questions
+
+
+**"My computer doesn't have a graphics card - can I still use AI?"**
+Yes! It will run slower but still work. Start with 7B models.
+
+**"Which model should I start with?"**
+Try a 7B model first - it's the best balance of smart and fast.
+
+**"Will it slow down my computer?"**
+Only while you're using the AI. Close other big programs for better speed.
+
+
+## Need help?
+
+Having trouble? We're here to help! [Join our Discord community](https://discord.gg/Exe46xPMbK) for support.
+
\ No newline at end of file