

Video Description
💻 Thinking about upgrading your MacBook for AI? I tested DeepSeek R1's open-source models on a $5000 M4 Max (128GB) vs a $1000 M1 Air (16GB) – the results will shock you.
🔍 In this 30-minute tutorial and comparison:
• Step-by-step guide to running AI models locally with LM Studio
• Real-time speed tests: 635% faster on M4 Max?
• Can a budget M1 handle 14B parameter models?
• Surprising failures (and wins) across Qwen/Llama architectures
👇 Timestamped chapters below 👇
✨ Hit subscribe for more AI hardware deep dives!
Chapters:
00:00 DeepSeek R1 Disrupts Silicon Valley
00:48 How to install and run DeepSeek R1 locally: free, private, offline, unlimited (LM Studio)
01:07 How to download and install DeepSeek R1 in LM Studio for Mac
01:50 What DeepSeek R1 models can you run on MacBook Pro M4 Max 128GB and MacBook Air M1 16GB
03:31 How to test a reasoning AI model: The Reasoning Prompt
06:22 The Reasoning Prompt Solution
07:42 Test 01 - M1 - DeepSeek R1 Distill Qwen 1.5B
09:52 Test 02 - M4 - DeepSeek R1 Distill Qwen 1.5B
12:50 Test 03 - M1 - DeepSeek R1 Distill Qwen 7B
14:45 Test 04 - M4 - DeepSeek R1 Distill Qwen 7B
15:48 Test 05 - M1 - DeepSeek R1 Distill Llama 8B
18:05 Test 06 - M4 - DeepSeek R1 Distill Llama 8B
19:42 Test 07 - M1 - DeepSeek R1 Distill Qwen 14B
21:40 Test 08 - M4 - DeepSeek R1 Distill Qwen 14B
22:51 Test 09 - M1 - DeepSeek R1 Distill Qwen 32B
23:51 Test 10 - M4 - DeepSeek R1 Distill Llama 70B
25:39 The Test Results: What We Learned
28:55 Conclusion
Transcript
The AI Revolution: DeepSeek R1 Disrupts Silicon Valley
When NVIDIA loses over half a trillion dollars and there's panic at Meta, you know something big is happening in AI. DeepSeek R1 just proved that open source AI can outperform the giants, but here's what no one is showing you - how it actually runs on your machine.
Today we're testing extremes: a $5,000 M4 Max versus a $1,000 M1 Air. Whether you prefer a beast setup or flying light as air, you deserve to know what's possible.
Getting Started with LM Studio
There are several ways to run DeepSeek R1 on your computer, but I recommend LM Studio because it offers the best mix of power, functionality, and a beautiful interface.
To get started, go to LMStudio.ai and download the app. Once launched, navigate to the Discover tab and type "DeepSeek R1". You'll see all available DeepSeek R1 models ready for free download. Simply click on a model and hit the download button to install it on your computer.
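If you'd rather script your tests than use the chat window, LM Studio can also serve downloaded models through an OpenAI-compatible local server (started from inside the app; port 1234 by default). Here's a minimal sketch; the model identifier below is my assumption, so check the names LM Studio lists for your own downloads.

```python
# Minimal sketch: query a DeepSeek R1 model served by LM Studio's
# OpenAI-compatible local server (default http://localhost:1234).
# Assumes the server is running and a model is loaded; the model id
# below is hypothetical - check your own list at GET /v1/models.
import requests

response = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "deepseek-r1-distill-qwen-7b",  # hypothetical id
        "messages": [
            {"role": "user", "content": "You have three boxes, each incorrectly labeled..."}
        ],
        "temperature": 0.6,
    },
    timeout=600,  # reasoning models can think for minutes on small hardware
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```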
Understanding Model Limitations
Here's what you need to know about the models. With an M4 Max, you can run all DeepSeek R1 model sizes. However, with the M1 16GB, you're limited to 14B and lower models. This gives the M4 Max 128GB a clear advantage if you're an AI enthusiast who wants to run the biggest models and play with the latest AI tools.
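Why the 16GB ceiling? A 4-bit quantized model needs roughly half a byte per parameter, plus headroom for the context cache and the rest of macOS. Here's a back-of-the-envelope sketch; the 1.2x overhead factor and 75% usable-RAM figure are my assumptions, not measured values:

```python
# Back-of-the-envelope RAM check for quantized models.
# Rule of thumb: bytes ~= parameters * (quant_bits / 8), plus overhead
# for the KV cache and runtime. The 1.2x overhead is an assumption.
def fits_in_ram(params_billions: float, quant_bits: int, ram_gb: int,
                overhead: float = 1.2, usable_fraction: float = 0.75) -> bool:
    """Estimate whether a quantized model fits in a machine's RAM."""
    model_gb = params_billions * quant_bits / 8 * overhead
    return model_gb <= ram_gb * usable_fraction  # leave room for macOS + apps

for size in (1.5, 7, 8, 14, 32, 70):
    m1 = fits_in_ram(size, quant_bits=4, ram_gb=16)
    m4 = fits_in_ram(size, quant_bits=4, ram_gb=128)
    print(f"{size:>5}B 4-bit | M1 16GB: {'yes' if m1 else 'no':<3} | "
          f"M4 Max 128GB: {'yes' if m4 else 'no'}")
```

With these assumptions the estimate matches the video's cutoff: 14B fits in 16GB, 32B does not, and even 70B fits comfortably in 128GB.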
If you do video editing, audio editing, image editing, always have 50 tabs open, and run multiple apps simultaneously, the M4 Max is a no-brainer for smooth performance. However, if you just want to dabble in AI, other M-series chips are surprisingly capable with a minimum of 16GB RAM.
The Testing Challenge
For our test, I selected a deceptively simple prompt that was actually used by Microsoft in the late 1990s and early 2000s to test human reasoning in interviews:
"You have three boxes, each incorrectly labeled. One contains only apples, one contains only oranges, and one contains a mix of both. You can only draw one fruit from one box, then you have to correctly reassign all the labels."
The correct solution requires opening the box labeled "mixed" first. Because every label is wrong, that box cannot actually hold the mix; it must contain only one kind of fruit, so a single draw reveals its contents. If you draw an apple, that box becomes "apples." The box labeled "oranges" can't hold oranges (its label is wrong) and can't hold apples (already assigned), so it becomes "mixed," leaving the "apples" box as "oranges." The reverse applies if you draw an orange.
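If you want to sanity-check the models' answers yourself, the puzzle is small enough to brute-force. This sketch enumerates every labeling where all three labels are wrong and confirms that a single draw from the "mixed" box pins down the full assignment:

```python
from itertools import permutations

LABELS = ("apples", "oranges", "mixed")

# "Each incorrectly labeled" means the true contents are a derangement
# of the labels: no box's label matches what it actually holds.
scenarios = [
    dict(zip(LABELS, contents))
    for contents in permutations(LABELS)
    if all(contents[i] != LABELS[i] for i in range(3))
]

for truth in scenarios:
    # The box labeled "mixed" can't actually be mixed, so it holds a
    # single fruit - one draw reveals which.
    drawn = "apple" if truth["mixed"] == "apples" else "orange"
    print(f"draw from 'mixed' box -> {drawn}; true contents: {truth}")

# Only two mislabelings are possible, and each yields a different fruit,
# so the single draw uniquely determines every label.
assert len(scenarios) == 2
assert len({t["mixed"] for t in scenarios}) == 2
```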
Performance Testing Results
We tested models from smallest to largest, tracking reasoning tokens, generation speed, total time, and answer accuracy (a simple way to reproduce the timing is sketched after these findings). Here are the key findings:
Speed Comparison: The M4 Max consistently outperformed the M1 Air, typically by 6-10x. For example, the Qwen 7B model took 41 seconds on the M1 but only 3.6 seconds on the M4 Max, roughly 11x faster.
Accuracy Insights: Surprisingly, the smallest model (1.5B) failed on both machines despite extensive reasoning. The Qwen 7B model succeeded on both machines, while Llama 8B showed inconsistent results, failing on the M1 but passing on the M4 Max.
Architecture Matters: Qwen models consistently outperformed Llama models of similar size, likely due to Qwen's stronger mathematical reasoning capabilities.
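To reproduce the speed numbers, one rough approach is to time a single request against LM Studio's local server and divide the completion tokens by the elapsed wall-clock time. A sketch, assuming the server is running and returns an OpenAI-style usage block (the model id is again hypothetical):

```python
# Rough tokens-per-second measurement against LM Studio's local server.
# Wall-clock timing of one non-streamed request; the "usage" field is
# assumed to follow the OpenAI response shape, so fall back gracefully.
import time
import requests

PROMPT = ("You have three boxes, each incorrectly labeled. One contains only "
          "apples, one only oranges, and one a mix of both. You may draw one "
          "fruit from one box, then must correctly reassign all the labels.")

start = time.monotonic()
resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={"model": "deepseek-r1-distill-qwen-7b",  # hypothetical id
          "messages": [{"role": "user", "content": PROMPT}]},
    timeout=1200,
).json()
elapsed = time.monotonic() - start

tokens = resp.get("usage", {}).get("completion_tokens")
print(f"total time: {elapsed:.1f}s")
if tokens:
    print(f"generation speed: {tokens / elapsed:.1f} tokens/s")
```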
Resource Requirements
The M1 Air maxed out at 90-92% RAM usage when running the 14B model, creating a hard cap on larger models. The M4 Max handled even the 70B model while maintaining 28% free RAM, allowing for multitasking with video editing, multiple browser tabs, and other AI tools simultaneously.
Key Takeaways
The most impressive finding was that a Qwen 7B model on the M1 Air delivered intelligent responses in just 2 minutes - comparable to much larger models on more powerful hardware. This demonstrates that significant AI capability is accessible even on budget hardware.
For AI enthusiasts who want access to the latest models and seamless multitasking, the M4 Max upgrade is worthwhile. However, for casual AI exploration, an M1 Air with 16GB RAM can provide surprisingly capable performance with smaller models.
The democratization of AI through local model execution means powerful reasoning capabilities are now available to everyone, regardless of budget constraints.