Every Apple CPU Compared: M1 Through M5 Max (All Variants)
TL;DR: this page lists every Apple Silicon chip from the M1 through the M5 Max, including the lower-spec binned variants Apple buries in the fine print, with verified specs, practical buying advice, and a dedicated section on which chips can actually run local AI models. It was inspired by the popular r/mac comparison table, then expanded with official Apple sources, all binned variants, and honest flagging of estimated values. Bookmark it; I will keep it updated.
The M5 Max delivers an estimated 33 to 41 TFLOPS of FP16 compute in a laptop, depending on whether you count just the GPU ALUs or include Apple’s new per-core Neural Accelerators. Five years ago, the M1 delivered 5.2 TFLOPS. Memory bandwidth has more than doubled. And Apple still does not publish a single official TFLOPS number for any of these chips.
So, let's make our own list. See below for all the details. (This is still a work in progress, so if you see anything I missed, please let me know. The idea came from Mac Reddit posts, but the numbers here are cross-referenced against as many sources as I could find.)
Last updated: March 30, 2026
The Master Comparison Table
This table covers the maximum configuration of each chip tier. Binned (lower-spec) variants are documented in the generation-specific sections below. I use Apple’s marketed bandwidth numbers here. FP16 TFLOPS are estimated as 2x FP32 unless independently verified; all estimated values are marked with “~”. The “+NA” column shows estimated total FP16 including M5’s dedicated Neural Accelerators (M1-M4 show “–” since they lack separate Neural Accelerators). CPU core breakdown shows Performance + Efficiency (or Super + Performance for M5).
| SoC | CPU Cores | GPU Cores | Process | Neural Engine | Max RAM | Bandwidth | ~FP16 TFLOPS | ~FP16+NA | TB | Year |
|---|---|---|---|---|---|---|---|---|---|---|
| A18 Pro (Neo) | 6 (2P+4E) | 5 | 3nm (N3E) | 16-core / 35 TOPS | 8 GB | 60 GB/s | ~3.2 | — | None | 2024 |
| M1 | 8 (4P+4E) | 8 | 5nm (N5) | 16-core / 11 TOPS | 16 GB | 68 GB/s | ~5.2 | — | TB3 | 2020 |
| M1 Pro | 10 (8P+2E) | 16 | 5nm | 16-core / 11 TOPS | 32 GB | 200 GB/s | ~10.4 | — | TB4 | 2021 |
| M1 Max | 10 (8P+2E) | 32 | 5nm | 16-core / 11 TOPS | 64 GB | 400 GB/s | ~20.8 | — | TB4 | 2021 |
| M1 Ultra | 20 (16P+4E) | 64 | 5nm | 32-core / 22 TOPS | 128 GB | 800 GB/s | ~42.0 | — | TB4 | 2022 |
| M2 | 8 (4P+4E) | 10 | 5nm (N5P) | 16-core / 15.8 TOPS | 24 GB | 100 GB/s | ~7.2 | — | TB3* | 2022 |
| M2 Pro | 12 (8P+4E) | 19 | 5nm (N5P) | 16-core / 15.8 TOPS | 32 GB | 200 GB/s | ~13.7 | — | TB4 | 2023 |
| M2 Max | 12 (8P+4E) | 38 | 5nm (N5P) | 16-core / 15.8 TOPS | 96 GB | 400 GB/s | ~27.2 | — | TB4 | 2023 |
| M2 Ultra | 24 (16P+8E) | 76 | 5nm (N5P) | 32-core / 31.6 TOPS | 192 GB | 800 GB/s | ~54.4 | — | TB4 | 2023 |
| M3 | 8 (4P+4E) | 10 | 3nm (N3B) | 16-core / 18 TOPS | 24 GB | 100 GB/s | ~7.1 | — | TB3* | 2023 |
| M3 Pro | 12 (6P+6E) | 18 | 3nm (N3B) | 16-core / 18 TOPS | 36 GB | 150 GB/s | ~12.8 | — | TB4 | 2023 |
| M3 Max | 16 (12P+4E) | 40 | 3nm (N3B) | 16-core / 18 TOPS | 128 GB | 400 GB/s | ~28.4 | — | TB4 | 2023 |
| M3 Ultra | 32 (24P+8E) | 80 | 3nm (N3B) | 32-core / 36 TOPS | 512 GB | 800 GB/s | ~56.8 | — | TB5 | 2025 |
| M4 | 10 (4P+6E) | 10 | 3nm (N3E) | 16-core / 38 TOPS | 32 GB | 120 GB/s | ~8.6 | — | TB4 | 2024 |
| M4 Pro | 14 (10P+4E) | 20 | 3nm (N3E) | 16-core / 38 TOPS | 64 GB | 273 GB/s | ~17.2 | — | TB5 | 2024 |
| M4 Max | 16 (12P+4E) | 40 | 3nm (N3E) | 16-core / 38 TOPS | 128 GB | 546 GB/s | ~34.4 | — | TB5 | 2024 |
| M5 | 10 (4S+6E) | 10 | 3nm (N3P) | 16-core / TBD | 32 GB | 153 GB/s | ~8.4 | ~10.2 | TB4 | 2025 |
| M5 Pro | 18 (6S+12P) | 20 | 3nm (N3P) | 16-core / TBD | 64 GB | 307 GB/s | ~16.6 | ~20.4 | TB5 | 2026 |
| M5 Max | 18 (6S+12P) | 40 | 3nm (N3P) | 16-core / TBD | 128 GB | 614 GB/s | ~33.2 | ~41.0 | TB5 | 2026 |
* M2 and M3 base chips use TB3/USB4 on laptops but TB4 on Mac mini. FP16 TFLOPS estimated as 2x FP32 from Flopper.io and CPU-Monkey calculations; anchor points verified via Wikipedia ALU counts (M1 8-GPU: 2.6 FP32, M2 Ultra 76-GPU: 27.2 FP32).
Note the bandwidth regression from the M2 Pro (200 GB/s) to the M3 Pro (150 GB/s).
Apple does not officially publish TFLOPS for any chip. Sources: Apple Newsroom, Wikipedia, Flopper.io, CPU-Monkey.
+NA column (M5 only): The M5 GPU includes dedicated per-core Neural Accelerators that handle FP16 matrix operations separately from the standard GPU ALUs. The “+NA” values estimate total FP16 compute (GPU ALU + Neural Accelerators). M1 through M4 GPUs do not have separate Neural Accelerators, so only GPU ALU FP16 applies. +NA estimates from community sources; not independently verified by Apple.
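The 2x-FP32 estimate falls out of ALU counts and clocks. Here is a minimal sketch of the arithmetic; the 128 ALUs per GPU core and the ~1.278 GHz M1 GPU clock are community-reported figures, not Apple-published, so treat the output as an estimate in the same spirit as the "~" values above:

```python
def est_fp32_tflops(gpu_cores, clock_ghz, alus_per_core=128):
    """FP32 TFLOPS = total ALUs x 2 ops/cycle (fused multiply-add) x clock."""
    return gpu_cores * alus_per_core * 2 * clock_ghz / 1000

# M1, 8-core GPU at a reported ~1.278 GHz clock:
fp32 = est_fp32_tflops(8, 1.278)  # ~2.6 TFLOPS FP32 (matches the anchor above)
fp16 = 2 * fp32                   # ~5.2 TFLOPS FP16, the table's "~5.2"
```

The same formula with each chip's core count and reported clock reproduces the rest of the "~FP16" column to within rounding.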
Want a version you can copy and paste into an email, Reddit, Discord, or your notes? Here it is:
+------------------+-----------+--------+--------+--------+-----------+---------+------+------+
| EVERY APPLE M-CHIP COMPARED (M1-M5 MAX) jdhodges.com |
+------------------+-----------+--------+--------+--------+-----------+---------+------+------+
| SoC | Max CPU/ |Est.GPU | +NA | Max RAM| Memory | Neural | Node | Year |
| | GPU Cores |FP16 TF | FP16 TF| Config | Bandwidth | Engine | | |
+------------------+-----------+--------+--------+--------+-----------+---------+------+------+
| A18 Pro (Neo) | 6C / 5C | ~3.2 | -- | 8 GB | 60 GB/s | 35 TOPS | 3nmE | 2024 |
+------------------+-----------+--------+--------+--------+-----------+---------+------+------+
| M1 | 8C / 8C | ~5.2 | -- | 16 GB | 68 GB/s | 11 TOPS | 5nm | 2020 |
| M1 Pro | 10C / 16C | ~10.4 | -- | 32 GB | 200 GB/s | 11 TOPS | 5nm | 2021 |
| M1 Max | 10C / 32C | ~20.8 | -- | 64 GB | 400 GB/s | 11 TOPS | 5nm | 2021 |
| M1 Ultra | 20C / 64C | ~42.0 | -- | 128 GB | 800 GB/s | 22 TOPS | 5nm | 2022 |
+------------------+-----------+--------+--------+--------+-----------+---------+------+------+
| M2 | 8C / 10C | ~7.2 | -- | 24 GB | 100 GB/s |15.8TOPS | 5nm+ | 2022 |
| M2 Pro | 12C / 19C | ~13.7 | -- | 32 GB | 200 GB/s |15.8TOPS | 5nm+ | 2023 |
| M2 Max | 12C / 38C | ~27.2 | -- | 96 GB | 400 GB/s |15.8TOPS | 5nm+ | 2023 |
| M2 Ultra | 24C / 76C | ~54.4 | -- | 192 GB | 800 GB/s |31.6TOPS | 5nm+ | 2023 |
+------------------+-----------+--------+--------+--------+-----------+---------+------+------+
| M3 | 8C / 10C | ~7.1 | -- | 24 GB | 100 GB/s | 18 TOPS | 3nm | 2023 |
| M3 Pro | 12C / 18C | ~12.8 | -- | 36 GB | 150 GB/s | 18 TOPS | 3nm | 2023 |
| M3 Max | 16C / 40C | ~28.4 | -- | 128 GB | 400 GB/s | 18 TOPS | 3nm | 2023 |
| M3 Ultra | 32C / 80C | ~56.8 | -- | 512 GB | 800 GB/s | 36 TOPS | 3nm | 2025 |
+------------------+-----------+--------+--------+--------+-----------+---------+------+------+
| M4 | 10C / 10C | ~8.6 | -- | 32 GB | 120 GB/s | 38 TOPS | 3nmE | 2024 |
| M4 Pro | 14C / 20C | ~17.2 | -- | 64 GB | 273 GB/s | 38 TOPS | 3nmE | 2024 |
| M4 Max | 16C / 40C | ~34.4 | -- | 128 GB | 546 GB/s | 38 TOPS | 3nmE | 2024 |
+------------------+-----------+--------+--------+--------+-----------+---------+------+------+
| M5 | 10C / 10C | ~8.4 | ~10.2+ | 32 GB | 153 GB/s | TBD | 3nmP | 2025 |
| M5 Pro | 18C / 20C | ~16.6 | ~20.4+ | 64 GB | 307 GB/s | TBD | 3nmP | 2026 |
| M5 Max | 18C / 40C | ~33.2 | ~41.0+ | 128 GB | 614 GB/s | TBD | 3nmP | 2026 |
+------------------+-----------+--------+--------+--------+-----------+---------+------+------+
GPU FP16 est. (2x FP32). +NA = GPU ALU + Neural Accelerators (M5 only).
Sources: Apple Newsroom, Flopper.io, CPU-Monkey, Geekbench Browser.
Full post & methodology: jdhodges.com Updated 2026-03
Notable absences: There is no M4 Ultra. Apple skipped it entirely. The M5 Ultra has not been announced as of March 2026. Apple has not published Neural Engine TOPS for the M5 family; the M5 GPU includes new per-core “Neural Accelerators” that complicate direct comparison.
Key trends to notice: Bandwidth doubles at each tier (Base to Pro to Max to Ultra). The M3 Pro’s 150 GB/s is a 25% regression from the M2 Pro’s 200 GB/s — Apple narrowed the memory bus and never acknowledged it publicly. The M4 and M5 generations recover and then some. Neural Engine TOPS more than tripled from M1 (11) to M4 (38), reflecting Apple’s AI pivot.
M1 Family (2020-2022): The Revolution
The M1 was the chip that ended the Intel era. When Apple shipped it in November 2020, the industry’s reaction was roughly: “Wait, a tablet chip is beating our best laptop processors?” And it was, comfortably, while using a fraction of the power.
The M1 family spans four tiers: base M1, M1 Pro, M1 Max, and M1 Ultra (two M1 Max dies fused via UltraFusion). All use TSMC’s 5nm process. The base M1 used LPDDR4X memory, while Pro and above jumped to LPDDR5.
What to know about binned variants: The M1 7-core GPU shipped in the base MacBook Air and base iMac — one disabled GPU core, minimal real-world difference. The M1 Pro 8-core CPU (6P+2E) shipped in the base 14-inch MacBook Pro with two fewer performance cores. The M1 Max 24-core GPU and M1 Ultra 48-core GPU are similarly binned versions of the 32-core and 64-core respectively.
| Variant | CPU | GPU | Max RAM | Bandwidth | GB6 SC/MC |
|---|---|---|---|---|---|
| M1 (7-GPU) | 8 (4P+4E) | 7 | 16 GB | 68 GB/s | 2369 / 8576 |
| M1 (8-GPU) | 8 (4P+4E) | 8 | 16 GB | 68 GB/s | 2369 / 8576 |
| M1 Pro (8C/14G) | 8 (6P+2E) | 14 | 32 GB | 200 GB/s | 2360 / 10312 |
| M1 Pro (10C/16G) | 10 (8P+2E) | 16 | 32 GB | 200 GB/s | 2385 / 12347 |
| M1 Max (24G) | 10 (8P+2E) | 24 | 64 GB | 400 GB/s | 2397 / 12439 |
| M1 Max (32G) | 10 (8P+2E) | 32 | 64 GB | 400 GB/s | 2397 / 12439 |
| M1 Ultra (48G) | 20 (16P+4E) | 48 | 128 GB | 800 GB/s | 2398 / 18436 |
| M1 Ultra (64G) | 20 (16P+4E) | 64 | 128 GB | 800 GB/s | 2398 / 18436 |
Products: MacBook Air (2020), MacBook Pro 13″ (2020), Mac mini (2020), iMac 24″ (2021), MacBook Pro 14″/16″ (2021), Mac Studio (2022). All discontinued. The M1 MacBook Air remains one of the best value laptops ever made if you can find one under $600.
Source: Apple M1 Newsroom, M1 Pro/Max Newsroom, M1 Ultra Newsroom
M2 Family (2022-2023): The Refinement
The M2 was an evolution, not a revolution. Same 5nm process (TSMC N5P, an enhanced version), modest CPU gains, better GPU, and critically: the Pro and Max variants jumped from 2 efficiency cores to 4. That meant the M2 Pro and M2 Max had notably better multithreaded performance scaling.
The M2 also brought the memory ceiling up: 24 GB for the base chip (vs 16 GB on M1), 96 GB for the Max (vs 64 GB), and 192 GB for the Ultra (vs 128 GB).
Gotcha: the M2 256 GB storage problem. The base M2 MacBook Air and 13-inch MacBook Pro with 256 GB storage used a single NAND chip instead of two. This halved the SSD read/write speed compared to the 512 GB models (and compared to the M1). Apple fixed this with the M3. If you are buying a refurbished M2 Mac, get the 512 GB model.
Gotcha: Thunderbolt varies by product. The base M2 chip uses Thunderbolt 3 on laptops (MacBook Air, MacBook Pro 13″) but Thunderbolt 4 on the Mac mini. Same chip, different controller implementation.
| Variant | CPU | GPU | Max RAM | Bandwidth | GB6 SC/MC |
|---|---|---|---|---|---|
| M2 (8-GPU) | 8 (4P+4E) | 8 | 24 GB | 100 GB/s | 2600 / 9900 |
| M2 (10-GPU) | 8 (4P+4E) | 10 | 24 GB | 100 GB/s | 2640 / 10000 |
| M2 Pro (10C/16G) | 10 (6P+4E) | 16 | 32 GB | 200 GB/s | 2665 / 12300 |
| M2 Pro (12C/19G) | 12 (8P+4E) | 19 | 32 GB | 200 GB/s | 2680 / 14650 |
| M2 Max (30G) | 12 (8P+4E) | 30 | 64 GB | 400 GB/s | 2760 / 14750 |
| M2 Max (38G) | 12 (8P+4E) | 38 | 96 GB | 400 GB/s | 2760 / 14950 |
| M2 Ultra (60G) | 24 (16P+8E) | 60 | 192 GB | 800 GB/s | 2750 / 21400 |
| M2 Ultra (76G) | 24 (16P+8E) | 76 | 192 GB | 800 GB/s | 2750 / 21400 |
Products: MacBook Air 13″/15″ (2022-2023), MacBook Pro 13″ (2022), Mac mini (2023), MacBook Pro 14″/16″ (2023), Mac Studio (2023), Mac Pro (2023). M2 Ultra still current in the Mac Pro.
Source: Apple M2 Newsroom, M2 Pro/Max Newsroom, M2 Ultra Newsroom
M3 Family (2023-2025): The Move to 3nm (and a slight regression)
The M3 was Apple’s first 3nm chip (TSMC N3B), bringing hardware ray tracing, dynamic caching, and mesh shading to the GPU. Single-threaded CPU performance jumped roughly 30% over M1. All good.
But the M3 Pro has a problem that Apple would prefer you not notice.
The M3 Pro bandwidth regression. Apple narrowed the M3 Pro’s memory bus from 256-bit (M1/M2 Pro) to 192-bit. This dropped bandwidth from 200 GB/s to 150 GB/s. That is a 25% cut. For CPU-bound workflows, you will not notice. For memory-intensive tasks (large compiles, video editing timelines, running local AI models), the M3 Pro can bottleneck where the M2 Pro did not. If you are shopping refurbished and choosing between an M2 Pro and M3 Pro, this is the number that matters most.
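The regression falls straight out of the bus math: peak bandwidth is bus width in bytes times the memory transfer rate. A quick check (the bus widths come from third-party die analyses, and LPDDR5-6400 for the M2/M3 Pro and LPDDR5X-8533 for the M4 Pro are reported configurations, not Apple-confirmed):

```python
def bandwidth_gbs(bus_width_bits, transfer_mts):
    """Peak memory bandwidth in GB/s = (bus width in bytes) x (MT/s) / 1000."""
    return bus_width_bits / 8 * transfer_mts / 1000

m2_pro = bandwidth_gbs(256, 6400)  # 204.8 -> marketed as 200 GB/s
m3_pro = bandwidth_gbs(192, 6400)  # 153.6 -> marketed as 150 GB/s
m4_pro = bandwidth_gbs(256, 8533)  # ~273 GB/s, matching Apple's figure
```

Same memory generation, narrower bus: the M3 Pro's cut is purely the move from 256-bit to 192-bit.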
The M3 Max also has a bandwidth split: the binned 14-core CPU / 30-GPU variant drops to 300 GB/s. The full 16-core / 40-GPU variant keeps 400 GB/s.
M3 Ultra: yes, it exists. It shipped in March 2025 in the Mac Studio, with up to 512 GB of unified memory and Thunderbolt 5, making it the first Apple Silicon chip with TB5.
| Variant | CPU | GPU | Max RAM | Bandwidth | GB6 SC/MC |
|---|---|---|---|---|---|
| M3 (8-GPU) | 8 (4P+4E) | 8 | 24 GB | 100 GB/s | 3123 / 12198 |
| M3 (10-GPU) | 8 (4P+4E) | 10 | 24 GB | 100 GB/s | 3123 / 12198 |
| M3 Pro (11C/14G) | 11 (5P+6E) | 14 | 36 GB | 150 GB/s | 3100 / 14463 |
| M3 Pro (12C/18G) | 12 (6P+6E) | 18 | 36 GB | 150 GB/s | 3153 / 15612 |
| M3 Max (14C/30G) | 14 (10P+4E) | 30 | 96 GB | 300 GB/s | 3172 / 19156 |
| M3 Max (16C/40G) | 16 (12P+4E) | 40 | 128 GB | 400 GB/s | 3212 / 21225 |
| M3 Ultra (28C/60G) | 28 (20P+8E) | 60 | 512 GB | 800 GB/s | 3247 / 26937 |
| M3 Ultra (32C/80G) | 32 (24P+8E) | 80 | 512 GB | 800 GB/s | 3247 / 28169 |
Source: Apple M3/Pro/Max Newsroom, M3 Ultra Newsroom
M4 Family (2024-2025): The AI Pivot
The M4 is where Apple got serious about on-device AI. The Neural Engine jumped from 18 TOPS (M3) to 38 TOPS (M4), more than doubling throughput. The base M4 also finally shipped with a minimum of 16 GB RAM in Macs.
TSMC’s second-generation 3nm (N3E) process brought modest efficiency gains. The M4 was also the first Apple SoC to use the ARMv9 instruction set. Thunderbolt 5 arrives on M4 Pro and M4 Max (120 Gbps). There is no M4 Ultra.
| Variant | CPU | GPU | Max RAM | Bandwidth | NE TOPS | GB6 SC/MC |
|---|---|---|---|---|---|---|
| M4 (8C/8G) | 8 (4P+4E) | 8 | 32 GB | 120 GB/s | 38 | 3737 / 14815 |
| M4 (10C/10G) | 10 (4P+6E) | 10 | 32 GB | 120 GB/s | 38 | 3755 / 15050 |
| M4 Pro (12C/16G) | 12 (8P+4E) | 16 | 64 GB | 273 GB/s | 38 | 3789 / 18500 |
| M4 Pro (14C/20G) | 14 (10P+4E) | 20 | 64 GB | 273 GB/s | 38 | 3789 / 22772 |
| M4 Max (14C/32G) | 14 (10P+4E) | 32 | 128 GB | 410 GB/s | 38 | 4040 / 23959 |
| M4 Max (16C/40G) | 16 (12P+4E) | 40 | 128 GB | 546 GB/s | 38 | 3982 / 26371 |
Source: Apple M4 Newsroom, M4 Pro/Max Newsroom, MacBook Pro 2024 Tech Specs
M5 Family (2025-2026): Fusion and New Core Names
The M5 brings two architectural changes worth understanding. First, Apple renamed its CPU cores: what used to be “performance cores” are now “super cores,” and a new tier called “performance cores” sits between super and efficiency. The base M5 has super + efficiency cores; the M5 Pro and M5 Max have super + performance cores (no efficiency cores).
Second, the M5 Pro and M5 Max use “Fusion Architecture”: two third-generation 3nm dies bonded together. This gives the Pro and Max tiers 18 CPU cores, up from 14 (M4 Pro) and 16 (M4 Max). Memory jumps to LPDDR5X-9600. The M5 Max 40-core GPU hits 614 GB/s bandwidth. Apple has not published Neural Engine TOPS for the M5 family. Additionally, M5 GPU cores include dedicated “Neural Accelerators” that handle FP16 matrix operations independently of the standard GPU ALUs. This means M5 raw GPU ALU FP16 compute appears flat or slightly lower than M4 (~8.4 vs ~8.6 TFLOPS for the base chips), but total FP16 throughput including the Neural Accelerators is significantly higher (~10.2 TFLOPS for the base M5). See the +NA column in the master table above.
| Variant | CPU | GPU | Max RAM | Bandwidth | GB6 SC/MC |
|---|---|---|---|---|---|
| M5 (10C/8G) | 10 (4S+6E) | 8 | 32 GB | 153 GB/s | 4255 / 17846 |
| M5 (10C/10G) | 10 (4S+6E) | 10 | 32 GB | 153 GB/s | 4255 / 17846 |
| M5 Pro (15C/16G) | 15 (5S+10P) | 16 | 64 GB | 307 GB/s | 4280 / 25970 |
| M5 Pro (18C/20G) | 18 (6S+12P) | 20 | 64 GB | 307 GB/s | 4278 / 28260 |
| M5 Max (18C/32G) | 18 (6S+12P) | 32 | 128 GB | 460 GB/s | 4303 / 28938 |
| M5 Max (18C/40G) | 18 (6S+12P) | 40 | 128 GB | 614 GB/s | 4303 / 28938 |
Source: Apple M5 Newsroom, M5 Pro/Max Newsroom
The A18 Pro: MacBook Neo’s Phone Chip in a Laptop
The MacBook Neo deserves its own section because it does not fit neatly into the M-chip hierarchy. It uses the A18 Pro, the same chip from the iPhone 16 Pro, but with a binned 5-core GPU (the iPhone gets 6 cores).
In raw single-threaded CPU performance, the A18 Pro beats both the M1 and M2 (Geekbench 6 single-core: 3566 vs M1’s 2369 and M2’s 2640). But it only has 6 CPU cores total (2P+4E), so multithreaded performance is closer to M1 territory (GB6 multi: 8646 vs M1’s 8576). The real story is the memory: 8 GB, 60 GB/s bandwidth, no Thunderbolt. This is an iPhone-class memory subsystem in a laptop. For $599.
Source: MacBook Neo Newsroom, MacBook Neo Tech Specs
Which Chip Can Run Your AI Models?
This is increasingly the reason technical users buy Macs. Apple Silicon’s unified memory means the GPU can access all available RAM for model inference, making Macs surprisingly capable local AI machines.
The table below uses each chip tier’s highest-memory configuration (for example, M4 Pro at 64 GB, M3 Ultra at 512 GB). If you bought a lower-RAM SKU, your chip may drop one or more model size tiers. Check the master table above for exact RAM configs.
| RAM | Model Size (Q4) | Examples | Chips |
|---|---|---|---|
| 8 GB | Up to 7B | Llama 3.1 8B (Q4), Mistral 7B | A18 Pro, M1/M2/M3 base |
| 16 GB | Up to 13B | CodeLlama 13B, Llama 3.1 8B (full) | All base chips (M1-M5) |
| 32 GB | Up to 30B | Qwen 2.5 32B (Q4) | M1/M2 Pro, M4/M5 base (32 GB config) |
| 64 GB | Up to 70B (Q4) | Llama 3.1 70B, Qwen 2.5 72B | M1/M2 Max, M4/M5 Pro/Max |
| 128 GB | Up to 120B | Qwen 2.5 110B (Q4) | M1/M2 Ultra, M3/M4/M5 Max |
| 192+ GB | Large MoE | Mixtral 8x22B | M2 Ultra, M3 Ultra (512 GB) |
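You can sanity-check whether a quantized model fits before downloading it. A rough sketch of the tiers above; the ~0.65 GB per billion parameters for Q4 (weights plus context overhead) and the ~75% of RAM that macOS lets the GPU wire by default are approximations from community experience, not official figures:

```python
def fits_in_ram(params_b, ram_gb, gb_per_b_params=0.65, usable_fraction=0.75):
    """Rough check: Q4 weights + cache overhead vs. GPU-wirable RAM."""
    needed_gb = params_b * gb_per_b_params
    usable_gb = ram_gb * usable_fraction
    return needed_gb <= usable_gb

fits_in_ram(70, 64)  # True: ~46 GB needed vs ~48 GB usable -- borderline
fits_in_ram(70, 36)  # False: an M3 Pro tops out well short of 70B
```

Borderline results (like 70B on 64 GB) mean you should close everything else before loading the model.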
The critical insight: Memory bandwidth matters as much as capacity for LLM inference. The M3 Pro’s 150 GB/s makes it notably slower for LLM inference than the M2 Pro’s 200 GB/s, even though the M3 Pro has more max RAM (36 vs 32 GB). If local AI is your primary use case, bandwidth should be your first filter.
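Bandwidth-bound decoding has a handy back-of-envelope ceiling: generating each token requires streaming all the model weights through memory once, so tokens/sec is at most bandwidth divided by model size. A sketch (real-world throughput lands well below this ceiling due to compute, KV-cache reads, and software overhead):

```python
def max_tokens_per_sec(bandwidth_gbs, params_b, bits_per_weight=4):
    """Upper bound on decode speed: each token reads every weight once."""
    model_gb = params_b * bits_per_weight / 8
    return bandwidth_gbs / model_gb

# 70B model at Q4 (~35 GB of weights):
m2_pro_ceiling = max_tokens_per_sec(200, 70)  # ~5.7 tok/s
m3_pro_ceiling = max_tokens_per_sec(150, 70)  # ~4.3 tok/s -- the 25% cut, directly
m4_max_ceiling = max_tokens_per_sec(546, 70)  # ~15.6 tok/s
```

This is why the M3 Pro regression shows up token-for-token in LLM benchmarks: the inference ceiling scales linearly with bandwidth.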
Buying Guide: Which Chip Do You Actually Need?
Web, Email, Light Office: MacBook Neo ($599) or refurbished M1/M2 MacBook Air ($500-700).
Students: MacBook Neo ($599) for notes/browsing, MacBook Air M4 ($1,099) if coursework involves coding or creative tools.
Software Developers: M4 Pro ($1,999+) or M5 Pro ($2,199+). 48-64 GB if running Docker + IDE + databases.
Creative Professionals: M4/M5 Pro minimum, M4/M5 Max for long timelines. Bandwidth matters for video editing.
Local AI / LLM Inference: M4 Max 128 GB ($3,499+) or M5 Max 128 GB ($3,599+). Memory capacity and bandwidth are everything. Desktop option: M3 Ultra Mac Studio (512 GB).
“I Want the Best Laptop”: M5 Max 18-core / 40-GPU / 128 GB ($3,599+). 614 GB/s. 18 cores. TB5. Wi-Fi 7.
⭐ Prices change quickly, so always check for the latest prices and deals ⭐
Apple-Specific Things to Know and Watch Out For
RAM Pricing. Apple has always charged a premium for memory upgrades, and none of them can be done after purchase. The global DRAM shortage that began in late 2025 has pushed memory component costs up over 50%, but Apple’s long-term supply agreements and buying power have kept their upgrade pricing relatively stable compared to the broader market. That said, going from base RAM to maximum on an M5 Max will still cost you well over a thousand dollars in upgrades. Check current pricing on apple.com and buy what you need now.
The Base Model Tax. Apple shipped 8 GB base MacBooks from 2020 through 2023. If buying used M1/M2/M3, check the RAM. An 8 GB M3 Air is worse than a 16 GB M1 Air.
The M3 Pro Bandwidth Regression. 150 GB/s is 25% slower than the M2 Pro’s 200 GB/s. Apple never acknowledged it. If buying a 2023 MacBook Pro, the M3 Max is better if bandwidth matters.
Thunderbolt Confusion. Base chips: TB3/4 (varies by Mac). M1-M3 Pro/Max: TB4. M3 Ultra, M4+ Pro/Max: TB5 (120 Gbps vs 40 Gbps).
M2 256 GB SSD Speed. Single NAND chip halved SSD speed vs 512 GB. Fixed with M3. Avoid the 256 GB M2 when buying used.
Power Consumption. Apple does not publish TDP or power draw for any M-series chip. Third-party measurements from sites like Notebookcheck suggest peak package power ranges from roughly 10W (base M1/M2 at idle) to 75W or more (M4 Max/M5 Max under sustained GPU load), but numbers vary significantly by workload, cooling solution, and measurement methodology. Do not trust any single source claiming exact wattage figures.
The Bottom Line
Apple makes outstanding chips and then buries half the useful information in footnotes, tech spec PDFs, and marketing language that never includes a single TFLOPS number. These charts are meant to make the information a little more easily readable and findable.
The short version: if buying new in 2026, the M4 and M5 families are where the value is. The M4 Mac mini starting at ~$499 directly from Apple is a great value. The M5 Pro is the workhorse. The M5 Max is the ceiling for laptops. And the M3 Ultra Mac Studio remains the only option for 512 GB of unified memory.
If buying refurbished, watch for the M3 Pro bandwidth regression and the M2 256 GB SSD problem. Both are deal-breakers that original spec sheets do not make obvious.
I will update this guide when new chips are announced. Next expected update: M5 Ultra (anticipated 2026).
Sources and Further Reading
- Apple M1 Newsroom
- M1 Pro/Max Newsroom
- M1 Ultra Newsroom
- Apple M2 Newsroom
- M2 Pro/Max Newsroom
- M2 Ultra Newsroom
- M3/Pro/Max Newsroom
- M3 Ultra Newsroom
- Apple M4 Newsroom
- M4 Pro/Max Newsroom
- Apple M5 Newsroom
- M5 Pro/Max Newsroom
- MacBook Neo Newsroom
- Wikipedia: Apple M1, M2, M3, M4, M5
- Flopper.io: GPU specs, CPU-Monkey, Geekbench Browser