Build log
Designing Project Zeus PC
Project Zeus is my personal workstation build. It's the machine I use for local AI inference, coding, shipping products, and gaming when I can sneak an hour in.
This page isn't here to flex parts. It's here because the build process is the bit I enjoy: turning vague requirements into constraints, turning constraints into decisions, and then proving those decisions with tests. It's systems thinking with a screwdriver.
Why I built it
I wanted a machine that could do four things well:
- Run local AI reliably (inference, model routing experiments, multimodal tooling) without the whole system feeling fragile.
- Handle serious day-to-day work (lots of tabs, lots of docs, lots of "why is this container doing that?" moments).
- Game at 1440p and 4K without needing to compromise settings.
- Stay maintainable. If it breaks, I want clear diagnostics and a straightforward recovery path.
A lot of people build a PC by picking parts they like. I built mine like I build marketing systems: start with the outcome, identify the bottlenecks, then design around the real constraints.
Design principles
These are the rules I set myself before I bought anything:
No assumptions
If a component choice depends on a claim, I want a source or a test.
Budget for power and thermals
Performance is pointless if the system becomes unstable or noisy.
Prefer boring reliability
I'm not allergic to tinkering, but production machines need boring foundations.
Plan the failure modes
What happens if Windows does something dramatic? What's the rollback? How do I recover quickly?
If you're into Donella Meadows, you'll recognise the vibe: find the leverage points, respect the feedback loops, and don't fight the system physics.
The build, at a glance
| Component | Choice |
| --- | --- |
| CPU | AMD Ryzen 9 7900X (12 cores / 24 threads) |
| GPU | Radeon RX 7900 XT (XFX MERC 310, 20 GB) |
| Motherboard | MSI MAG B650 Tomahawk WiFi |
| RAM | 48 GB DDR5 (2x24 GB) 6000 MT/s CL28 |
| Storage | 2 TB NVMe (WD Black SN850X) + 1 TB NVMe (Acer FA200) |
| Cooling | DeepCool AK620 air cooler |
| PSU | Corsair RM850e (850 W) |
| Case + airflow | NZXT H5 Flow + extra Arctic P12 PWM fans |
If you're reading this as a non-PC person: translate that to "fast enough to do everything I throw at it, and stable enough to trust".
The actual design process
Step 1: Start with the bottleneck
For my use, the limiting factors were:
- VRAM and GPU throughput for local models.
- CPU multithread performance for builds, tooling, and the "many processes all at once" reality.
- Storage speed and organisation because AI projects create a lot of files, fast.
So I built around a strong GPU, a high-core-count CPU, and fast NVMe storage, then made sure the rest of the system didn't become the weak link.
Step 2: Choose a platform that won't fight me
AM5 (Ryzen 7000 series) was the right mix of performance, platform longevity, and ecosystem maturity. The B650 Tomahawk gave me the connectivity and M.2 options I needed without paying for features I wouldn't use.
This was a "buy the stable baseline" decision. A machine for experiments still needs a dependable chassis.
Step 3: Memory, but in the real-world sense
48 GB might sound oddly specific. It's because 2x24 GB kits hit a sweet spot: plenty of headroom without going into "enterprise pricing".
For local AI work, RAM matters less than VRAM until you're juggling multiple tools, datasets, and containers. At that point, RAM becomes the thing that keeps the whole system feeling smooth.
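To make the VRAM-versus-RAM trade-off concrete, here's a rough back-of-envelope sketch in Python. The 1.2x overhead factor (KV cache, buffers, runtime) is my own assumption for illustration, not a measured figure:

```python
def model_footprint_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Rough memory footprint for a quantised model, in GB."""
    return params_billion * (bits_per_weight / 8) * overhead

# A 13B model at 8-bit or a 33B model at 4-bit both fit in 20 GB of VRAM;
# a 70B model at 4-bit does not, and that's when system RAM starts to matter.
for params, bits in [(13, 8), (33, 4), (70, 4)]:
    print(f"{params}B @ {bits}-bit ≈ {model_footprint_gb(params, bits):.1f} GB")
```

That's why the 20 GB of VRAM was the primary constraint, and the 48 GB of RAM is the buffer that keeps everything around the model feeling smooth.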
Step 4: Storage layout as a workflow tool
I deliberately ran two NVMe drives because it lets me separate:
- OS and applications
- Projects, datasets, and working files
It sounds mundane, but it's a quiet leverage point. When your storage is organised, everything is easier: backups, migrations, dual-boot setups, and keeping experiments from spilling into your daily driver.
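As an illustration (the paths and folder names here are hypothetical, not my exact layout), the second drive's structure is something you can scaffold in a few lines:

```python
from pathlib import Path

# Hypothetical layout for the second NVMe drive; the OS and applications
# stay on the first drive. Names are illustrative, not my exact setup.
DATA_ROOT = Path("/mnt/data")

LAYOUT = [
    "projects/clients",
    "projects/personal",
    "ai/models",       # downloaded model weights
    "ai/datasets",
    "ai/experiments",  # scratch space that can be wiped without regret
    "backups/snapshots",
]

for sub in LAYOUT:
    (DATA_ROOT / sub).mkdir(parents=True, exist_ok=True)
```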
Step 5: Airflow as a system, not a vibe
I'm not chasing "silent at all costs", but I do care about predictable thermals.
The H5 Flow is designed for airflow, and the extra fans let me balance intake and exhaust so the GPU isn't cooking in its own heat. I mapped each fan to the right header and set curves based on measured temperature behaviour, not guesswork.
The goal was stability first, then comfort.
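To show what "curves based on temperature behaviour" means in practice, here's a minimal sketch of a piecewise-linear fan curve. The set points are illustrative, not the exact values I run:

```python
# Temperature (°C) -> PWM duty (%) set points; values here are illustrative.
CURVE = [(40, 30), (60, 45), (75, 70), (85, 100)]

def fan_duty(temp_c: float, curve=CURVE) -> int:
    """Linearly interpolate a PWM duty cycle between the nearest set points."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    if temp_c >= curve[-1][0]:
        return curve[-1][1]
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if t0 <= temp_c <= t1:
            return round(d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0))

print(fan_duty(68))  # ~58% between the 60°C and 75°C set points
```

In reality this lives in the BIOS or fan control software rather than a script, but the shape of the mapping is the decision that matters.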
Tuning and verification
This is the part most builds skip. For me, this was the whole point.
- CPU tuning: PBO + Curve Optimiser (per-core offsets) to improve efficiency and reduce heat while maintaining performance.
- GPU tuning: Undervolt plus a sensible power limit and fan curve to keep hotspots under control.
- Stability testing: OCCT (combined tests), FurMark (4K GPU load), and Cinebench 2024 baselines.
I treat this like shipping software: it's not "done" until it passes the gates.
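As a concrete example of what a "gate" can look like, here's a minimal sketch that checks the peak values from a hardware-monitor CSV export against hard limits. The file name, column names, and thresholds are assumptions for illustration; a real log (HWiNFO, OCCT, etc.) needs its own column mapping:

```python
import csv

# Thresholds are illustrative, not universal safe values.
LIMITS = {"cpu_temp_c": 90.0, "gpu_hotspot_c": 95.0, "gpu_power_w": 320.0}

def gate(log_path: str, limits: dict = LIMITS) -> bool:
    """Return True only if every monitored value stayed within its limit."""
    worst = {key: float("-inf") for key in limits}
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            for key in limits:
                worst[key] = max(worst[key], float(row[key]))
    for key, value in worst.items():
        print(f"{key}: peak {value:.1f} (limit {limits[key]:.1f})")
    return all(worst[key] <= limits[key] for key in limits)

# e.g. after an OCCT or FurMark run that logged to stress_run.csv:
# print("PASS" if gate("stress_run.csv") else "FAIL")
```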
Results that matter
The fun thing about a build like this is that it changes how you work.
- Local models feel practical, not like a novelty.
- Development workflows are faster and smoother.
- Everything is easier to test because the machine itself is stable.
When a system is reliable, you stop thinking about the system and you start thinking about the work.
Lessons learned
A clear requirement beats a fancy part
Know what you're optimising for before you start shopping.
Cooling and power are part of performance
Thermals and stability matter as much as raw specs.
Testing is the difference between a build and a workstation
Verification turns a pile of parts into something you can trust.
Organisation is a force multiplier
Storage layout and clean tooling are boring, but they make everything else easier.
Questions about the build?
If you're curious about the hardware choices, the tuning process, or how I think about systems, feel free to reach out. I'm always happy to talk shop.