Build log

Designing the Project Zeus PC

November 2025

Project Zeus is my personal workstation build. It's the machine I use for local AI inference, coding, shipping products, and gaming when I can sneak an hour in.

This page isn't here to flex parts. It's here because the build process is the bit I enjoy: turning vague requirements into constraints, turning constraints into decisions, and then proving those decisions with tests. It's systems thinking with a screwdriver.

Hardware · Systems thinking · Decision log
Context

Why I built it

I wanted a machine that could do four things well:

  1. Run local AI workloads reliably (inference, model routing experiments, multimodal tooling) without the whole system feeling fragile.
  2. Handle serious day-to-day work (lots of tabs, lots of docs, lots of "why is this container doing that?" moments).
  3. Game at 1440p and 4K without needing to compromise settings.
  4. Stay maintainable. If it breaks, I want clear diagnostics and a straightforward recovery path.

A lot of people build a PC by picking parts they like. I built mine like I build marketing systems: start with the outcome, identify the bottlenecks, then design around the real constraints.

Philosophy

Design principles

These are the rules I set myself before I bought anything:

No assumptions

If a component choice depends on a claim, I want a source or a test.

Budget for power and thermals

Performance is pointless if the system becomes unstable or noisy.

Prefer boring reliability

I'm not allergic to tinkering, but production machines need boring foundations.

Plan the failure modes

What happens if Windows does something dramatic? What's the rollback? How do I recover quickly?

If you're into Donella Meadows, you'll recognise the vibe: find the leverage points, respect the feedback loops, and don't fight the system's physics.

Specs

The build, at a glance

CPU: AMD Ryzen 9 7900X (12 cores / 24 threads)
GPU: AMD Radeon RX 7900 XT (XFX MERC 310, 20 GB)
Motherboard: MSI MAG B650 Tomahawk WiFi
RAM: 48 GB DDR5 (2x24 GB), 6000 MT/s, CL28
Storage: 2 TB NVMe (WD Black SN850X) + 1 TB NVMe (Acer FA200)
Cooling: DeepCool AK620 air cooler
PSU: Corsair RM850e (850 W)
Case + airflow: NZXT H5 Flow + extra Arctic P12 PWM fans

If you're reading this as a non-PC person: translate that to "fast enough to do everything I throw at it, and stable enough to trust".

Process

The actual design process

Step 1: Start with the bottleneck

For my use, the limiting factors were:

  • VRAM and GPU throughput for local models.
  • CPU multithread performance for builds, tooling, and the "many processes all at once" reality.
  • Storage speed and organisation, because AI projects create a lot of files, fast.

So I built around a strong GPU, a high-core-count CPU, and fast NVMe storage, then made sure the rest of the system didn't become the weak link.
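The VRAM constraint is the easiest one to reason about numerically: weights dominate, so parameter count times bits per weight, padded for KV cache and runtime overhead, gives a rough fit test. Here's a minimal sketch of that arithmetic; the 1.2x overhead factor and the 32B/4-bit example are my own illustrative assumptions, not a spec:

```python
# Rough VRAM fit-check for a local model. The 1.2x overhead factor for
# KV cache, activations, and runtime is my own fudge, not a vendor formula.

def estimate_vram_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Very rough VRAM estimate (GB): weights plus a padding factor."""
    weight_gb = params_billion * bits_per_weight / 8  # billions of params x bytes each = GB
    return weight_gb * overhead

# Example: a hypothetical 32B-parameter model at 4-bit quantisation
# against the 7900 XT's 20 GB of VRAM.
need = estimate_vram_gb(32, 4)
print(f"~{need:.1f} GB needed vs 20 GB available -> fits: {need <= 20.0}")
```

Running the example lands at roughly 19 GB, which is exactly the kind of "fits, but only just" answer that tells you where the ceiling is before you buy anything.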

Step 2: Choose a platform that won't fight me

AM5 (Ryzen 7000 series) was the right mix of performance, platform longevity, and ecosystem maturity. The B650 Tomahawk gave me the connectivity and M.2 options I needed without paying for features I wouldn't use.

This was a "buy the stable baseline" decision. A machine for experiments still needs a dependable chassis.

Step 3: Memory, but in the real-world sense

48 GB might sound oddly specific. It's because 2x24 GB kits hit a sweet spot: plenty of headroom without going into "enterprise pricing".

For local AI work, RAM matters less than VRAM until you're juggling multiple tools, datasets, and containers. At that point, RAM becomes the thing that keeps the whole system feeling smooth.
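If you want to see where that threshold actually sits on a given day, a few lines of monitoring are enough. A minimal sketch, assuming the psutil package is installed; the 90% warning threshold is an arbitrary choice of mine, not a recommendation:

```python
# A minimal RAM-headroom logger for a "many tools open at once" session.
# Assumes psutil is installed; the 90% flag is an arbitrary threshold.
import time
import psutil

def log_memory_headroom(samples: int = 12, interval_s: float = 5.0) -> None:
    """Print available RAM periodically; flag when headroom gets thin."""
    for _ in range(samples):
        mem = psutil.virtual_memory()
        avail_gb = mem.available / 1024**3
        note = "  <- getting tight" if mem.percent > 90 else ""
        print(f"RAM used {mem.percent:.0f}%, {avail_gb:.1f} GB free{note}")
        time.sleep(interval_s)

log_memory_headroom()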

Step 4: Storage layout as a workflow tool

I deliberately ran two NVMe drives because it lets me separate:

  • OS and applications
  • Projects, datasets, and working files

It sounds mundane, but it's a quiet leverage point. When your storage is organised, everything is easier: backups, migrations, dual-boot setups, and keeping experiments from spilling into your daily driver.
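To give that a concrete flavour, here's a hypothetical guard-rail in the same spirit: fail loudly if working folders have crept onto the OS drive. The drive letters and folder names are illustrative assumptions, not my actual layout:

```python
# Hypothetical guard-rail: warn if working folders have crept onto the OS
# drive. Drive letters and folder names are illustrative, not my layout.
from pathlib import Path

OS_DRIVE = Path("C:/")    # OS and applications
DATA_DRIVE = Path("D:/")  # projects, datasets, working files
WORKING_DIRS = ["projects", "datasets", "models", "scratch"]

def check_layout() -> None:
    for name in WORKING_DIRS:
        on_data = (DATA_DRIVE / name).is_dir()
        on_os = (OS_DRIVE / name).is_dir()
        status = "ok" if on_data and not on_os else "CHECK"
        print(f"{name:10s} data={on_data!s:5s} os={on_os!s:5s} -> {status}")

check_layout()
```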

Step 5: Airflow as a system, not a vibe

I'm not chasing "silent at all costs", but I do care about predictable thermals.

The H5 Flow is designed for airflow, and the extra fans let me balance intake and exhaust so the GPU isn't cooking in its own heat. I mapped the fans properly and set curves based on temperature behaviour, not guesswork.

The goal was stability first, then comfort.
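Conceptually, a fan curve is just a piecewise-linear map from a temperature sensor to a duty cycle. A sketch of that idea, with made-up breakpoints; my real values came out of the temperature testing and live in the BIOS, not in code:

```python
# Illustrative only: the shape of a temperature -> fan-duty curve. These
# breakpoints are made-up examples, not my actual settings.

CURVE = [(40, 30), (60, 45), (75, 70), (85, 100)]  # (temp degC, duty %)

def fan_duty(temp_c: float) -> float:
    """Piecewise-linear interpolation between the curve breakpoints."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return CURVE[-1][1]  # clamp at full duty above the last breakpoint

for t in (35, 55, 70, 90):
    print(f"{t} degC -> {fan_duty(t):.0f}% duty")
```

The useful property is that the curve is boring and predictable: you measure where the temperatures actually sit under load, then place the breakpoints around them.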

Verification

Tuning and verification

This is the part most builds skip. For me, this was the whole point.

  • CPU tuning: PBO + Curve Optimiser (per-core offsets) to improve efficiency and reduce heat while maintaining performance.
  • GPU tuning: Undervolt plus a sensible power limit and fan curve to keep hotspots under control.
  • Stability testing: OCCT (combined tests), FurMark (4K GPU load), and Cinebench 2024 baselines.

I treat this like shipping software: it's not "done" until it passes the gates.
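If you want the shape of that in code terms, here's a toy version of the gate idea; the thresholds and numbers are placeholders, not my measured results:

```python
# Toy version of the "gates" idea: each measurement must pass its threshold
# before the build counts as done. All numbers here are placeholders.

GATES = {
    "occt_errors":       lambda v: v == 0,   # zero errors in combined test
    "gpu_hotspot_max_c": lambda v: v <= 95,  # hotspot ceiling under FurMark
    "cpu_max_temp_c":    lambda v: v <= 90,  # sustained Cinebench temps
}

def run_gates(results: dict) -> bool:
    all_passed = True
    for name, check in GATES.items():
        passed = check(results[name])
        print(f"{name:18s} {'PASS' if passed else 'FAIL'}")
        all_passed = all_passed and passed
    return all_passed

# Placeholder numbers, purely for illustration:
done = run_gates({"occt_errors": 0, "gpu_hotspot_max_c": 88, "cpu_max_temp_c": 84})
print("ship it" if done else "keep tuning")
```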

Outcomes

Results that matter

The fun thing about a build like this is that it changes how you work.

  • Local models feel practical, not like a novelty.
  • Development workflows are faster and smoother.
  • Everything is easier to test because the machine itself is stable.

When a system is reliable, you stop thinking about the system and you start thinking about the work.

Takeaways

Lessons learned

01

A clear requirement beats a fancy part

Know what you're optimising for before you start shopping.

02

Cooling and power are part of performance

Thermals and stability matter as much as raw specs.

03

Testing is the difference between a build and a workstation

Verification turns a pile of parts into something you can trust.

04

Organisation is a force multiplier

Storage layout and clean tooling are boring, but they make everything else easier.

Get in touch

Questions about the build?

If you're curious about the hardware choices, the tuning process, or how I think about systems, feel free to reach out. I'm always happy to talk shop.
