In early 2023, I was pondering which computer to get next. For about a decade I'd been running on laptops and NUCs or SFF PCs pulled from e-waste piles after the computer I built as a teenager finally kicked the bucket. I wanted something that would feed my growing photography hobby, let me get nostalgic over some of the video games I used to play, and leave room for expanding into other areas, such as the emerging AI field. At the time, I was looking at the Mac Studio, which was appealing given its size and performance, but it would have locked me into macOS, limiting gaming and hardware flexibility. It did, however, anchor my budget at around $2k-$3k. With a budget in hand, the next most important question was: How many RGB LEDs are needed to make this the best computer possible?
Jumping ahead a bit: After the build was complete, I was talking to a co-worker whose son had recently built a computer. His approach turned out to be much different from the one outlined below: he purchased a few flagship items and pieced together the rest with budget buys and parts on hand. Who knew that a top-of-the-line processor (Ryzen 9 7950X3D) and graphics card (Radeon RX 7900 XTX) wouldn't overcome the limitations of a 10-year-old 7200RPM HDD? Needless to say, the son was disappointed with his build.
The specifications started with two seemingly obvious (but important) questions:
What this is: A personal computer made to my somewhat squishy specifications, tuned for sustained data throughput, not short bursts (think optimizing for encoding a movie file as opposed to the world's fastest NOP).
What this is not: A mass-produced product where every fraction of a cent is debated, nor is it a flagship demo where the top-of-the-line everything is showcased.
With those questions answered, the following ‘somewhat squishy’ requirements were formed:
Budget: $2k-$3k
Use-case: Lightroom/Photoshop, light video editing, light to moderate gaming, and possible AI exploration.
Optimization: Fast loading and sustained throughput.
Processor: Minimum 8 cores / 16 threads, 4.5GHz boost.
Memory: Minimum 16GB DDR5.
Video Memory: 4GB minimum (8GB+ preferred).
Internal Storage:
Fast Data Storage (PCIe 4.0): 2TB
Medium-Speed Storage (SATA III SSD): 4TB
Slow Storage (SATA III HDD): 8TB
Peripherals: USB (3.0+), Ethernet (1Gbps+), Bluetooth (5.1+).
Chipset Options: SATA RAID0/1
Repair/Upgrade: Ability to replace individual components without significant data loss and a manageable headache.
Sound Level: Quiet; not silent or whisper quiet, but generally lower noise than the average PC.
Size: Mid-size desktop. Micro ATX or Mini ITX. Not SFF or NUC.
Block Diagram of PC Architecture
While this may not be everyone’s first stop, I have a size constraint next to my desk and a noise constraint to consider. Because of this, I chose to look for the case first, since it determines the maximum dimensions of everything that has to fit inside (besides, if this choice turns out not to be workable, I can revisit and try again). After looking at quite a few cases, I settled on the Fractal Design Define 7 Mini, as its design takes airflow, noise, and external ports into account. In addition, I really liked the black brushed-aluminum look. This decision set the maximum dimensions for the motherboard, CPU cooler, and graphics card, as well as the number of SATA drives that will fit in the case.
AMD Ryzen uses a homogeneous core topology (all performance cores (P-cores)), whereas Intel 13th Gen uses a hybrid topology (P-cores and efficiency cores (E-cores)). My thought process was: Many interactive creative workloads are lightly threaded or scale poorly past a handful of performance cores; the hybrid architecture would allow the scheduler to preferentially allocate high-performance threads to P-cores and background tasks to E-cores. Offloading background tasks to E-cores prevents them from stealing frequency and thermal headroom from the P-cores, though this does depend on how well the scheduler allocates processor resources.
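The idea of steering background work away from P-cores can be sketched with CPU affinity. The sketch below is Linux-only for illustration (Windows exposes the same concept via `SetThreadAffinityMask`, and in practice the scheduler plus Intel Thread Director handles this automatically); the core numbering is a hypothetical layout, not a measured one.

```python
import os

# Hypothetical logical-CPU layout for illustration only: assume CPUs 0-15
# are P-core hyperthreads and 16-23 are E-cores. Real numbering varies by
# platform and should be checked (e.g. with `lscpu`).
P_CORES = set(range(0, 16))
E_CORES = set(range(16, 24))

def pin_to(cores):
    """Restrict the current process to the given logical CPUs (Linux).

    Intersects the request with the CPUs actually available so the call
    never asks for cores the machine doesn't have.
    """
    available = os.sched_getaffinity(0)   # CPUs this process may use
    target = cores & available            # ignore cores that don't exist here
    if target:
        os.sched_setaffinity(0, target)   # hint: keep this work off the P-cores
    return os.sched_getaffinity(0)
```

For example, a batch export or indexing job could call `pin_to(E_CORES)` at startup so foreground editing keeps the P-cores' thermal headroom.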
Price-wise, between comparable Intel and AMD processors, the Intel 13th-gen processor undercut comparable Ryzen 9 pricing. I was strongly leaning Team Blue, and then I found benchmark data specific to one of the use-cases, which showed Intel's hybrid architecture outperforming similarly priced Ryzen processors, reinforcing the architectural reasoning. Thanks to the data, I settled on the i7-13700K as a balance of performance and cost.
The chipset tends to be undervalued by enthusiasts. I'd already decided on Intel, so the options were the B660, B760, H770, Z690, and Z790. Only the Z-series chipsets allow adjusting the core clock frequency on "K" processors, and they offer more options for adjusting the operating voltage than the B and H parts.
Looking at Asus' line, I decided on the Prime over the TUF and ROG series, and eventually settled on the Prime Z790M-Plus. This board cost less than its TUF and ROG equivalents while still having a moderate ability to adjust the CPU's core operating voltage. I chose the Z790 over the Z690 as they were similarly priced, and the Z790 supports Intel 13th-gen processors without needing a BIOS update. While the Z790 offers additional PCIe lane flexibility over the Z690, the micro-ATX form factor limits how many slots can actually be utilized, so this was a marginal benefit rather than a deciding factor.
The Z790M-Plus supports an additional two PCIe 4.0 drives that are chipset-connected (via DMI), 4x SATA III drives, and Intel RST, which is used for the RAID configurations. The 10+1 stage VRM is adequate for sustained operation at Intel's default PL2 limits, particularly given the modest undervolt and the airflow configuration. While the micro-ATX board lacks WiFi and Bluetooth, it's about $100 less than the ATX variant that has them, and the gap can be remedied with a $30 module.
Off the bat, I decided on air cooling, as something about potentially leaking liquid directly on top of electronics makes me uneasy. Eventually, I chose the be quiet! Dark Rock Pro 4 for its quiet fans, sufficient thermal capacity to handle sustained loads near the processor's maximum turbo power, and overall cost. I also replaced the latching processor cage with the Thermalright CPU Contact Frame, which holds the processor to the motherboard with more even pressure; in theory, the processor sits flatter in the socket, providing more uniform contact with the heat sink. While I didn't validate this empirically, it was a $10-$15 addition that gave me some peace of mind. For thermal paste, I settled on Thermal Grizzly Kryonaut for a balance of cost and performance.
For case fans, figuring out static pressure vs. airflow was a rabbit hole. Eventually, I decided on Noctua NF-A12s and NF-A14s for the case fans. For the space under the graphics card, I included a PCI-slot exhaust fan to expel heat from directly under the GPU.
Another decision was which team to join. Given the requirement for potential AI experimentation, I chose Nvidia, as much of the AI ecosystem has been built around CUDA. The RTX 4060 initially looked like it met the requirements, but after further research, I settled on the RTX 4070 Eagle OC because forums and benchmarks showed a disproportionate performance increase relative to the cost.
Eventually, I decided on four logical drives: One for the OS and documents (C), one for program files and libraries (D), a cheaper but faster large drive for large project files (photos, videos, etc.; E), and a large, slow drive for local backup and redundancy (F).
C: 1x 1TB PCIe 4.0 SSD. Boot, temp-file, and document drive (OS plus documents: Word, Excel, etc.).
D: 1x 2TB PCIe 4.0 SSD. Program file data, project libraries, program caches, active scratch work, etc.
E: 2x 2TB SATA III SSD (RAID0, RST). Large data files (pictures, videos, etc.).
F: 2x 12TB SATA III HDD (RAID1, RST). Boot/program drive backups, document and library backups, and a mirror of the RAID0 drive.
C/D Drives: While there isn’t much of a speed advantage having the OS on one drive and program files on another anymore, I chose to keep this architecture for organizational and restore granularity, not performance reasons.
E Drive: Ideally, this would have been combined with the D drive using a 6-8TB PCIe SSD; however, that would be extremely cost-prohibitive. The cost-performance compromise is two SATA SSDs in a RAID0 configuration, almost doubling the sequential read/write speed of a single larger drive (for random I/O, the speed remains SATA-limited). This is the highest-risk element of the storage design, and the risk is mitigated by daily mirroring (see ‘Backup’ below).
F Drive: Two 7200RPM SATA HDDs in a RAID1 configuration that serve as space for backing up the Boot and Program drives, in addition to mirroring the RAID0 drive.
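The "almost doubling" claim for the E drive can be sanity-checked with back-of-the-envelope numbers. The figures below are typical vendor-sheet values and an assumed striping overhead, not measurements from this build:

```python
# Rough RAID0 sequential-throughput estimate (illustrative numbers only).
SATA_SSD_SEQ_MB_S = 550   # typical SATA III SSD sequential read, near the bus limit
RAID0_MEMBERS = 2         # two striped drives
OVERHEAD = 0.95           # assume ~5% loss to striping/controller overhead

raid0_seq = SATA_SSD_SEQ_MB_S * RAID0_MEMBERS * OVERHEAD
print(f"Estimated RAID0 sequential read: ~{raid0_seq:.0f} MB/s")
# Random I/O doesn't stripe as favorably: small reads land on one member
# at a time, so latency and IOPS stay near single-drive SATA figures.
```

So large sequential photo/video transfers see close to 2x, while small random accesses do not, which matches the compromise described above.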
RAM was nearly the last choice, and came down to ‘what do I have left in the budget?’ Although four DIMMs would have provided the same capacity, I opted for one DIMM per channel to avoid the signal-integrity penalties associated with 2DPC configurations. On modern DDR5 systems, fewer DIMMs per channel generally translate to higher stable speeds and better overall memory performance. While 16GB met the minimum requirement, it seemed reasonable to add more given Lightroom catalogs, video editing, and AI experimentation. The budget allowed an increase to 32GB 5600MHz.
DIMM A1: Empty
DIMM A2: 1x 16GB 5600MHz DDR5
DIMM B1: Empty
DIMM B2: 1x 16GB 5600MHz DDR5
While the Asus Z790 micro-ATX board is about $100 less than the ATX version, it lacks onboard WiFi and Bluetooth. This was remedied with a $30 AX210 M.2 WiFi/Bluetooth card (WiFi 6E / BT 5.3). While I don't necessarily need the WiFi, the card was cost-effective and came with two external 5dBi antennas (one for WiFi, one for Bluetooth) that mount to the back of the case in an empty PCI slot for better signal.
Based on all the parts, I estimated that, under full continuous load, the system would draw about 600W. I chose the Corsair RM850x 850W 80+ Gold for a balance of performance and cost (the 750W wasn't much cheaper, and the 1kW was a clear step up in price). This supply can output 850W continuously, leaving about a 30% margin when the system runs at its estimated maximum draw. This headroom matters for transients, where spikes can briefly exceed the estimated draw, and since the system rarely (if ever) runs at capacity, the effective headroom is even larger in practice.
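The margin arithmetic is simple enough to show directly (the 600W figure is my own estimate from the part specs, not a measurement):

```python
EST_FULL_LOAD_W = 600   # estimated worst-case continuous system draw
PSU_RATED_W = 850       # Corsair RM850x continuous rating

margin_w = PSU_RATED_W - EST_FULL_LOAD_W
margin_pct = margin_w / PSU_RATED_W * 100
print(f"Headroom: {margin_w} W ({margin_pct:.0f}% of rated capacity)")
# Transient GPU/CPU spikes eat into this 250 W cushion, which is why
# sizing to ~70% of the PSU rating at estimated full load is comfortable.
```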
The first step was adjusting the processor's core clock rate and operating voltages. To do this, I set all fans to run at 100% in the BIOS, enabled the Intel default PL1/PL2 power limits, then booted and started the stress-test software while monitoring each core's performance. Even with Intel's default PL1/PL2 limits enforced, the CPU reached its maximum operating temperature within a minute, throttling individual P-cores and E-cores to stay within temperature limits. The first adjustment was to slightly underclock and undervolt. Eventually, I settled on underclocking the P-core turbo frequency by 300MHz (to 4.9GHz) and the E-core turbo by 200MHz (to 4.0GHz), with a small -100mV offset. This allowed a 30-minute CPU stress test to complete without internal thermal throttling. Remember, we're tuning for a marathon, not the 60-meter dash, and even in a marathon the CPUs will have some duty cycle, which is why I decided 30 minutes was sufficient. Sidenote: By reducing the working and boost voltages, I may have dodged the Raptor Lake voltage issue that was traced to voltage overshoot and aggressive board defaults.
The next step was to check that the RAM was stable under the given XMP profile. Stress testing the memory indicated stable operation with the XMP profile enabled, and that's pretty much all I was concerned about at this point, so I moved on to tuning the fan speed curve.
The goal was to minimize fan noise at light and moderate loads. To do this, I dug out two thermocouples and my benchtop multimeters, embedding one thermocouple in the CPU heat sink and another in the GPU heat sink. For a light load, I recorded a browser use-case in Selenium and ran it in a loop while trying to find the minimum fan levels that brought the system temperature to equilibrium. For a moderate load, I put a few MP4s back-to-back in OpenShot, limited it to 2 P-cores, and encoded a 3-hour 4K video while adjusting the fan speeds to find equilibrium. With these two settings, I finalized the fan curve. The nice part is that at light load, only the CPU fan and one rear exhaust fan run, at a low level, and the case is nearly silent.
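A BIOS fan curve is just piecewise-linear interpolation between a handful of (temperature, duty) points. The sketch below shows the shape of the logic; the curve points are hypothetical placeholders, not the values I actually programmed:

```python
# Hypothetical fan curve: (temperature in °C, PWM duty in %).
# The two middle points correspond to the light- and moderate-load
# equilibrium tests; the endpoints clamp the curve.
CURVE = [(30, 20), (50, 30), (70, 60), (85, 100)]

def fan_duty(temp_c):
    """Return fan PWM duty (%) for a temperature, clamped at the endpoints."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    if temp_c >= CURVE[-1][0]:
        return CURVE[-1][1]
    # Linear interpolation within the bracketing segment.
    for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
        if t0 <= temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
```

The tuning process amounts to lowering the low-temperature points until the thermocouples stop settling at equilibrium, then backing off.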
There are a lot of backup options out there, which can be a rabbit hole in and of itself. Ultimately, I decided to go with a recommendation from a trusted colleague who has been building PCs and designing embedded architecture for longer than I've been alive.
The backup software is automatable, which makes the following backup tasks fairly easy:
The boot drive is automatically imaged on the first Monday of every month, then incrementally imaged twice a week for the duration of the month. The software will automatically clean up old images; right now I have it save the last two months of full+incremental images, and four months of full images before that.
There’s a shortcut that syncs documents from the Boot drive and libraries from the Data drive to the RAID0 drive, then mirrors the RAID0 drive to a folder on the RAID1 drive and shuts down or hibernates the computer; I use it at the end of the day.
Once a week, there’s a scheduled boot in the middle of the night that images the Data drive to the RAID1 drive, then mirror-syncs the RAID1 drive (boot drive images, data drive images, documents/library backup, and RAID0 drive backup) with my NAS (if connected).
What this means:
In the event of a Boot drive failure (or broken update) or Data drive failure, I’m out at most one day's worth of documents or edits stored in the libraries, a maximum of four days of tweaks to the OS, or one week of programs that were installed/uninstalled/updated.
In the event of a failure in the RAID0 drive(s), I’m out a maximum of one day’s work.
In the event both RAID1 drives crash, the loss is probably minimal other than about a day of rebuild time, as what’s stored on them should be recoverable from the other drives or the NAS.
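The mirror step used in the daily and weekly tasks above is conceptually a one-way sync: make the destination match the source, copying new or changed files and deleting extras. This is a minimal sketch of that idea, not the actual backup software, which also handles logging, VSS snapshots, retries, and verification:

```python
import filecmp
import shutil
from pathlib import Path

def mirror(src: Path, dst: Path):
    """One-way mirror: make dst an exact copy of src.

    Sketch only; assumes no file/directory name collisions between
    src and dst and no permission surprises.
    """
    dst.mkdir(parents=True, exist_ok=True)
    src_names = {p.name for p in src.iterdir()}
    # Remove anything in dst that no longer exists in src.
    for p in dst.iterdir():
        if p.name not in src_names:
            shutil.rmtree(p) if p.is_dir() else p.unlink()
    # Copy new or changed entries from src, recursing into directories.
    for p in src.iterdir():
        target = dst / p.name
        if p.is_dir():
            mirror(p, target)
        elif not target.exists() or not filecmp.cmp(p, target, shallow=False):
            shutil.copy2(p, target)  # copy2 preserves timestamps
```

Run nightly (e.g. `mirror(Path("E:/"), Path("F:/raid0-mirror"))` on the paths in this build), this caps RAID0 data loss at one day, which is exactly the guarantee described above.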
As it turns out, the amount of RGB LEDs needed for the best possible computer build is: Zero.
Overall, I'm pleased with how the build turned out. It's not a flagship demo unit by any means, but compared to a similarly priced off-the-shelf computer that wasn't purpose-built for my application, it exceeds my expectations. Given that I'm both the stakeholder and the customer, that's all that really matters.