One of the reasons I decided to splurge on all four available form factor options for the HP Elite 8300/6300 Pro computer family (and yes, you probably already knew I was eventually going to splurge on an MT, i.e., microtower, version of the 6300 Pro too, didn't you?) was their specification diversity:

  • available internal cavity area and length/width restrictions,
  • number and type of add-in card slots,
  • number and type of memory module slots,
  • number, type, and location of drive bays, etc.

That diversity provides an opportunity to explore a multiplicity of processor, memory, mass storage, graphics, and other configuration combinations, and indeed it has, as previous CPU-, DRAM-, and SSD-themed writeups have documented in detail.

This time it's graphics' turn in the spotlight. I'll begin with a visual comparison of the low profile (used in the SFF, i.e., small form factor, system option) and standard (for the CMT, i.e., convertible minitower, and MT, i.e., microtower, system form factors) bracket options for the MSI Radeon RX 560 4GT LP OC card that the Hackintosh community recommends when wrestling MacOS onto these systems, and which I've therefore installed in two of the three of them.

A tip o' the hat to MSI before continuing, by the way. One of the two graphics cards I bought was a factory refurbished unit from Newegg that only included a standard bracket, although the website didn't specify this constraint. An email to MSI’s customer support personnel resulted in the speedy no-cost shipment of a low profile bracket to me … and no, I didn't even need to tell them I was a “press guy”!

The other “interesting” limitation with these particular systems involves their use of the unconventional CFX form factor power supply. Thankfully, all of the HP Elite 8300/6300 Pro form factor options (save for the perhaps obvious exception of the diminutive USDT, which has no room for add-in card slots at all) feed 75 W of power to their PCI Express slots … apparently the SFF variant of the prior-generation 6200 Pro supplied only 25 W to its PCIe slot.

But the lack of an ATX-standard supplemental power feed (i.e., no auxiliary PCIe power connector) means that any graphics card I might want to install in these systems must consume no more than 75 W, which, as I soon learned, excluded a large percentage of candidates! Admittedly, as I’ve mentioned before, it’s possible to shoehorn an ATX PSU into the CMT system form factor, and I’ve got a Radeon RX 570 8 GB card sitting on the shelf in case I ever decide to attempt to actualize that particular aspiration. But for now, 75 W is the upper limit.
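
To make the screening process concrete, here's a minimal sketch (in Python) of the 75 W filter I was effectively applying to candidate cards. The board-power figures are approximate and vary by board partner and factory overclock, so treat them as illustrative rather than definitive:

```python
# Illustrative sketch only: approximate typical board power (in watts) for a
# few candidate cards; exact figures vary by board partner and factory
# overclock, so treat these values as ballpark numbers.
PCIE_SLOT_POWER_LIMIT_W = 75  # maximum power the PCIe slot itself can supply

candidates = {
    "NVIDIA GeForce GTX 1050 Ti": 75,
    "NVIDIA GeForce GTX 1650": 75,
    "AMD Radeon RX 560 (low profile)": 75,
    "AMD Radeon RX 570": 150,
    "AMD Radeon RX 580": 185,
}

for card, board_power_w in candidates.items():
    if board_power_w <= PCIE_SLOT_POWER_LIMIT_W:
        verdict = "OK on slot power alone"
    else:
        verdict = "needs a supplemental PCIe power connector"
    print(f"{card}: ~{board_power_w} W -> {verdict}")
```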

AMD and NVIDIA (in conjunction with their respective add-in board partners) are (at least for now) my sole supplier options. NVIDIA, for example, sells both the Pascal architecture-based GeForce GTX 1050 series and the newer Turing architecture-based GeForce GTX 1650, both of which are available in board implementations that stay within the 75 W power limit. I (unwisely, in retrospect) went with an open-box ZOTAC 4 GB GeForce GTX 1050 Ti mini video card:

Since, unlike its GTX 1650-based successor (shown above), it didn't take up two add-in-card bracket slots, I'd hoped I might be able to squeeze an eSATA bracket or something else svelte into the scant remaining space next to the fan assembly (it turns out I couldn't). The GTX 1050 Ti is a perfectly good GPU, mind you, but it's one architecture generation removed from the current state of the art. To clarify, the low-end GTX 1650 dispenses with some of the new architecture's advancements to save cost, specifically the ray tracing cores and tensor cores. But it still offers 17% more CUDA cores (896 vs. 768) and a 15-20% increase in (base and boost) clock speed … all at no incremental price versus its predecessor.
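
For the curious, the arithmetic behind those percentages is simple enough to show directly. The sketch below uses the published CUDA core counts plus NVIDIA's reference boost clocks; the clock figures are approximate (partner cards ship with varying factory clocks), so the combined number is a rough theoretical estimate, not a benchmark:

```python
# GTX 1050 Ti vs. GTX 1650: published CUDA core counts, plus approximate
# reference boost clocks in MHz (partner cards vary, so treat as ballpark).
cores_1050_ti, cores_1650 = 768, 896
boost_1050_ti_mhz, boost_1650_mhz = 1392, 1665

core_gain = cores_1650 / cores_1050_ti - 1            # ~16.7% more CUDA cores
clock_gain = boost_1650_mhz / boost_1050_ti_mhz - 1   # ~20% higher boost clock
combined = (1 + core_gain) * (1 + clock_gain) - 1     # naive theoretical uplift

print(f"Core-count gain:   {core_gain:.1%}")
print(f"Boost-clock gain:  {clock_gain:.1%}")
print(f"Combined estimate: {combined:.1%}")
```

The naive product overstates the real-world gap, of course; memory bandwidth and other bottlenecks narrow it in actual benchmarks.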

What about the AMD alternatives? Aye, there’s the rub. At the GTX 1650's launch, the AMD alternative closest to it in both price and performance was the earlier-mentioned Radeon RX 570, which consumes twice the power of the NVIDIA competitor (150 W vs. 75 W) and is therefore unusable in slot-power-only systems. AMD's PCIe power-friendly “little brother,” the Radeon RX 560 I showed you earlier, has half the stream processors of the Radeon RX 570, with predictable resultant performance degradation versus both the Radeon RX 570 and the NVIDIA GeForce GTX 1650.

Although the GTX 1650 runs at higher clock speeds than the Radeon RX 570, it still burns half the power. At least some of this discrepancy is likely due to manufacturing process differences; NVIDIA's chip is fabricated on TSMC's 12 nm lithography, while AMD's comes out of GlobalFoundries' 14 nm fab. But what this disparity fundamentally suggests to me is that AMD is likely compensating for foundational architectural shortcomings versus NVIDIA by cutting prices on products with comparatively much larger dies in order to maintain competitiveness and preserve market share, albeit with resultant profit margin “hits” (which obviously aren't sustainable over the long term).

So why aren't I using NVIDIA-based graphics cards in all of my systems? Aye, there's another rub. A longstanding feud between Apple and NVIDIA (which may or may not have something to do, at least in part, with Apple's desire to completely control its software ecosystem) has resulted in modern NVIDIA graphics technology being excluded from use in the last several MacOS versions. And even on prior-generation Apple operating systems, you had to go directly to NVIDIA for optimized drivers. Unless the relationship is repaired, I'll be perpetually stuck running MacOS 10.13 “High Sierra” on my NVIDIA-based Hackintosh, even though MacOS 10.15 “Catalina” is the latest and greatest. And that's a shame, frankly; Adobe's Creative Suite, for example, has effectively harnessed NVIDIA's CUDA for GPGPU acceleration for years.

AMD has been churning out GPUs based on its “Graphics Core Next” architecture since the beginning of 2012. In fairness, in recent years the company has (likely and rightly) been focusing the bulk of its development resources on catching up with Intel on CPUs. And also in fairness, products based on AMD's next-generation graphics architecture, RDNA, are now starting to roll out to rave reviews. But NVIDIA has invested in multiple graphics architecture generations over the intervening time span, so it'll be quite a challenge for AMD to catch up, and sustainably keep up, going forward:

Supportive and/or discordant thoughts, readers? Sound off in the comments!

Brian Dipert is Editor-in-Chief of the Embedded Vision Alliance, a Senior Analyst at BDTI, and Editor-in-Chief of InsideDSP, the company's online newsletter.

Also in this series: