Choosing the Perfect AI PC Components
AI PCs. So hot right now. It seems like the hype is everywhere you turn. By now, we're all familiar with generative AI chatbots like ChatGPT, but wrapping our organic-based brains around the real-world applications for AI is much more difficult. Can it really improve our lives, make us more productive, and solve the world's problems? Or have we just invented our own Skynet?

Those important quandaries aside, for users building AI models locally rather than working in the cloud, a more practical question emerges: what's the ideal set of AI PC components to minimize inference and analysis times? Read on to learn more.

Choosing a Processor for an AI PC

Most generative AI applications are heavily threaded and CPU intensive, meaning that, like many other workstation applications, they'll benefit from multiple cores. Where that core count should max out depends on your budget, but also on the complexity of your models. For simpler generative AI models, a 16 to 24 core CPU like an AMD Ryzen or 14th Gen Intel Core processor is likely sufficient, and some of the newer AMD and Intel architectures include built-in AI acceleration that helps these workloads along. But as model complexity increases, so do CPU requirements, making an AMD Threadripper workstation or even an EPYC workstation the best choice.
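
To get a feel for how core count maps onto real workloads, here's a minimal sketch, assuming PyTorch as the inference framework (the matrix sizes are arbitrary), that reads the core count the OS exposes, pins PyTorch's CPU thread pool to it, and runs a throwaway matrix multiply you can watch in your CPU utilization monitor.

```python
# Minimal sketch: inspect available cores and pin PyTorch's CPU thread pool.
# Assumes PyTorch is installed; the matrix sizes below are arbitrary.
import os

import torch

# Logical processors visible to the OS (includes SMT/Hyper-Threading siblings).
logical_cores = os.cpu_count()
print(f"Logical cores visible to the OS: {logical_cores}")

# PyTorch parallelizes individual ops across an intra-op thread pool;
# set it explicitly so heavily threaded kernels can use every core.
torch.set_num_threads(logical_cores)
print(f"PyTorch intra-op threads: {torch.get_num_threads()}")

# Quick CPU-only matmul to confirm the cores are actually being exercised.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)
c = a @ b
print("Matmul result shape:", tuple(c.shape))
```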

Choosing a GPU for an AI PC

Since introducing the first CUDA-enabled GPU in 2006, NVIDIA has been guiding us toward a future where the GPU is king. Now, with the proliferation of Tensor Cores and high-bandwidth memory (HBM), it seems that future is here. We could write a full post (or series of posts) just on utilizing GPUs to accelerate AI applications, but for our purposes, let's break down the three classes of GPU available and quickly discuss which is best for which user.

GeForce Graphics – These are the consumer-grade graphics cards typically marketed toward home users and gamers, but if your main concern is maxing out Tensor and CUDA cores per dollar without concern for double-precision computation, these cards are actually a fine option. They're the best choice for single-GPU towers and users without major performance requirements.

Professional Graphics – Formerly known as Quadro, NVIDIA's professional graphics cards provide additional scalability and official support for rackmount form factors. The most powerful cards in this stack, like the RTX 6000 Ada, offer double the computational cores of their GeForce counterparts and can be linked via NVLink for resource pooling. Ideal for higher-end AI models and computations.

Computational GPUs – Datacenter GPUs like the NVIDIA H100 are designed to do the heaviest lifting on the most massive models. Thermal and form-factor limitations make these enterprise-only solutions.

*Want the power of an H100 but can't support a datacenter or rackmount form factor? Check out the actively cooled NVIDIA A800, now available in our workstation-class desktops.
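
Whichever class of card ends up in your build, it's worth confirming that your software stack actually sees it and that the reduced-precision path (the one that engages Tensor Cores) works. Below is a minimal sketch, assuming PyTorch as the framework; the device index and matrix sizes are arbitrary.

```python
# Minimal sketch: verify a CUDA-capable GPU is visible, report its specs,
# and run a mixed-precision matmul (the mode that exercises Tensor Cores).
# Assumes PyTorch with CUDA support is installed.
import torch

if not torch.cuda.is_available():
    raise SystemExit("No CUDA-capable GPU detected; check drivers and CUDA install.")

device = torch.device("cuda:0")
props = torch.cuda.get_device_properties(device)
print(f"GPU: {props.name}")
print(f"VRAM: {props.total_memory / 1024**3:.1f} GB")
print(f"Compute capability: {props.major}.{props.minor}")

# autocast selects FP16 kernels where it is safe to do so, which is the
# path that keeps Tensor Cores busy on modern NVIDIA cards.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
with torch.autocast(device_type="cuda", dtype=torch.float16):
    c = a @ b
print("Mixed-precision matmul dtype:", c.dtype)  # expect torch.float16
```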

Other AI PC Components

CPU and GPU are clearly the most important AI components here, but don’t ignore the following:

Memory channels – We've discussed the importance of memory channels in previous posts, but they're especially notable in AI configurations. Eight-channel (Intel Xeon/Threadripper PRO) and even twelve-channel (AMD EPYC) memory will vastly accelerate computation, especially in very memory-dense configurations.
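
If you're curious what your platform actually delivers, the rough sketch below (NumPy assumed; a single-threaded copy won't saturate a fully populated multi-channel platform, so treat the number as a floor) times a large array copy and reports approximate bandwidth.

```python
# Rough sketch: approximate memory bandwidth from a single large array copy.
# Assumes NumPy; the buffer size is arbitrary and results are ballpark only.
import time

import numpy as np

src = np.ones(128 * 1024 * 1024, dtype=np.float32)  # ~512 MB of data
dst = np.empty_like(src)

start = time.perf_counter()
np.copyto(dst, src)
elapsed = time.perf_counter() - start

# The copy reads the source once and writes the destination once.
gb_moved = 2 * src.nbytes / 1024**3
print(f"Approximate copy bandwidth: {gb_moved / elapsed:.1f} GB/s")
```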

Storage Speed – The faster your PC can read from and write to storage, the faster your inference and analysis will be. If your chipset supports it, we recommend at least a PCIe Gen 4 NVMe drive, ideally enterprise-grade for additional durability.
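
A simple way to gut-check a drive is to time a large sequential read. The sketch below assumes a large existing file on the drive you care about (the path is a placeholder) and reports throughput in GB/s; note that OS caching will inflate repeat runs, so the first pass is the honest number.

```python
# Simple sketch: measure sequential read throughput from a large file.
# FILE_PATH is a placeholder; point it at a big model or dataset file.
import time

FILE_PATH = "path/to/large_model_or_dataset.bin"  # placeholder path
CHUNK = 16 * 1024 * 1024  # read in 16 MB chunks

total_bytes = 0
start = time.perf_counter()
with open(FILE_PATH, "rb") as f:
    while chunk := f.read(CHUNK):
        total_bytes += len(chunk)
elapsed = time.perf_counter() - start

gb = total_bytes / 1024**3
print(f"Read {gb:.2f} GB in {elapsed:.2f} s ({gb / elapsed:.2f} GB/s)")
```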

What is an ASIC?

This discussion wouldn't be complete without at least a mention of ASICs, or Application-Specific Integrated Circuits. At a high level, ASICs are purpose-built chips whose architecture is optimized for one specific task, such as crypto mining or AI. That optimization is extremely specialized: an ASIC built for generative AI may be incredibly efficient at that workload but unable to handle other AI applications, or in many cases even basic general-purpose computing. A generative AI user may therefore see incredible ROI by choosing an ASIC over a general-purpose CPU, but for that application only. As AI becomes more specialized and sophisticated in the coming years, expect ASICs to play a more prominent role in the industry.

Learn more about our AI PCs here or give our expert sales team a call at 804-419-0900 for assistance putting together your perfect configuration.

Josh Covington

Josh has been with Velocity Micro since 2007 in various Marketing, PR, and Sales-related roles. As the Director of Sales & Marketing, he is responsible for all Direct and Retail sales as well as Marketing activities. He enjoys Seinfeld reruns, the Atlanta Braves, and Beatles songs written by John, Paul, or George. Sorry, Ringo.
