For most of the past few years, HPC, general-purpose GPU computing, and deep learning have all primarily been an Nvidia play (or an Nvidia-Intel tussle). AMD generally hasn’t had the resources to launch a huge effort into these markets, even though its GCN GPUs were widely recognized as formidable compute platforms, particularly against Nvidia’s Kepler architecture. A recent deal Team Red inked with Google could help AMD establish a foothold in this emerging market and give its GPU business an important shot in the arm.
Beginning in 2017, AMD’s FirePro S9300 x2 server GPUs will be deployed to accelerate the performance of Google’s Compute Engine and Google Cloud Machine Learning. This is the server variant of the Radeon Pro Duo that AMD launched earlier this year: a dual-GPU card with two AMD Fiji GPUs, 8GB of HBM (4GB per GPU), 1TB/s of total memory bandwidth, and a 300W power draw. It’s not hard to see why Google might be interested, given Fiji’s compute horsepower, and 28nm hardware isn’t as outdated on the compute side of things as you might think. While Nvidia is now baking GP100 into some supercomputing deployments, its 28nm Kepler-derived products are still widely sold in this space.
“Graphics processors represent the best combination of performance and programmability for existing and emerging big data applications,” said Raja Koduri, senior vice president and chief architect, Radeon Technologies Group, AMD. “The adoption of AMD GPU technology in Google Cloud Platform is a validation of the progress AMD has made in GPU hardware and our Radeon Open Compute Platform, which is the only fully open source hyperscale GPU compute platform in the world today. We expect that our momentum in GPU computing will continue to accelerate with future hardware and software releases and advances in the ecosystem of middleware and libraries.”
AMD has also announced a deal with Chinese company Alibaba to provide GPUs for the e-commerce giant’s data centers. Google’s Compute Engine is a virtual machine provider that can scale up depending on customer needs, while its Cloud Machine Learning program allows customers to build machine learning models. These models can likewise be scaled up, analyzed with other Google services, and flexibly reconfigured for other use cases, such as transitioning a model from training to prediction.
AMD also announced an updated Radeon Open Compute Platform (ROCm) and has a variety of demos set up at SC16, including CUDA applications running on AMD hardware, Power8 servers with AMD FirePro GPUs, an ARM server system (not AMD’s own silicon) paired with a Radeon RX 460, and demonstrations of ray tracing and VR support for HPC applications. The big question for AMD is whether this win is a one-time deal, or the beginning of a new push into the HPC and data center market. This is one area where Nvidia has established a clear, unambiguous lead for itself — the company has been working in GPGPU since 2007, and while AMD has made some efforts to address these spaces, limited funds have required it to spend the bulk of its attention elsewhere.
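The “CUDA applications running on AMD hardware” demos rest on HIP, the portability layer in ROCm that lets CUDA-style kernel code compile for AMD GPUs (and still build for Nvidia hardware). As a rough illustration of what that looks like, here is a generic vector-add sketch, not code from AMD’s SC16 demos, showing how closely HIP mirrors the CUDA runtime API:

```cpp
// Minimal HIP vector-add sketch (illustrative only, not from AMD's demos).
// hipcc builds this for AMD GPUs; the kernel body is essentially identical
// to what the same program would look like written against CUDA.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // same thread indexing as CUDA
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    float *da, *db, *dc;
    hipMalloc((void**)&da, n * sizeof(float));      // hipMalloc mirrors cudaMalloc
    hipMalloc((void**)&db, n * sizeof(float));
    hipMalloc((void**)&dc, n * sizeof(float));
    hipMemcpy(da, a.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, b.data(), n * sizeof(float), hipMemcpyHostToDevice);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    // Portable launch macro; equivalent to a CUDA <<<blocks, threads>>> launch.
    hipLaunchKernelGGL(vector_add, dim3(blocks), dim3(threads), 0, 0, da, db, dc, n);
    hipMemcpy(c.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);

    printf("c[0] = %f\n", c[0]);                    // expect 3.0
    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}
```

AMD’s hipify tooling automates most of the mechanical CUDA-to-HIP renaming, which is why existing CUDA codebases are a natural target for these kinds of demos.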
Source: Extremetech