The demands of ever more sophisticated artificial intelligence have pushed classical compute hardware and software to their limits. But will the architectures being put in place to enable the next generation of AI also transform how we approach traditional HPC and general computing?
That is precisely what software-defined AI hardware startup SambaNova Systems is working towards with its Reconfigurable Dataflow Architecture (RDA), which reimagines how we can free AI from the constraints of traditional software and hardware.
This extended profile explains how SambaNova Systems’ founders have drawn on decades of experience at some of Silicon Valley’s most storied companies and institutions to pinpoint why traditional architectures are running out of road when it comes to advancing machine learning and AI, and how this led to the development of RDA.
It also explains how the application of RDA at institutions such as Lawrence Livermore National Laboratory, where it is helping to crack fundamental physics problems, points to how the architecture could redefine the way we approach classical compute problems.
Drivers for AI adoption include “delivering a better customer experience and helping employees to get better at their jobs,” says IDC. “This is reflected in the leading use cases for AI, which include automated customer service agents, sales process recommendation and automation, automated threat intelligence and prevention, and IT automation.” Some of the fastest-growing use cases include automated human resources and pharmaceutical research and discovery, the research firm adds.
However, the benefits of this technological revolution are spread very unevenly, according to Kunle Olukotun, co-founder and chief technologist of SambaNova Systems. “If you look at the people who are able to develop these sorts of systems, they’re in the hands of the few, the large companies that have the data, the computation, and the talent to develop these sorts of algorithms, and of course, they’ve used these systems to become the most valuable companies in the world – Google, Apple, Amazon, Facebook and the like,” he says.
The fundamental challenge lies in the sheer amount of compute power needed to build and train many of the more advanced models now being developed. The models are getting larger and larger, and for some applications, the volumes of data required to train them are also ballooning. This is exacerbated by the slowing of performance gains in successive generations of processor chips, a trend that some have labeled the end of Moore’s Law, according to SambaNova’s vice president of product, Marshall Choy.