Computer vision and image processing algorithms are increasingly common in commercial and research applications. These models perform complex computations on image data and require rapid processing to deliver analysis and results in real time.
In machine learning image processing and analysis, resolution is everything. Higher resolution enables more detailed, meaningful analysis and greater understanding: a high-resolution image contains more information and detail than a low-resolution image of the same subject.
Today, higher-resolution image processing requires significant computational capability. So much so, in fact, that training models on these high-resolution images has rendered current state-of-the-art technologies unusable.
When it comes to high-resolution processing, legacy architecture constraints are holding back research and technology advances across numerous use cases, including autonomous driving, oil and gas exploration, medical imaging, anti-viral research, astronomy, and more.
SURPASS THE LIMITS OF THE GPU
SambaNova Systems has been working with industry to develop an optimized solution for training computer vision models at ever-higher resolutions – without compromising accuracy. We take a “clean sheet,” complete-systems approach to enable native support for high-resolution images. Co-designing across our complete stack of software and hardware frees us from legacy GPU architecture constraints and legacy spatial partitioning methods.
ADDING MORE GPUS ISN’T THE ANSWER
If you consider images in the context of AI/ML training data, the richer and more expansive your training information (i.e., images), the more accurate your results can be.
Using a single GPU to train high-resolution computer vision models predictably results in “Out of Memory” errors. Clustering multiple GPUs to aggregate their memory, on the other hand, brings all the challenges of disaggregating the computational workflow across each individual GPU in the cluster.
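Simple back-of-the-envelope arithmetic shows why single-GPU training fails at high resolution. The sketch below estimates the memory of a single fp32 activation tensor for one convolutional layer; the layer width and image sizes are illustrative assumptions, not measurements of any particular model.

```python
# Rough activation-memory estimate for one conv layer, illustrating why a
# single GPU runs out of memory on high-resolution inputs. All sizes here
# are illustrative assumptions.

def activation_bytes(height, width, channels, batch=1, bytes_per_value=4):
    """Memory for one fp32 activation tensor of shape (batch, channels, H, W)."""
    return batch * channels * height * width * bytes_per_value

# A 512x512 image with 64 feature maps: ~67 MB per activation tensor.
small = activation_bytes(512, 512, 64)

# The same layer on a 50k x 50k (e.g. pathology-scale) image: ~640 GB for a
# single activation tensor -- far beyond the memory of any single GPU, before
# counting weights, gradients, optimizer state, or the other layers.
large = activation_bytes(50_000, 50_000, 64)

print(f"{small / 1e6:.0f} MB vs {large / 1e9:.0f} GB")  # prints "67 MB vs 640 GB"
```

Since every layer in a deep network holds activations like these for the backward pass, the shortfall compounds with depth.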
In this case, this is not merely a matter of clustering a few GPUs in a single system, but of aggregating hundreds, if not thousands, of GPU devices. In addition, conventional data parallel techniques that slice the input image into independent tiles deliver less accurate results than training on the original image.
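The tile-slicing approach described above can be sketched in a few lines. The tile size and image shape are illustrative; the point is that each tile is processed independently, so a model never sees pixels across a tile boundary, and objects or structures spanning that boundary lose context.

```python
import numpy as np

# Minimal sketch of conventional tile-based partitioning: slice an (H, W, C)
# image into independent, non-overlapping tiles. A model trained per-tile has
# no visibility into neighboring tiles, which is one source of the accuracy
# loss mentioned above. Sizes are illustrative assumptions.

def slice_into_tiles(image, tile):
    """Split an (H, W, C) array into non-overlapping tile x tile patches."""
    h, w, _ = image.shape
    tiles = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            tiles.append(image[y:y + tile, x:x + tile])
    return tiles

image = np.zeros((1024, 1024, 3), dtype=np.uint8)
tiles = slice_into_tiles(image, 256)
print(len(tiles))  # prints "16" -- 16 independent tiles from one image
```

Overlapping tiles or halo exchange can recover some cross-tile context, but at extra compute and communication cost, and still short of training on the full-resolution image.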
TRAIN LARGE COMPUTER VISION MODELS WITH HIGH-RESOLUTION IMAGES
Massive Data: A single SambaNova DataScale™