AI development powerhouse Nvidia is launching a new set of cloud-based application programming interfaces (APIs) designed to speed up the creation and deployment of specialized AI models in the medical imaging field. The company made the announcement this week at RSNA 2023, the annual radiology and medical imaging conference in Chicago.
Nvidia’s new offering is a cloud-native extension of its Monai framework. Monai, which stands for the Medical Open Network for AI, is the company’s open-source framework for medical imaging AI.
The overarching goal behind Monai is to make it easier for developers and platform providers to integrate AI into their medical imaging offerings using pretrained foundation models and healthcare-specific AI workflows. Over the past few years, providers have faced hiccups when deploying AI tools in the cloud — integrating healthcare AI at scale requires the cooperation of thousands of neural networks, and the industry has shown it wasn't exactly prepared for that.
At the center of the new cloud-based APIs is Nvidia’s VISTA-3D (Vision Imaging Segmentation and Annotation) foundation model. This model was trained on a dataset of annotated images from 3D CT scans from more than 4,000 patients, spanning a range of diseases and body parts. VISTA-3D is designed to speed up the creation of 3D segmentation masks for medical image analysis, as well as to enable developers to fine-tune their AI models based on new data and user feedback.
David Niewolny, Nvidia’s director of business development for healthcare, said in an interview that these new APIs have the potential to significantly accelerate AI developers’ work in the imaging space. When asked what types of AI tools Nvidia’s new APIs might help bring to market, he said developers will probably start by building models for tasks like image segmentation before diving into solutions for clinical decision support.
Segmentation involves dividing an image into meaningful regions, which is particularly useful in medical imaging for identifying and delineating structures or abnormalities. Developers can use Nvidia’s new APIs to create AI models for segmenting organs, tumors or other structures in medical images, which can aid clinicians in diagnosis, treatment planning and disease monitoring.
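To illustrate what a segmentation mask is at its simplest, here is a minimal sketch in Python using NumPy. The toy 4x4x4 volume, the intensity threshold, and the "bone-like" interpretation are all hypothetical stand-ins for illustration — real CT segmentation models like VISTA-3D learn these boundaries from annotated data rather than applying a fixed threshold.

```python
import numpy as np

# Hypothetical illustration: a tiny 3D "CT volume" of intensity values.
# Real CT volumes contain hundreds of slices; this is a 4x4x4 toy array.
rng = np.random.default_rng(seed=0)
volume = rng.integers(-1000, 1000, size=(4, 4, 4))  # Hounsfield-like units

# A segmentation mask labels each voxel: 1 if it belongs to the target
# structure (here, anything denser than an assumed threshold of 300,
# loosely "bone-like"), else 0. AI segmentation models output masks of
# this same shape, but learn where the boundaries are from training data.
mask = (volume > 300).astype(np.uint8)

print(mask.shape)  # one label per voxel: (4, 4, 4)
print(int(mask.sum()))  # count of voxels assigned to the structure
```

The point of the sketch is the data shape: a segmentation model consumes a 3D volume and emits a same-shaped array of per-voxel labels, which clinicians can then overlay on the original scan.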
Down the line, developers can use the APIs for more complex use cases like disease classification. For example, AI developers could use the APIs to build classification models for identifying specific diseases or conditions in medical images — such as classifying X-ray images for pneumonia detection or mammograms for breast cancer screening.
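The shape of a classification task can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: the synthetic "feature vectors," the two class centroids, and the nearest-centroid rule are toy substitutes for the deep networks a real pneumonia or breast cancer classifier would use — the sketch only shows the image-to-features-to-label flow.

```python
import numpy as np

# Hypothetical sketch: classify tiny synthetic "X-ray" feature vectors as
# normal (0) vs. pneumonia (1) with a nearest-centroid rule. Real systems
# learn features from labeled scans; this only shows the task's shape.
rng = np.random.default_rng(seed=1)
normal = rng.normal(loc=0.2, scale=0.05, size=(20, 8))     # class-0 samples
pneumonia = rng.normal(loc=0.8, scale=0.05, size=(20, 8))  # class-1 samples

# One "prototype" feature vector per class.
centroids = np.stack([normal.mean(axis=0), pneumonia.mean(axis=0)])

def classify(features):
    """Return 0 (normal) or 1 (pneumonia): index of the closest centroid."""
    dists = np.linalg.norm(centroids - features, axis=1)
    return int(np.argmin(dists))

# A new sample drawn near the pneumonia cluster is labeled 1.
print(classify(rng.normal(0.8, 0.05, size=8)))  # 1
```

Unlike segmentation, which emits a per-voxel mask, classification collapses an entire image into a single label — which is why Niewolny frames it as the harder, later step on the path toward clinical decision support.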
Creating medical imaging AI tools that are both efficient and cost-effective necessitates a domain-specific development foundation, Niewolny pointed out.
“What these new APIs end up doing is providing the healthcare development community with a very powerful set of tools — based on the community-based Monai — to build, deploy and scale these AI applications directly in the cloud. That cloud data piece is really a key foundational element. Everything is on the cloud now, even these AI development tools,” he said.
Flywheel, a medical imaging data and AI platform, has already begun using Nvidia’s new cloud APIs. Other companies — including medical image annotation company RedBrick AI and machine learning platform Dataiku — are slated to adopt the new offerings soon.
Nvidia wasn’t the only company at RSNA 2023 to announce an offering aimed at accelerating the adoption of generative AI in medical imaging, though. AI startup Hoppr announced that it teamed up with AWS to launch Grace, a B2B model designed to help application developers build better AI solutions for medical images.
Photo: Andrzej Wojcicki, Getty Images