Reducing bias in AI requires high-quality data


The rise of artificial intelligence (AI) in medical practice during the Covid-19 pandemic is impossible to ignore. Providers and organizations have adopted and expanded their AI capabilities to deal with the many new challenges brought by the crisis. From virtually screening patient symptoms to managing the influx of patients, AI-driven support tools are being used by clinicians at the point of care, now more than ever, to guide their decision-making.

The global pandemic also has highlighted serious racial disparities and underscored how important it is for health leaders to improve diagnostic accuracy and outcomes for traditionally disadvantaged populations. Whether the bias is implicit or explicit, racism in healthcare is an issue that requires deliberate solutions.

One way to reduce medical racism is through equitable representation of patients of color in our medical education curricula and training materials. Of all the specialties, training about bias in dermatology is perhaps the most critical, because the same diagnosis can look different on different skin colors. Potentially life-threatening infectious diseases can present with skin changes that are very subtle on darker skin. Clinicians must be trained to recognize that subtlety and those differences.

We must also ensure that the AI tools we use in the exam room are not perpetuating racism in medicine, and that the developers of AI-powered clinical decision tools are taking steps to ensure accuracy and equity. While AI technology can improve diagnosis, testing, and treatment decisions in medicine, we have a duty to ensure those decisions are equitable for all patients, especially patients of color.

So how do we ensure that the AI tools being developed are accurate for all? It boils down to data collection. Any information a doctor uses to make a decision in the exam room must also be incorporated into the AI technology, and that data needs to be high quality. If a model is trained on inaccurate or incomplete data, its output will be inaccurate, potentially resulting in patient harm.
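As a minimal sketch of what such a data audit might look like before training, assuming a pandas DataFrame of labeled cases with hypothetical column names (diagnosis, skin_tone), a developer could check for missing values and demographic coverage:

```python
import pandas as pd

# Hypothetical training data: one row per labeled case.
# The file name and column names are illustrative assumptions,
# not any real product's schema.
df = pd.read_csv("training_cases.csv")

# Flag columns with missing values; gaps in the data become
# blind spots in the trained model.
missing = df.isna().mean().sort_values(ascending=False)
print("Fraction missing per column:\n", missing[missing > 0])

# Check demographic coverage: a group that is rare in the training
# data will usually be diagnosed less accurately at inference time.
coverage = df["skin_tone"].value_counts(normalize=True)
print("Share of training cases per skin tone:\n", coverage)
```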

Some questions that developers of clinical decision support AI should be asking themselves: How good is this technology, not just overall but among different patient demographics? Are the results more accurate for the majority and less accurate for marginalized groups? Is this product trustworthy for all people?
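Answering these questions means reporting performance per subgroup, not just in aggregate. A simple sketch of that evaluation, again with assumed column names on hypothetical test predictions, could look like this:

```python
import pandas as pd
from sklearn.metrics import accuracy_score

# One row per test case, with the model's prediction attached.
# Column names (label, prediction, skin_tone) are assumptions.
results = pd.read_csv("test_predictions.csv")

# Overall accuracy can hide large gaps between groups, so also
# report accuracy separately for each demographic subgroup.
overall = accuracy_score(results["label"], results["prediction"])
print(f"Overall accuracy: {overall:.3f}")

by_group = results.groupby("skin_tone").apply(
    lambda g: accuracy_score(g["label"], g["prediction"])
)
print("Accuracy by skin tone:\n", by_group)

# A simple red flag: the worst-served group lags far behind the best.
print(f"Accuracy gap: {by_group.max() - by_group.min():.3f}")
```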

When assessing AI bias in dermatology, one should verify that a tool performs equally well across skin colors. Even a data set chosen to represent all groups will produce wrong answers if critical information is missing. To ensure that bias is not embedded in the technology, the machine learning algorithms must be trained on images of skin of color, not just images of light skin. A benefit of machine learning is that once a metric such as fairness is defined, it can be optimized for directly.
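One common way to act on that, sketched below with scikit-learn on synthetic placeholder data rather than any specific vendor's method, is to upweight cases from under-represented skin-tone groups during training so the optimizer cannot ignore them:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_sample_weight

# X: image features; y: diagnosis labels; groups: skin-tone category
# per case. All three are synthetic stand-ins for a real, clinically
# representative dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))
y = rng.integers(0, 2, size=1000)
groups = rng.choice(["I-II", "III-IV", "V-VI"], size=1000, p=[0.7, 0.2, 0.1])

# Weight each case inversely to its group's frequency, so the loss
# the model optimizes counts every skin-tone group, not just the majority.
weights = compute_sample_weight(class_weight="balanced", y=groups)

model = LogisticRegression(max_iter=1000)
model.fit(X, y, sample_weight=weights)
```

Reweighting is only one option; the same idea extends to stratified sampling or adding an explicit fairness term to the training objective.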

We cannot—and should not—create a separate technology for people of color. We need a holistic system to help us treat patients of all skin colors. It is our duty as clinicians to ensure that the AI-powered technology we use to guide our decision-making at the bedside is moving us toward racial equity.

