
How to mitigate algorithmic bias in healthcare


Artificial intelligence (AI) promises to revolutionize healthcare, with machine learning (ML) techniques that predict patient outcomes and personalize patient care, but the use of AI carries legal risks, including algorithmic bias, that can affect both outcomes and care.

AI seeks to enable computers to imitate intelligent human behavior, and ML, a subset of AI, involves systems that learn from data without relying on rules-based programming. ML techniques include, among others, supervised learning (a method of teaching ML algorithms to “learn” by example) and deep learning (a subset of ML that abstracts complex concepts through layers mimicking the neural networks of biological systems). A trained ML algorithm can, for example, estimate a patient's likelihood of developing heart disease by detecting relationships between a set of inputs, such as weight, height, and blood pressure, and that outcome.
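To make this concrete, the sketch below shows the supervised-learning workflow just described. It is a minimal illustration in Python using scikit-learn on entirely synthetic data; the features, the toy labeling rule, and the example patient are hypothetical, not a clinical model.

```python
# Minimal supervised-learning sketch (illustrative only; synthetic data,
# hypothetical features). Assumes scikit-learn and NumPy are installed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic patient records: weight (kg), height (cm), systolic blood pressure (mmHg)
X = np.column_stack([
    rng.normal(80, 15, 1000),   # weight
    rng.normal(170, 10, 1000),  # height
    rng.normal(125, 18, 1000),  # blood pressure
])
# Synthetic outcome: 1 = developed heart disease (toy rule plus noise, not clinical)
y = ((X[:, 0] / 80 + X[:, 2] / 125 + rng.normal(0, 0.3, 1000)) > 2.1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learn the relationship between inputs and the outcome from labeled examples
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Estimated probability of heart disease for a new (hypothetical) patient
print("risk estimate:", model.predict_proba([[90, 175, 140]])[0, 1])
```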

Despite its promise, the use of AI is not without legal risks. Among other risks, the development and use of ML algorithms could result in discrimination against individuals on the basis of a protected class, a potential violation of anti-discrimination and other laws. Such algorithmic bias occurs when an ML algorithm makes decisions that treat similarly situated individuals differently where there is no justification for such differences, regardless of intent. Absent strong policies and procedures to prevent and mitigate bias throughout the life cycle of an ML algorithm, it is possible that existing human biases can be embedded into the ML algorithm with potentially serious consequences, particularly in the healthcare context, where life and death decisions are being made.

In this article, we discuss the potential for bias in healthcare algorithms, the current recommended best practices for mitigating algorithmic bias, and why those best practices often fall short.

Algorithmic Bias in Healthcare
In one study, an algorithm used by UnitedHealth to predict which patients would require extra medical care favored white patients over black patients, bumping up the white patients in the queue for special treatments over sicker black patients. This motivated the New York Department of Financial Services (DFS) and Department of Health (DOH) to write a letter to UnitedHealth inquiring about this alleged bias. Race was not a factor in the algorithm’s decision-making, but race correlated with other factors that affected the outcome. The lead researcher of this study stated that the “algorithm’s skew sprang from the way it used health costs as a proxy for a person’s care requirements, making its predictions reflect economic inequality as much as health needs.” The DFS and DOH were particularly troubled by the algorithm’s reliance on historical spending to evaluate future healthcare needs, arguing that utilizing medical history in the algorithm, including healthcare expenditures, is unlikely to reflect the true medical needs of black patients because they historically have had less access to, and therefore less opportunity to receive and pay for, medical treatment.
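The proxy-label problem the researchers describe can be illustrated with a small sketch. In the hypothetical Python snippet below (synthetic data and invented column names; not the algorithm that was studied), the only difference between the two models is the training target: one learns from historical spending, the other from a direct clinical measure of need, and a model trained on spending will reproduce whatever inequality is embedded in that spending.

```python
# Illustrative sketch of the proxy-label problem (synthetic data,
# hypothetical column names; not the actual algorithm studied).
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({
    "age":                  [65, 70, 66, 71, 64, 69],
    "chronic_conditions":   [2, 3, 2, 3, 1, 2],
    # Spending is depressed for patients with historically less access to care
    "historical_cost":      [8000, 4500, 8200, 4300, 5000, 4100],
    # A direct (hypothetical) clinical measure of how much care is needed
    "clinical_need_score":  [0.55, 0.80, 0.57, 0.82, 0.40, 0.61],
})
features = df[["age", "chronic_conditions"]]

# Model A: target is spending -- learns "who received and paid for care"
model_cost = LinearRegression().fit(features, df["historical_cost"])

# Model B: target is clinical need -- closer to "who needs care"
model_need = LinearRegression().fit(features, df["clinical_need_score"])
```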

At the time of writing this article, AI is being utilized in the fight against the Covid-19 pandemic, including to triage patients and expedite the discovery of a vaccine. For example, researchers have developed an AI-powered tool that predicts with 70% to 80% accuracy which newly infected Covid-19 patients are likely to develop severe lung disease. While these advances may benefit many patients, they do not resolve the concerns relating to algorithmic bias. Using AI to triage Covid-19 patients based on symptoms and preexisting conditions can perpetuate the well-researched and well-documented tendency to discount the pain and symptoms of people of color and women. Data on Covid-19 already shows disparities based on race and socioeconomic status, highlighting the need to develop and deploy effective strategies to reduce the risk of bias and help ensure AI tools triage patients appropriately based on their symptoms.

Current Best Practices for Combating Algorithmic Bias
Current best practices for combating algorithmic bias focus on avoiding the introduction of bias at each stage of the ML algorithm’s development. The Federal Trade Commission’s 2016 report entitled “Big Data: A Tool for Inclusion or Exclusion?” takes this approach and encourages companies to, among other things, consider four questions:

  1. How representative is the data set? If data sets are missing information from certain populations, take appropriate steps to address the problem (a simple checking sketch follows this list). This is more simply known as the “garbage in, garbage out” problem.
  2. Does your data model account for biases? Ensure that hidden bias is not having an unintended impact on certain populations.
  3. How accurate are your predictions based on big data? Correlation is not causation. Balance the risk of using the results from big data, especially where policies could negatively affect certain populations. Consider human oversight for important decisions, such as those impacting healthcare.
  4. Does your reliance on big data raise ethical or fairness concerns? Consider using big data to advance opportunities for underrepresented populations.
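The first two questions lend themselves to simple, automatable checks. The Python sketch below (hypothetical column names; assumes pandas and scikit-learn) illustrates one way to report how well each subgroup is represented in a data set and to compare model accuracy across subgroups.

```python
# Hedged sketch of basic checks for questions 1 and 2: subgroup
# representation in the data and per-subgroup model accuracy.
# The column names ("group", "label", "prediction") are hypothetical.
import pandas as pd
from sklearn.metrics import accuracy_score

def representation_report(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Share of rows contributed by each subgroup in the data set."""
    return df[group_col].value_counts(normalize=True)

def subgroup_accuracy(df: pd.DataFrame, group_col: str,
                      y_true_col: str, y_pred_col: str) -> pd.Series:
    """Model accuracy computed separately for each subgroup."""
    return df.groupby(group_col).apply(
        lambda g: accuracy_score(g[y_true_col], g[y_pred_col])
    )

# Example usage on a hypothetical scored data set:
# print(representation_report(scored_df, "group"))
# print(subgroup_accuracy(scored_df, "group", "label", "prediction"))
```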

Challenges in Eliminating Algorithmic Bias
Although these recommended best practices seem obvious and straightforward, implementing them is easier said than done. Data scientists cannot easily remove biases that human beings infuse, often unconsciously, into training data and, therefore, the ML algorithm. There are few practical solutions to eliminate such unintended bias from an ML algorithm without impairing its efficacy.

Part of the challenge of eliminating algorithmic bias is that bias may be introduced into an ML algorithm at many different points in the development cycle, and eliminating bias requires constant vigilance throughout the development and deployment of the ML algorithm. For example, training data can be infected by historical bias (bias already existing in the world that is reflected in the data collection process, even with perfect sampling and feature selection), representation bias (bias resulting from how the relevant population is defined and sampled), measurement bias (bias resulting from the way features are selected and measured), and coding bias (the bias introduced by the people developing the algorithms).

Even when care is taken to root out bias from data sets by not relying on protected characteristics (e.g., race, religion, or national origin), other variables may still function as proxies for protected classes. For example, zip code or primary language may correlate closely with race. While it may seem logical to “blind” the ML algorithm to protected characteristics by omitting such variables from the training data, this mechanical blindness may have harmful consequences. For example, because Covid-19 appears to affect men and women differently, an algorithm trained on gender-blind data may be less effective than one trained on data sets that include information on gender.
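One practical step is to test whether seemingly neutral features can predict a protected attribute. The hedged Python sketch below (hypothetical column names and data) estimates how well each candidate feature predicts the protected attribute; a high score is a signal for human review, not proof that the feature is a proxy.

```python
# Hedged sketch: flag features that may act as proxies for a protected
# attribute by checking how well each one predicts it on its own.
# Hypothetical column names; real proxy analysis also needs domain review.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def proxy_scores(df: pd.DataFrame, protected_col: str, candidate_cols: list) -> dict:
    scores = {}
    for col in candidate_cols:
        # One-hot encode categorical candidates such as zip code or language
        X = pd.get_dummies(df[[col]], drop_first=True)
        clf = RandomForestClassifier(n_estimators=50, random_state=0)
        # High cross-validated accuracy at predicting the protected attribute
        # suggests the feature may function as a proxy for it
        scores[col] = cross_val_score(clf, X, df[protected_col], cv=3).mean()
    return scores

# Example usage (hypothetical dataframe and columns):
# print(proxy_scores(patients_df, "race", ["zip_code", "language", "income"]))
```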

Since it will often be challenging to eliminate bias from the training data for the reasons discussed above, emerging tools such as adversarial debiasing and synthetic data generation are being developed to help address the problem; employing these tools, however, may raise new legal concerns. Because the data scientists who develop ML algorithms may not be attuned to the legal considerations of algorithmic bias, both developers and users of ML algorithms should partner closely with their legal teams to mitigate potential legal challenges arising from developing and/or using ML algorithms, particularly when sensitive data, like healthcare data, is involved.
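As an illustration of the adversarial debiasing idea mentioned above, the following is a minimal from-scratch sketch, assuming PyTorch and hypothetical input tensors; it is a simplified teaching example, not a production implementation. A predictor learns the clinical task while an adversary tries to recover the protected attribute from the predictor's output, and the predictor is penalized whenever the adversary succeeds.

```python
# Simplified sketch of adversarial debiasing (assumes PyTorch).
# X: features [N, d]; y: outcome labels [N, 1] in {0, 1};
# z: protected attribute [N, 1] in {0, 1} -- all hypothetical, caller-supplied.
import torch
import torch.nn as nn

def train_debiased(X, y, z, epochs=200, alpha=1.0):
    predictor = nn.Sequential(nn.Linear(X.shape[1], 16), nn.ReLU(), nn.Linear(16, 1))
    adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
    opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-2)
    opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
    bce = nn.BCEWithLogitsLoss()

    for _ in range(epochs):
        # 1) Adversary step: learn to predict the protected attribute z
        #    from the predictor's (detached) output.
        logits = predictor(X).detach()
        opt_a.zero_grad()
        adv_loss = bce(adversary(logits), z)
        adv_loss.backward()
        opt_a.step()

        # 2) Predictor step: fit the outcome y while confusing the adversary;
        #    subtracting the adversary's loss pushes the predictor's output
        #    to carry less information about z.
        opt_p.zero_grad()
        logits = predictor(X)
        loss = bce(logits, y) - alpha * bce(adversary(logits), z)
        loss.backward()
        opt_p.step()
    return predictor
```

The weight alpha trades off task performance against how aggressively information about the protected attribute is removed, which is one reason such techniques require careful validation before deployment.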

