
MIT Engineers Create AI That Can Identify One’s Race Based on X-Ray Images Alone
In a new study in The Lancet Digital Health, an international team of engineers and scientists describes how it used a deep-learning AI, one whose inner workings remain a mystery even to its creators, to identify a person’s race based solely on X-ray images of their body.
Examples of so-called “deep learning” artificial intelligence (AI) programs abound and, in many regards, continue to grow more disturbing. In the latest intersection between nascent silicon-brain intelligence and potential societal harms, an international team of engineers and scientists says it has created an AI able to accurately predict medical patients’ race based on X-ray images alone. And the team isn’t even sure how the AI pulls off its trick.
The team of researchers, led in part by Leo Anthony Celi, a principal research scientist at MIT’s Institute for Medical Engineering and Science (IMES) and associate professor of medicine at Harvard Medical School, outlined the workings of its AI in a paper published in The Lancet Digital Health. In the paper, the team described how it used both private and public X-ray datasets (including chest X-rays, limb X-rays, chest CT scans, and mammograms) to “train” its AI program to label a given image as coming from a patient of “white, Black, or Asian” race.
Although the X-rays themselves contained no explicit mention of patients’ races, the researchers were able to train their AI on the images, using patients’ self-reported race to verify the program’s accuracy, and then test it against subsets of unseen images, including non-chest X-rays from several other body locations.
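For readers curious what such a setup looks like in practice, the following is a minimal sketch, not the authors’ actual code, of fine-tuning an ImageNet-pretrained convolutional network to predict a three-class self-reported race label from X-ray images. The directory layout, image size, hyperparameters, and choice of ResNet-34 are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' code) of the kind of training setup
# described above: fine-tune an ImageNet-pretrained ResNet to predict a
# three-class self-reported race label from X-ray images.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical layout: xray_train/<label>/*.png with label in {asian, black, white}
train_set = datasets.ImageFolder("xray_train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Replace the final classification layer with a 3-way head.
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 3)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # illustrative number of epochs
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Converting the single-channel X-rays to three channels simply lets the pretrained network’s first layer be reused unchanged.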
As for results, the researchers found their AI models could predict race from medical images with “high performance” across multiple imaging modalities. The researchers also reported “the ability of deep models to predict race was generalized across different clinical environments, medical imaging modalities, and patient populations, suggesting that these models do not rely on local idiosyncratic differences in how imaging studies are conducted for patients with different racial identities.”
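“High performance” in studies of this kind is typically reported as an area under the ROC curve computed separately on each held-out dataset. The snippet below is a hedged sketch of that sort of cross-dataset check, reusing the hypothetical model from the previous sketch; the dataset names and the one-vs-rest AUC choice are illustrative assumptions rather than details from the paper.

```python
# Sketch of a cross-dataset check, assuming the trained `model` and several
# held-out DataLoaders; dataset names and the one-vs-rest ROC AUC are
# illustrative assumptions, not details taken from the paper.
import numpy as np
import torch
from sklearn.metrics import roc_auc_score

@torch.no_grad()
def dataset_auc(model, loader):
    """One-vs-rest ROC AUC of the 3-class predictions on one test set."""
    model.eval()
    scores, labels = [], []
    for images, y in loader:
        scores.append(torch.softmax(model(images), dim=1).numpy())
        labels.append(y.numpy())
    return roc_auc_score(np.concatenate(labels),
                         np.concatenate(scores),
                         multi_class="ovr")

# Hypothetical usage, one held-out loader per imaging modality:
# for name, loader in {"chest_xray": chest_loader,
#                      "limb_xray": limb_loader,
#                      "mammogram": mammo_loader}.items():
#     print(name, dataset_auc(model, loader))
```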
Celi and colleagues attempted to explain their AI’s ability to distinguish between races based on X-ray images in several ways. The team considered differences in physical characteristics between racial groups (such as body physique and breast density), disease distribution (previous studies have reportedly shown Black patients have a greater incidence of health problems like cardiac disease), location-specific or tissue-specific differences, and even the effects of societal bias and environmental stress. However, none of these explanations accounted for how the AI actually makes its predictions.
A new study has found that artificial intelligence can identify patients’ self-reported race from medical images that contain no indicators detectable by human experts. https://t.co/go0Q6DSuW6 pic.twitter.com/FWThVucu98
— Massachusetts Institute of Technology (MIT) (@MIT) May 23, 2022
“These results were initially confusing, because the members of our research team could not come anywhere close to identifying a good proxy for this task,” paper co-author Marzyeh Ghassemi, an assistant professor in the MIT Department of Electrical Engineering and Computer Science, said in an MIT press release. “Even when you filter medical images past where the images are recognizable as medical images at all, deep models maintain a very high performance,” Ghassemi added.
“We think that… [algorithms in a clinical setting] are only looking at vital signs or laboratory tests, but it’s possible they’re also looking at your race, ethnicity, sex, whether you’re incarcerated or not—even if all of that information is hidden,” Celi added in the MIT press release. “This paper should make us pause and truly reconsider whether we are ready to bring AI to the bedside.”
Very proud that this paper is out in @LancetDigitalH. Our work highlights the importance of understanding how we use deep neural models in practice. These systems may see things we cannot, e.g., patient self-reported race from a medical image, and this is not always desirable. https://t.co/2rIZ9GJTyn
— Marzyeh (@MarzyehGhassemi) May 12, 2022
Indeed, many of the researchers’ comments and conclusions regarding this research center not on the efficacy of the AI’s ability to identify a person’s race based on X-rays, but rather on the morality of such an ability.
Even when filtered images are degraded past the point of being recognizable as medical images at all, Ghassemi noted, deep models maintain very high performance. “That is concerning because superhuman capacities are generally much more difficult to control, regulate, and prevent from harming people,” she said in the press release. On Twitter, she added that “These systems may see things we cannot, e.g., patient self-reported race from a medical image, and this is not always desirable.”
Perhaps the most troubling, and in some ways encouraging, finding of the study is that AI can accurately predict self-reported race “even from corrupted, cropped, and noised medical images.” In fact, the researchers report the results of their “low-pass filter” and “high-pass filter” experiments with the AI “suggest features relevant to the recognition of racial identity were present throughout the image frequency spectrum.” In other words, however the AI is able to tell somebody’s race from their X-rays, its accuracy does not hinge on any one band of spatial frequencies, neither the fine, rapidly changing detail nor the coarse, slowly varying structure alone. Indeed, the authors report their models trained on high-pass filtered images “maintained performance well beyond the point that the degraded images contained no recognizable structures.” To the human coauthors and radiologists involved in the study, “it was not clear that the image was an X-ray at all.”
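To make the “low-pass filter” and “high-pass filter” experiments concrete, here is a rough, hypothetical sketch of that style of test: blur an image to keep only coarse structure (low-pass), subtract the blur from the original to keep only fine detail (high-pass), and check whether a trained classifier’s accuracy survives either degradation. A model that performs well on both versions is drawing on cues spread across the frequency spectrum. The `model.predict` call and filter strengths below are placeholders, not the authors’ implementation.

```python
# Rough sketch of a low-pass / high-pass degradation test, assuming a trained
# classifier `model` and X-ray images as NumPy arrays; the `predict` call and
# filter strengths are placeholders, not the authors' implementation.
import numpy as np
from scipy.ndimage import gaussian_filter

def low_pass(image: np.ndarray, sigma: float) -> np.ndarray:
    """Keep only coarse structure by blurring away fine detail."""
    return gaussian_filter(image, sigma=sigma)

def high_pass(image: np.ndarray, sigma: float) -> np.ndarray:
    """Keep only fine detail by subtracting the blurred (low-pass) image."""
    return image - gaussian_filter(image, sigma=sigma)

def filtered_accuracy(model, images, labels, sigma: float) -> dict:
    """Accuracy on low-pass and high-pass versions of the same images."""
    results = {}
    for name, filt in (("low_pass", low_pass), ("high_pass", high_pass)):
        degraded = np.stack([filt(img, sigma) for img in images])
        preds = model.predict(degraded)  # hypothetical predict() interface
        results[name] = float(np.mean(preds == labels))
    return results

# Sweep filter strengths and watch whether accuracy collapses:
# for sigma in (1, 2, 4, 8, 16):
#     print(sigma, filtered_accuracy(model, images, labels, sigma))
```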
The other trend we observe is where algorithm prediction scores perform better than traditionally used scores – eg for breast imaging – https://t.co/pzkqb3c05W from @BarzilayRegina and @YalaTweets or work by @oziadias on osteoarthritis- https://t.co/rpDFI2cc8D
— Judy Gichoya (@judywawira) May 12, 2022
In another tweet, co-author Judy Gichoya noted that related research shows how an AI program like this one can still be useful despite its potential downsides, for example by helping physicians better predict breast cancer risk from traditional mammograms.
The researchers’ study was funded, in part, by the US National Science Foundation, the Taiwan Ministry of Science and Technology, and the National Institutes of Health (NIH).