After overhearing a conversation about my company, my 5-year-old niece asked between bites of Frosted Flakes, “What is A I M L?” I thought about flexing my intellectual muscle by explaining our company’s patent-pending method of using artificial intelligence and machine learning to determine which community members are likely to participate in a clinical trial, and highlighting that our technology ensures that people who have historically been left out of the process are aware that clinical trials can be potential treatment options. But something told me that a more concise answer, one an actual kindergartener could understand, was warranted.
I replied, “AI/ML is a set of instructions for solving a problem, and the instructions include the ability to change themselves to get better at solving the problem.” I paused to see whether her response would validate my answer. “So A I M L is like the Hulk. It gets stronger and stronger and stronger,” she replied in a way that let me know she understood, and that my explanation could have been even simpler.
“You got it!” I said. My niece then asked, “Is A I M L a hero or a villain?” Her follow-up question was unexpected and made me pause to think about AI/ML in the realm of healthcare.
AI/ML holds the promise of helping practitioners streamline tasks, improve operational efficiency, and simplify complex procedures. These benefits should lead to better-designed interventions, improved prediction of patient outcomes, and better allocation of resources. For example, Google’s Cloud Healthcare AI solution applies machine learning to data from users’ electronic health records, creating insights that help physicians make more informed clinical decisions. Google worked with the University of California, Stanford University, and the University of Chicago to develop an AI system that predicts the outcomes of hospital visits in order to prevent readmissions and shorten the time patients must stay in hospitals. Clearly, AI/ML took the form of a hero in this case.
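To make the general idea concrete, here is a minimal, hedged sketch of outcome prediction from EHR-style features. It is not Google’s system or pipeline; the features, the synthetic data, and the scikit-learn model are assumptions chosen only to illustrate how a readmission-risk score might be trained and checked.

```python
# A generic sketch of predicting a hospital outcome (here, 30-day readmission)
# from EHR-derived features. All data and features are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5_000
X = np.column_stack([
    rng.integers(0, 10, n),          # prior admissions in the last year
    rng.integers(18, 95, n),         # age
    rng.integers(0, 8, n),           # number of chronic conditions
    rng.integers(1, 30, n),          # length of current stay (days)
])
# Synthetic label: readmission risk rises with prior admissions and comorbidities.
logit = -3 + 0.4 * X[:, 0] + 0.3 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]
print("validation AUC:", round(roc_auc_score(y_te, risk), 3))
# Patients with the highest predicted risk could be prioritized for discharge
# planning and follow-up -- the kind of decision support described above.
```

Real systems use far richer data and models, but the workflow — learn from historical records, score new patients, validate before acting — is the same.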
However, studies have found that AI/ML can also act as a villain by introducing algorithmic bias that perpetuates healthcare disparities. When artificial intelligence relies on algorithms that reflect statistical or social bias, significant harm can result. For example, when an AI system used healthcare cost as a proxy for health needs, it falsely labeled Black patients as healthier than equally sick White patients, simply because less money had been spent on their care.
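The mechanism is easy to reproduce in a toy simulation. The sketch below is not the published study’s data or code; it simply assumes, for illustration, that one group generates less spending at the same level of illness and shows how a cost-trained score then under-selects that group.

```python
# Illustrative simulation of how using healthcare cost as a proxy label for
# health need can encode racial bias. All numbers here are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000

# Two groups with identical underlying illness burden (chronic-condition count).
group = rng.integers(0, 2, n)                      # 0 = group A, 1 = group B
illness = rng.poisson(3, n)

# Assumption for the sketch: group B generates ~30% less spending at the same
# level of illness (e.g., due to unequal access), so cost understates its need.
spending_rate = np.where(group == 1, 0.7, 1.0)
cost = illness * spending_rate * 1000 + rng.normal(0, 300, n)

# Train a model to predict cost (the proxy) from prior-utilization features.
X = np.column_stack([cost + rng.normal(0, 200, n)])   # noisy prior-year cost
risk_score = LinearRegression().fit(X, cost).predict(X)

# Patients above the 90th-percentile score get referred to extra care programs.
flagged = risk_score >= np.quantile(risk_score, 0.9)
for g, name in [(0, "group A"), (1, "group B")]:
    print(name,
          "share flagged:", round(flagged[group == g].mean(), 3),
          "| mean illness if flagged:", round(illness[flagged & (group == g)].mean(), 2))
# Despite identical illness, group B is flagged less often, and its flagged
# patients must be sicker to reach the same score -- the pattern reported when
# cost stood in for health need.
```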
Most AI algorithms need large datasets to learn from, but certain subgroups of people have a long history of being absent from or misrepresented in existing biomedical datasets, creating statistical bias. We faced this dilemma when developing our company’s algorithmic models for identifying likely participants for clinical trials. Historically, most clinical trial participants have been White males within a limited age range; therefore, the data sets available to train our models were homogeneous, reflecting only those who had participated in the past. Our company’s vision is health equity through inclusive research, but if we relied exclusively on the trained models, our platform would fail to deliver benefits to our communities of color and would exacerbate the lack of clinical trial diversity.
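The kind of check that surfaces this problem can be sketched in a few lines. The data, feature names, and model below are hypothetical stand-ins, not our production pipeline; the point is only the audit pattern: look at who the training data describes, then look at whom the trained model would surface.

```python
# A minimal, synthetic sketch of how a model trained only on historical trial
# participation can end up recommending mostly the people who were already
# over-represented. Features and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 10_000
race = rng.choice(["White", "Black", "Hispanic", "Asian"], n, p=[0.75, 0.10, 0.10, 0.05])
age = rng.integers(25, 70, n)
# Historical outreach reached White patients far more often (the statistical bias).
prior_contact = ((race == "White") & (rng.random(n) < 0.7)) | (rng.random(n) < 0.1)
enrolled = (prior_contact & (rng.random(n) < 0.6)).astype(int)

df = pd.DataFrame({"race": race, "age": age,
                   "prior_contact": prior_contact.astype(int), "enrolled": enrolled})

# Audit 1: who is the training data actually about?
print(df["race"].value_counts(normalize=True).round(3))

# Audit 2: whom would the trained model surface as "likely participants"?
model = RandomForestClassifier(random_state=0).fit(df[["age", "prior_contact"]], df["enrolled"])
df["flagged"] = model.predict(df[["age", "prior_contact"]])
print(df.groupby("race")["flagged"].mean().round(3))
# The flag rate mirrors historical outreach, not underlying eligibility --
# exactly why relying on the trained models alone would not serve our vision.
```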
This bias can also appear when evaluating “off-the-shelf” predictive software for a specific problem. Our company is exploring the use of conversational audio and video AI interactions to better engage, educate, and empower our communities of color about their health. However, these conversational AI models have been developed primarily in Western American English, creating clear challenges: the conversational AI is thrown off by slang, jargon, and regional dialects. Conversational AI developers must train the technology to handle such variation before we can integrate the solution in ways that best serve our communities of color.
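One way to quantify that challenge before adopting an off-the-shelf model is to compare its error rate across dialect groups. The sketch below uses the open-source jiwer package to compute word error rate; the transcripts, model outputs, and group labels are hypothetical placeholders, not results from any particular vendor.

```python
# Hedged sketch: compare a speech/conversational model's transcription accuracy
# across dialect groups before integrating it. Data below are made-up examples.
from jiwer import wer   # pip install jiwer

samples = [
    # (dialect group, reference transcript, model output)
    ("dialect group A", "i need to refill my blood pressure medicine",
                        "i need to refill my blood pressure medicine"),
    ("dialect group B", "my sugar been staying high of a morning",
                        "my sugar bin stay in hi of the morning"),
    ("dialect group B", "i been having headaches going on two weeks now",
                        "i bin have in head aches gone on two weeks no"),
]

by_group = {}
for group, ref, hyp in samples:
    by_group.setdefault(group, []).append(wer(ref, hyp))

for group, scores in by_group.items():
    print(f"{group}: mean word error rate = {sum(scores) / len(scores):.2f}")
# A large gap in word error rate across groups is a signal the model needs
# further training or adaptation before it can serve our communities well.
```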
Greater algorithmic stewardship is needed in healthcare to mitigate the statistical and social biases in AI/ML. We must hold algorithm developers and users accountable for ensuring the safety, effectiveness, and fairness of clinical algorithms before they are implemented. Thankfully, the identification and mitigation of racial, socioeconomic, gender, and other disparities are central to a growing subfield of machine learning: algorithmic fairness.
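As one concrete example of what such stewardship can look like in practice, the sketch below uses the open-source fairlearn library to measure two common group-fairness metrics on a deliberately biased set of predictions. The data and the sensitive attribute are synthetic placeholders standing in for a real validation set.

```python
# Measuring group fairness before deployment with fairlearn (pip install fairlearn).
import numpy as np
from fairlearn.metrics import (MetricFrame, selection_rate,
                               demographic_parity_difference,
                               equalized_odds_difference)
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n = 1_000
group = rng.choice(["A", "B"], n)                  # hypothetical sensitive attribute
y_true = rng.integers(0, 2, n)                     # actual outcomes
# A deliberately biased score: group B is under-selected at the same outcome rate.
y_pred = ((y_true == 1) & ((group == "A") | (rng.random(n) < 0.5))).astype(int)

frame = MetricFrame(metrics={"selection_rate": selection_rate, "recall": recall_score},
                    y_true=y_true, y_pred=y_pred, sensitive_features=group)
print(frame.by_group)
print("demographic parity difference:",
      round(demographic_parity_difference(y_true, y_pred, sensitive_features=group), 3))
print("equalized odds difference:",
      round(equalized_odds_difference(y_true, y_pred, sensitive_features=group), 3))
# Gaps near zero are what "fair" looks like on these metrics; large gaps mean the
# algorithm should not be implemented without mitigation.
```

Checks like these, run before an algorithm ever touches a patient, are what accountability for safety, effectiveness, and fairness looks like in practice.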
My niece, taking note of my long pause as I processed her question, asked again, “Uncle Del, is A I M L a hero or a villain?” I finally answered, “AI/ML is a hero, as long as we are intentional about telling it to help everyone.” Seemingly satisfied with my answer, she resumed her attack on the last of the Frosted Flakes. I couldn’t help but think that my discussion with a potential future scientist could help ensure that AI/ML truly lives up to its promise of reducing health disparities and improving health outcomes for all.
____________________________________________________________________________________
The AIM-AHEAD Communities of Change Symposium is a 2-day virtual conference (June 28th and 29th) and startup competition (June 30th), followed by a 4-week workshop series (beginning the first week of July), intended to unite students, professionals, and practitioners to discuss cutting-edge technologies and the impact of utilizing Artificial Intelligence (AI) and Machine Learning (ML) in biomedicine and beyond.
The purpose of the conference is to align on a shared knowledge base of current applications of AI/ML and bioinformatics while raising awareness of, and tackling, the challenges those applications raise. The outcome of the conference and workshop series will be actionable strategies designed to reduce bias and tackle other issues in AI/ML, specifically the effects on underrepresented groups when AI/ML is used to drive decision-making.
Speakers, round-table discussions, and panels will address topics in AI and health equity, including AI 101, public health, public policy, applications of AI/ML in biomedicine and beyond, and the ethics of these applications. Please join us as we spark a discussion with stakeholders from industry, academia, Historically Black Colleges and Universities (HBCUs), Minority Serving Institutions (MSIs), community stakeholders, healthcare providers & systems, the public, and MORE!