Recognizing and Mitigating Algorithmic Bias in Nephrology Research and Practice
Video Summary
Dr. Lili Chan (Mount Sinai) discusses how algorithmic bias arises in medical AI and how to detect and mitigate it across the AI lifecycle. Bias is described as unfair differences in model predictions that can benefit or harm specific groups. She opens with examples: a dermatology app trained mostly on white patients performed far worse on images from Black patients in Uganda, and a systematic review found that over half of AI studies had a high risk of bias. She also shows that large language models can misrepresent disease demographics (e.g., overrepresenting Black patients in GPT-generated HIV cases), illustrating "bias in, bias out."

Chan explains that bias can enter during conception (implicit and systemic biases, e.g., NIH disease funding favoring male-predominant conditions), data collection (selection/sampling bias, measurement bias, missingness), model development (representation and confirmation bias; harmful labels from crowdsourced image datasets), and deployment (concept drift and automation bias, in which clinicians over-trust incorrect AI, shown in a mammography study affecting less-experienced radiologists).

Mitigation strategies include diverse teams, representative datasets, subgroup performance audits, counterfactual and adversarial testing, ongoing monitoring and retraining, and bias evaluation tools for language models (word analogies/associations and benchmarks such as StereoSet). The Q&A highlights risks of bias in clinical trial enrollment algorithms and emphasizes multi-institution data and equity-focused evaluation.
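As a concrete illustration of the subgroup performance audits the summary mentions, here is a minimal Python sketch (not from the talk; the fitted classifier, feature frame, labels, and demographic column are illustrative assumptions):

```python
# Minimal subgroup performance audit sketch.
# Assumes: `model` is a fitted scikit-learn-style binary classifier,
# `X` a feature DataFrame, `y` binary labels, `group` a demographic Series.
import pandas as pd
from sklearn.metrics import roc_auc_score

def audit_by_subgroup(model, X: pd.DataFrame, y: pd.Series, group: pd.Series) -> pd.DataFrame:
    """Report AUROC and positive-prediction rate for each demographic subgroup."""
    scores = model.predict_proba(X)[:, 1]  # predicted probability of the positive class
    rows = []
    for g in group.unique():
        mask = (group == g).to_numpy()
        if y[mask].nunique() < 2:  # AUROC is undefined if a subgroup has only one class
            continue
        rows.append({
            "subgroup": g,
            "n": int(mask.sum()),
            "auroc": roc_auc_score(y[mask], scores[mask]),
            "positive_rate": float((scores[mask] >= 0.5).mean()),
        })
    return pd.DataFrame(rows).sort_values("auroc")
```

Large gaps between subgroup AUROCs are exactly the kind of disparity such an audit is meant to surface.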
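The counterfactual testing listed among the mitigation strategies can be sketched the same way: swap a single demographic field and measure how far the model's predictions move (the column name and values below are hypothetical):

```python
# Counterfactual probe sketch: how much do predictions change when one
# protected attribute is flipped? The column name passed in is illustrative.
import numpy as np
import pandas as pd

def counterfactual_shift(model, X: pd.DataFrame, column: str, value_a, value_b) -> float:
    """Mean absolute change in predicted risk when `column` is set to a vs. b."""
    X_a = X.copy(); X_a[column] = value_a
    X_b = X.copy(); X_b[column] = value_b
    p_a = model.predict_proba(X_a)[:, 1]
    p_b = model.predict_proba(X_b)[:, 1]
    return float(np.abs(p_a - p_b).mean())  # large shifts suggest reliance on the attribute
```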
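For the word-association checks grouped with benchmarks like StereoSet, a toy embedding probe looks like the following (the vector source and word lists are assumptions, not material from the talk):

```python
# WEAT-style association gap: do target words sit closer to one attribute
# set than the other in embedding space? `vectors` maps word -> np.ndarray.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association_gap(vectors, targets, attrs_a, attrs_b) -> float:
    """Mean similarity of targets to attribute set A minus attribute set B;
    values far from zero indicate a systematic association."""
    def mean_sim(word, attrs):
        return float(np.mean([cosine(vectors[word], vectors[a]) for a in attrs]))
    return float(np.mean([mean_sim(t, attrs_a) - mean_sim(t, attrs_b) for t in targets]))

# e.g. association_gap(vectors, ["nurse", "surgeon"], ["she", "her"], ["he", "him"])
```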
Asset Subtitle: Lili Chan
Module: ARC
Speaker: Lili Chan
Keywords
algorithmic bias
medical AI
healthcare equity
dataset representativeness
subgroup performance auditing
large language model bias
model deployment monitoring