Artificial intelligence (AI) is revolutionizing the field of medicine by helping physicians make more informed decisions and prioritize the care of high-risk patients. However, researchers are calling for more oversight of AI from regulatory bodies to ensure equity and prevent discrimination in patient care decision support tools.
A recent commentary published in NEJM AI (New England Journal of Medicine AI) highlighted the need for regulation of AI healthcare tools. The Office for Civil Rights (OCR) of the U.S. Department of Health and Human Services (HHS) issued a new rule under the Affordable Care Act (ACA) that prohibits discrimination on the basis of race, color, national origin, age, disability, or sex in patient care decision support tools.
This rule, developed in response to President Joe Biden’s Executive Order on AI development and use, aims to advance health equity and prevent discrimination. It marks an important step forward in ensuring that AI algorithms and non-AI tools used in medicine prioritize equity and non-discrimination.
Despite the increasing number of FDA-approved AI-enabled devices in healthcare, the clinical risk scores produced by decision support tools receive no comparable oversight. Most U.S. physicians use clinical decision support tools regularly to guide patient care, which makes regulating these tools essential for ensuring transparency and non-discrimination.
To address this gap, the MIT Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic) will host a regulatory conference in March 2025. The goal is to promote discussions among regulators, industry experts, and faculty on the regulation of AI in healthcare.
Although clinical risk scores are less complex than AI algorithms, they play a central role in clinical decision-making and should be held to the same standards. Their sheer ubiquity in clinical practice makes regulating them challenging, but no less essential for ensuring transparency and equity.
The incoming administration’s emphasis on deregulation and opposition to certain nondiscrimination policies may pose challenges to the regulation of clinical risk scores. However, researchers emphasize the importance of maintaining oversight to prevent bias and discrimination in healthcare.
In conclusion, while AI has the potential to improve patient care and outcomes, both AI healthcare tools and clinical risk scores must be regulated to ensure equity and prevent discrimination. By requiring transparency and non-discrimination, regulatory bodies can help harness the full potential of AI to improve patient outcomes and advance health equity.