The Onslaught of Artificial Intelligence in Healthcare
Authors: Kanwaljeet J. S. Anand
- Feb 22, 2026
1. Algorithmic Bias and Health Inequities
Most current AI models are trained on non-representative patient datasets, which often leads to biased outcomes and flawed recommendations for the most marginalized groups. For example, models for detecting skin cancer were trained primarily on fair-skinned Caucasian populations and showed significantly lower diagnostic accuracy in people with darker skin.2 AI algorithms developed in the USA systematically prioritized healthier White patients over sicker Black patients because they used healthcare spending as a proxy for medical need, failing to account for the fact that poor patients have less access to medical care. Further, most patient datasets are obtained from urban American and Chinese populations, and generalizing these models to low-income or rural populations, or to other countries and regions, risks worsening health inequities and healthcare delivery for marginalized groups.
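The spending-as-proxy failure mode described above can be illustrated with a minimal sketch. This is a hypothetical toy simulation (all numbers and group labels are invented for illustration, not drawn from the studies cited): two groups have identically distributed medical need, but one group has lower access to care and therefore lower observed spending, so a model that ranks patients by the spending proxy systematically deprioritizes that group.

```python
import random

random.seed(0)

# Toy assumption: observed spending = true need filtered through access to care.
def observed_spending(need, access):
    return need * access

patients = []
for _ in range(1000):
    # Group A: full access; Group B: same need distribution, half the access.
    need_a = random.uniform(0, 10)
    patients.append(("A", need_a, observed_spending(need_a, access=1.0)))
    need_b = random.uniform(0, 10)
    patients.append(("B", need_b, observed_spending(need_b, access=0.5)))

# A model trained with spending as its target effectively ranks by spending.
# Select the top 25% "highest-need" patients according to that proxy.
ranked = sorted(patients, key=lambda p: p[2], reverse=True)
top = ranked[: len(ranked) // 4]

share_b = sum(1 for group, _, _ in top if group == "B") / len(top)
print(f"Share of group B in the selected high-priority cohort: {share_b:.2%}")
# With equal true need, an unbiased selection would include ~50% from each
# group; the spending proxy selects far fewer group-B patients.
```

The point of the sketch is that no variable labeled "race" or "income" appears anywhere in the model; the disparity enters entirely through the choice of the outcome variable.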
2. The “Black Box” and Explainability
Advanced AI systems, like the deep learning models developed by major corporate entities, are “black boxes”: their decision-making processes appear too complex for human interpretation.3 Use of these AI algorithms is likely to undermine clinical trust among both doctors and patients. Many medical practitioners will not follow AI recommendations when the underlying logic is opaque, because this lack of explainability contradicts the principles of evidence-based medicine.4 On the other side of the physician-patient equation, acting on recommendations from an AI “black box” undermines the patient’s legal right to an explanation for medical decisions that significantly affect their health, when those decisions come from automated systems whose logic is unclear.4
3. Liability and “The Captain of the Ship”
Perhaps the most grievous concern is who will be held responsible when an AI system makes a harmful error. Current malpractice and tort law typically holds the treating physician, as the “captain of the ship”, responsible and legally liable for errors, even when the physician was following flawed AI recommendations. AI developers routinely issue disclaimers shifting the liability for following their automated decisions to the treating physicians, and are unwilling to take responsibility for flaws that may be inherent in their algorithmic logic.5
Legal theories like FDA preemption can further shield the developers of high-risk (Class III) devices from state-law tort claims if they have received formal premarket approval from the FDA, leaving patients and doctors with limited avenues for legal recourse. Under express preemption, the federal statute (here, the US Federal Food, Drug, and Cosmetic Act, FDCA) explicitly includes a preemption provision displacing all or some state law. Implied field preemption applies where the statutory language is unclear, but the displacement of state law can be inferred from Congress’s intent to occupy the area exclusively.6 Finally, conflict preemption applies where state and federal requirements contradict each other, so that a party cannot comply with both.6
Oftentimes, sick patients do not have the energy to read, or the ability to understand, the detailed disclaimers that they are asked to sign, and even busy practicing physicians are unlikely to read through or comprehend the dense legal language that accompanies most disclaimers. At best, regulatory oversight of legal consent for the use of AI algorithms involves a patchwork of existing data protection laws, such as the GDPR (General Data Protection Regulation) in the European Union, which gives patients more control over their digital data, and the DPDP Act in India (Digital Personal Data Protection Act, 2023), a comprehensive law for protecting the digital personal data of Indian citizens. Emerging AI-specific regulations (like the EU AI Act) emphasize the key principles of informed and explicit consent, including transparency, human oversight, liability, and accountability, especially for high-risk healthcare applications of AI. This law remains to be legally tested in European courts.
4. Data Privacy and “Bio-surveillance”
Along the same theme, significant ethical concerns exist about non-consensual data usage and the re-identification of patients from putatively “anonymized” healthcare datasets. Corporate partnerships in which patient data are shared with tech companies without explicit patient notification or consent (e.g., Google’s Project Nightingale) are becoming more prevalent.7 Using data scraped from social media, financial, educational, and commercial platforms, AI algorithms can easily re-identify some patients even from anonymized datasets, thereby exposing their sensitive genetic, physical, or mental health information and creating the potential for discriminatory bias in diverse areas like bank loans, employment, or health insurance.8
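The re-identification mechanism described above is often a simple linkage attack: the “anonymized” records still contain quasi-identifiers (such as ZIP code, birth year, and sex) that can be joined against an auxiliary dataset in which names are present. A minimal sketch, using entirely invented records and names for illustration:

```python
# "Anonymized" medical records: names removed, but quasi-identifiers remain.
# All records and names below are fabricated for illustration only.
anonymized_records = [
    {"zip": "94301", "birth_year": 1958, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "94301", "birth_year": 1991, "sex": "M", "diagnosis": "asthma"},
    {"zip": "10027", "birth_year": 1975, "sex": "F", "diagnosis": "depression"},
]

# Auxiliary data of the kind scraped from public sources (voter rolls,
# social media profiles, commercial data brokers, ...).
public_records = [
    {"name": "Jane Roe", "zip": "94301", "birth_year": 1958, "sex": "F"},
    {"name": "John Doe", "zip": "10027", "birth_year": 1975, "sex": "M"},
]

def reidentify(anon, aux):
    """Join the two datasets on the shared quasi-identifiers."""
    matches = []
    for a in anon:
        for p in aux:
            if all(a[k] == p[k] for k in ("zip", "birth_year", "sex")):
                matches.append({"name": p["name"], "diagnosis": a["diagnosis"]})
    return matches

for match in reidentify(anonymized_records, public_records):
    print(match)  # links a named individual to a sensitive diagnosis
```

When a combination of quasi-identifiers is unique within a dataset, a single join like this suffices; this is why de-identification standards focus on generalizing or suppressing such fields rather than merely deleting names.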
5. Erosion of the Patient-Provider Relationship
There are growing public and professional concerns that AI will dehumanize the delivery of medical care by eroding the quality of patient interactions. In the US, a majority of the population (57%) believes that AI will significantly worsen the patient-provider relationship, because AI algorithms and automation cannot provide the empathy or compassion needed for healing. Medical and education experts warn of “automation bias,” where busy clinicians might blindly follow AI prompts rather than using their own critical judgment, potentially leading to a gradual de-skilling of the medical workforce.
6. Lack of Formal Scientific Testing
All advances in medical care must undergo a prolonged process of formal scientific validation, including case-control studies, multiple randomized controlled trials (Phase I, II, and III RCTs), meta-analyses, and medical effectiveness testing before being incorporated into routine medical practice. Similar scientific validation rarely, if ever, occurs with the incorporation of AI-based decision-making algorithms into medical care.9 Meanwhile, AI tools are being used to automate and streamline clinical trial operations throughout a randomized trial’s life cycle, including clinical trial design, identifying eligible patients, obtaining informed consent, selecting physiological and clinical outcomes, interpreting imaging, and analyzing or disseminating the results.10,11
In summary, although AI applications do have the potential to promote healthcare equity in several ways, key challenges remain in their deployment across different settings of care. First, AI can optimize resource utilization, improve public health surveillance, optimize workflow in hospitals and clinics, and improve overall efficiency, potentially benefiting underserved populations. Second, AI tools can identify the social determinants of health (SDoH) and target limited resources to vulnerable populations, promoting greater fairness and health equity. Third, several AI solutions claim to provide personalized care by tailoring treatments to achieve better individual outcomes.
However, in an unjust, commercially driven healthcare system, AI solutions may create tiers of care due to algorithmic bias and thus perpetuate or amplify existing racial and socioeconomic disparities in diagnosis, treatment, and resource allocation. Fairness and justice demand due consideration of patient need, efficiency, and discrimination in healthcare delivery. Wealthier nations, institutions, and populations are likely to benefit more from cutting-edge AI, slowly widening the gap in care quality between the haves and have-nots, and eventually undermining equitable access, procedural justice, and accountability.
Regulatory Responses
The regulatory responses to these rising concerns have been muted and fragmented. In August 2024, the European Union enacted the world’s first comprehensive AI law, which categorizes most healthcare AI as “high-risk”. This law mandates that “high-risk” medical AI, including most diagnostic software and surgical robotics, must meet strict requirements for transparency, human oversight, and bias mitigation. The penalties are designed to deter: organizations that fail to comply face fines of up to 7% of global annual turnover. To address the “liability gap” noted above, the EU has explicitly ruled that software is a “product”, which makes it easier for patients to claim compensation for harm caused by defective AI, often shifting the burden of proof from the patient to the company if the AI’s internal logic is too opaque (“black box”). To reduce demographic bias and health inequity, new EU rules from 2025 create a secure framework for researchers to access large, diverse datasets across the EU, specifically aimed at training AI models on more representative populations.
Instead of enacting a single overarching AI law, the USA relies on the FDA’s existing authority to regulate medical software. In 2024, the FDA published a new framework, the Predetermined Change Control Plan (PCCP), allowing developers to pre-define how an AI model will “learn” and update after it is released.12 This allows continuous improvement while ensuring that the FDA has already approved the method of change, to prevent “algorithmic drift”. Prior to that, President Biden had directed the Department of Health and Human Services (DHHS) to establish an AI Safety Program by creating a central repository to track clinical errors, discrimination, and bias incidents specifically caused by AI in healthcare.13,14 Recent rules also require vendors of electronic health records (EHRs) to disclose greater detail about how their predictive algorithms were validated and will be monitored, giving clinicians better tools to detect potential errors in algorithmic logic.8
The collection, analysis, processing, and drawing of inferences from identifiable digital data for AI healthcare applications in India are covered by the Digital Personal Data Protection Act of 2023 (DPDPA). Healthcare AI tools like diagnostic models, clinical decision-support systems, chatbots, or telemedicine platforms must comply with the DPDPA when handling patient data. Patients must provide explicit and informed consent before their personal data can be processed by a healthcare AI application.15 All healthcare providers, including hospitals, clinics, digital platforms, and professionals, are considered data fiduciaries under this Act. Technology vendors that process patient data on behalf of a healthcare facility are treated as data processors, and their compliance with the DPDPA is contractually obligated. However, the DPDPA does not categorize health data as specially “sensitive”, which leads to weaker protections compared to EU standards. Further, despite carrying consent and transparency obligations, it does not require explainability standards for AI, avoiding statutory mandates for the interpretation of AI decisions by patients or clinicians.15
For other countries that lack such rules, the World Health Organization (WHO) has issued 2024-2025 programmatic guidance specifically directed at Large Multimodal Models in healthcare. The WHO recommends mandatory post-release audits by third parties to verify that AI systems do not develop biased outcomes after acquiring real-world patient data. The WHO also favors “human-in-the-loop” models, ensuring that critical medical decisions are not purely automated without a clear path for patients to challenge their rationale with the help of a human physician.9 These broad general principles may not be legally binding, but they will guide developers of AI algorithms who have an incentive to promote their products in low-resource populations.
With the increasing penetration of AI applications into medical care, hospital and clinic operations, patient experience, health resource allocation, and other aspects of healthcare, several concerns must be addressed to enable the widespread acceptance, safety, and equity of AI in healthcare. Patients are already experiencing the insidious direct and indirect effects of AI on their health and healthcare access. Experts in healthcare and the developers of AI/ML technologies must collaborate to create human-in-the-loop applications and to formally test and validate all AI decision-making algorithms before they are incorporated into the clinical realm.
Supplementary Materials: None.
Author Contributions: Dr. Anand created the first draft of this Editorial; Dr. Setty revised and edited it; both authors are in agreement with the final version.
Funding: None.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Acknowledgments: None.
Conflicts of Interest: None.
References
- Pai A. GIMS starts state’s first govt AI clinic in Noida to treat critical illnesses. Times of India. January 04, 2026. https://timesofindia.indiatimes.com/city/noida/gims-starts-states-first-govt-ai-clinic-in-noida-to-treat-critical-illnesses/articleshow/126326912.cms
- Wei ML, Tada M, So A, Torres R. Artificial intelligence and skin cancer. Front Med (Lausanne). 2024;11:1331895. doi:10.3389/fmed.2024.1331895
- Rueda J, Rodríguez JD, Jounou IP, Hortal-Carmona J, Ausín T, Rodríguez-Arias D. “Just” accuracy? Procedural fairness demands explainability in AI-based medical resource allocations. AI Soc. Dec 21 2022:1-12. doi:10.1007/s00146-022-01614-9
- Anibal J, Bedrick S, Nguyen H, et al. DeepSeek for healthcare: do no harm? AI Ethics. 2026;6(1)
- Wong PH, Rieder G. After Harm: A Plea for Moral Repair after Algorithms Have Failed. Sci Eng Ethics. Sep 18 2025;31(5):26. doi:10.1007/s11948-025-00555-y
- Hutt PB, Merrill RA, Grossman LA. Food and Drug Law: Cases and Materials (4th Edition). Foundation Press (subsidiary of Wolters Kluwer, The Netherlands); 2014.
- Aidun E. Implications of Utilizing AI in Healthcare Settings. University of Cincinnati Law Review. 2025;93(1). https://uclawreview.org/2025/01/09/implications-of-utilizing-ai-in-healthcare-settings/
- Klonoff DC, Scheideman AF, Shao MM, et al. The need for clear medical data ownership laws. J Transl Med. Jan 3 2026;24(1):5. doi:10.1186/s12967-025-07486-z
- World Health Organization. Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models. January 18, 2024:1-98. https://www.who.int/publications/i/item/9789240084759
- Cunningham JW, Abraham WT, Bhatt AS, et al. Artificial Intelligence in Cardiovascular Clinical Trials. J Am Coll Cardiol. Nov 12 2024;84(20):2051-2062. doi:10.1016/j.jacc.2024.08.069
- Miller MI, Shih LC, Kolachalama VB. Machine Learning in Clinical Trials: A Primer with Applications to Neurology. Neurotherapeutics. Jul 2023;20(4):1066-1080. doi:10.1007/s13311-023-01384-2
- Carvalho E, Mascarenhas M, Pinheiro F, et al. Predetermined Change Control Plans: Guiding Principles for Advancing Safe, Effective, and High-Quality AI-ML Technologies. JMIR AI. Oct 31 2025;4:e76854. doi:10.2196/76854
- Handley JL, Krevat SA, Fong A, Ratwani RM. Artificial intelligence related safety issues associated with FDA medical device reports. NPJ Digit Med. Dec 3 2024;7(1):351. doi:10.1038/s41746-024-01357-5
- Hose BZ, Handley JL, Biro JM, Krevat SA, Ratwani RM. Enhancing Patient Safety in Artificial Intelligence-Enabled Health Care: The Role of Human Factors. J Patient Saf. Dec 24 2025;doi:10.1097/PTS.0000000000001444
- Kulkarni A, Ramanathan C. An Agentic Software Framework for Data Governance under DPDP. ArXiv. 2026;arXiv:2601.01101v1. doi:10.48550/arXiv.2601.01101
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of SSSUHE and/or the editor(s). SSSUHE and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions, or products referred to in the content.