2025 | Vol 1(1) | November
When machines diagnose: The new liability puzzle in AI-driven healthcare
Lemuela Mary J, Student of Saveetha University, Chennai, India
Contact at: lemuelamary@gmail.com
Abstract
Artificial intelligence (AI) is revolutionising medical diagnostics through enhanced accuracy and efficiency in disease detection. However, its integration into clinical practice raises complex legal, ethical, and regulatory challenges that require urgent attention. This article critically examines the multifaceted implications of AI in medical diagnosis, analysing liability frameworks, ethical principles, algorithmic bias, data privacy concerns, and regulatory responses across jurisdictions. Through comprehensive evaluation of the European Union Artificial Intelligence Act 2024 (EU AI Act 2024), World Health Organization (WHO) guidelines, and India's National Digital Health Blueprint (NDHB), this study identifies critical gaps in existing regulatory frameworks and proposes recommendations for responsible AI deployment. The findings reveal that current liability doctrines inadequately address autonomous AI decision-making, whilst algorithmic bias and data privacy concerns necessitate strengthened oversight mechanisms. This analysis contributes to the evolving discourse on balancing technological innovation with patient safety, autonomy, and equitable healthcare access.
Keywords: Artificial intelligence, medical diagnosis, algorithmic bias, data privacy, digital health governance
1 Introduction
Artificial intelligence (AI) represents a paradigm shift in medical diagnostics, with machine learning algorithms demonstrating performance matching or surpassing human physicians in specialised domains including radiology, dermatology, pathology, and oncology.[1] The global AI healthcare market, valued at approximately USD 15.1 billion in 2023, is projected to reach USD 188 billion by 2030, reflecting widespread adoption across clinical settings.[2] Technologies such as Google's Med-PaLM, Stanford's CheXNet, and advanced diagnostic algorithms are increasingly integrated into routine medical practice, analysing medical imaging, predicting disease progression, and supporting clinical decision-making.[3]
Despite transformative potential, AI deployment in medical diagnosis generates unprecedented legal, ethical, and regulatory challenges. The opacity of neural network decision-making processes, commonly termed the 'black box' problem, complicates accountability when diagnostic errors occur.[4] Algorithmic bias perpetuates healthcare disparities when training datasets lack demographic diversity, whilst data privacy concerns escalate as sensitive health information becomes integral to AI system functionality.[5] Furthermore, liability frameworks established for human clinicians and traditional medical devices prove inadequate for autonomous AI systems that continuously learn and adapt post-deployment.[6]
This article provides comprehensive analysis of these challenges, examining regulatory responses including the European Union Artificial Intelligence Act 2024 (EU AI Act 2024), World Health Organization (WHO) ethical guidelines, and India's National Digital Health Blueprint (NDHB). The objective is to identify critical gaps in existing frameworks and propose evidence-based recommendations for responsible AI integration in medical diagnostics.
2 Regulatory frameworks for AI in healthcare
2.1 The European Union Artificial Intelligence Act 2024
The EU AI Act 2024 entered into force on 01 August 2024, establishing the world's first comprehensive legal framework specifically regulating AI technologies.[7] The Act adopts a risk-based approach, classifying AI systems into prohibited, high-risk, limited-risk, and minimal-risk categories. Medical AI applications intended for diagnosis, monitoring physiological processes, or treatment decision-making are predominantly classified as high-risk systems, particularly when they qualify as medical devices under the Medical Device Regulation 2017 (MDR 2017) or In Vitro Diagnostic Medical Devices Regulation 2017 (IVDR 2017).[8]
High-risk AI systems must satisfy stringent requirements including risk management systems, data governance standards, technical documentation, transparency obligations, human oversight mechanisms, and accuracy benchmarks.[9] The EU AI Act 2024 mandates that AI systems embedded in medical devices comply within thirty-six months of entry into force, whilst general high-risk systems must comply within twenty-four months.[10] Notably, approximately seventy-five per cent of commercial AI medical devices relate to radiology, with most classified as Class IIa or higher under the MDR 2017, thereby falling within high-risk categorisation.[11]
However, scholars identify significant implementation challenges. The horizontal legislative approach insufficiently addresses sector-specific healthcare needs, necessitating detailed guidelines for practical application.[12] Questions persist regarding liability allocation when AI systems produce harmful recommendations, particularly concerning the interaction between the EU AI Act 2024 and existing MDR 2017 requirements.[13] Furthermore, the Act's emphasis on pre-market approval may inadequately account for AI systems' dynamic nature, wherein algorithms continue learning and evolving post-deployment.[14]
2.2 World Health Organization guidelines
The WHO has published comprehensive ethical guidance addressing AI in healthcare, emphasising six core principles: protecting autonomy, promoting human well-being and safety, ensuring transparency and explainability, fostering responsibility and accountability, ensuring inclusiveness and equity, and promoting responsive and sustainable AI.[15] In January 2024, the WHO released specific guidance on large multi-modal models (LMMs), addressing generative AI technologies capable of accepting diverse data inputs and generating varied outputs.[16]
The WHO guidance recognises multiple applications including diagnosis and clinical care, patient-guided symptom investigation, administrative tasks, medical education, and scientific research.[17] However, it identifies substantial risks including generation of false or biased information, data quality concerns, automation bias, degradation of physician skills, and informed consent challenges.[18] The WHO emphasises governmental responsibility for establishing standards, investing in public infrastructure, enacting protective regulations, and mandating post-release audits by independent third parties.[19]
Critically, the WHO advocates stakeholder engagement throughout AI development, requiring involvement of healthcare providers, patients, researchers, and medical professionals from early design stages.[20] The guidance also addresses the necessity for algorithms trained on diverse datasets to prevent perpetuation of healthcare disparities across demographic groups.[21]
2.3 India's National Digital Health Blueprint and Ayushman Bharat Digital Mission
India's NDHB, released in 2019, establishes foundational principles for digital health infrastructure, subsequently operationalised through the Ayushman Bharat Digital Mission (ABDM) launched in August 2020.[22] The Blueprint envisions a federated architecture enabling seamless health data exchange whilst maintaining data sovereignty and citizen-centric control.[23] Core components include the Health Facility Registry (HFR), Healthcare Professionals Registry (HPR), Unique Health Identifier (UHI), and Unified Health Interface.[24]
The NDHB explicitly addresses AI integration, stipulating that the mission shall maintain checks on AI system reliability.[25] The framework emphasises anonymisation-as-a-service capabilities, recognising that non-personal aggregated health data remains crucial for ecosystem development whilst protecting individual privacy.[26] The Blueprint adopts 'Zero Trust Architecture' principles, with proposed Health-Cloud monitored through Security Operations Centre (SOC) and Privacy Operations Centre (POC) mechanisms.[27]
However, implementation faces significant challenges. The NDHB proceeds without comprehensive data protection legislation, creating regulatory gaps concerning sensitive health information governance.[28] Critics highlight concerns regarding private sector involvement through regulatory sandboxes, questioning whether patient data receives adequate protection when shared with technology developers and insurance companies.[29] Furthermore, the Digital Personal Data Protection Act 2023 (DPDP Act 2023) requires strengthening to address AI-specific concerns including algorithmic transparency, bias mitigation, and accountability mechanisms.[30]
3 Ethical principles in AI-driven medical diagnosis
3.1 Autonomy and informed consent
Patient autonomy constitutes a fundamental ethical principle in medical practice, requiring that individuals make informed decisions regarding their healthcare.[31] AI deployment complicates informed consent processes, as patients frequently remain unaware that algorithms influence diagnostic recommendations.[32] Empirical research demonstrates that when AI systems participate in diagnosis, patients perceive information regarding algorithm performance, physician experience with the technology, and concordance between physician and AI recommendations as highly important.[33]
The General Data Protection Regulation 2016 (GDPR 2016) addresses automated decision-making, requiring notification when decisions are based 'solely' on AI.[34] However, scholars note this provision applies narrowly, potentially excluding AI used as decision-support tools rather than autonomous decision-makers.[35] From ethical perspectives, commentators argue that patients deserve disclosure whenever AI influences medical care, regardless of whether final decisions rest with physicians.[36] Proposed disclosures include AI limitations, systematic bias risks, cybersecurity vulnerabilities, and potential mismatches between algorithmic assumptions and patient circumstances.[37]
3.2 Beneficence and non-maleficence
The principles of beneficence and non-maleficence obligate healthcare providers to maximise benefits whilst minimising harm.[38] In AI contexts, these principles mandate rigorous testing and validation to prevent diagnostic errors resulting from algorithmic flaws, insufficient training data, or inappropriate application beyond validated use cases.[39] The 'black box' nature of deep learning algorithms particularly challenges non-maleficence, as clinicians cannot readily assess whether AI recommendations align with sound medical reasoning.[40]
Automation bias poses additional concerns, wherein healthcare professionals over-rely on algorithmic outputs, potentially diminishing critical thinking skills.[41] Conversely, physicians ignoring accurate AI recommendations due to unfamiliarity or mistrust may compromise patient care.[42] Balancing these competing risks requires comprehensive training, clear protocols for AI-human collaboration, and continuous monitoring of patient outcomes.[43]
3.3 Justice and equity
Justice principles demand equitable healthcare access and fair distribution of benefits and burdens.[44] Algorithmic bias threatens equity when training datasets underrepresent certain populations, resulting in differential accuracy across demographic groups.[45] High-profile cases illustrate these risks; research revealed that healthcare algorithms systematically disadvantaged Black patients by using healthcare costs as proxies for health needs, despite Black patients experiencing greater illness burden at equivalent cost levels.[46]
Addressing bias requires diverse, representative datasets encompassing varied races, ethnicities, genders, ages, and socioeconomic backgrounds.[47] However, achieving diversity proves challenging given historical healthcare inequities and data collection disparities.[48] Furthermore, AI deployment patterns risk exacerbating inequalities if advanced technologies remain accessible primarily to well-resourced healthcare institutions, leaving underserved communities reliant on conventional methods.[49]
4 Legal challenges in AI medical diagnosis
4.1 Liability and medical malpractice
Determining liability when AI systems contribute to diagnostic errors presents complex legal questions without established precedents.[50] Traditional medical malpractice doctrine requires proving duty of care, breach of standard of care, causation, and damages.[51] Physicians bear ultimate responsibility for patient care under current frameworks, maintaining liability even when relying on flawed AI recommendations.[52]
Several liability models emerge in scholarly discourse. Under the negligence framework, physicians face liability for harmful errors falling below standard of care thresholds.[53] If AI serves merely as decision support, radiologists or clinicians making final determinations bear primary liability risk.[54] Alternatively, vicarious liability doctrine, wherein subordinates' faults transfer to principals, could apply if AI algorithms function analogously to healthcare facility employees, attributing negligence to supervising physicians or institutions.[55]
Product liability provides another framework, holding manufacturers accountable for defective products causing harm.[56] The learned intermediary doctrine traditionally positions physicians as intermediaries assessing product risks and benefits, potentially shielding manufacturers from direct patient claims.[57] However, as AI systems achieve greater autonomy, this doctrine's applicability becomes questionable.[58] Scholars propose common enterprise models encompassing manufacturers, physicians, and hospitals, shifting from individualistic responsibility concepts toward distributed accountability frameworks.[59]
The 'black box' problem intensifies liability challenges, as neural network decision-making processes remain inscrutable to manufacturers and clinicians alike.[60] Without algorithmic transparency, physicians struggle to assess whether AI recommendations merit trust based on their clinical knowledge.[61] Inexperienced physicians may blindly accept AI diagnoses, complicating malpractice determination when both healthcare professionals and AI developers share involvement.[62]
Dynamic AI systems that continuously learn post-deployment further complicate liability assessment. Courts must determine whether injuries resulted from initial design defects, inadequate training data, post-deployment learning processes, or physician misuse.[63] The European Union's proposed AI Liability Directive 2022 attempts to address these complexities through disclosure obligations and rebuttable presumptions of causation for harm involving high-risk AI systems, potentially offering precedent for liability reform.[64]
4.2 Standard of care evolution
Medical malpractice law traditionally evaluates physician conduct against 'reasonable physician under similar circumstances' standards.[65] AI integration potentially transforms standard of care expectations bidirectionally. Firstly, as AI becomes ubiquitous, physicians failing to utilise available diagnostic tools may face liability for breaching evolving standards.[66] Secondly, standard of care may require physicians to exercise due diligence in evaluating and validating black-box algorithms before relying upon them.[67]
This creates concerning scenarios wherein physicians face accountability for both using and not using AI systems, particularly when they lack technical expertise to assess algorithmic validity.[68] Legal scholars advocate procedural standards requiring healthcare facilities and professionals to systematically evaluate AI technologies, validate algorithmic outputs, and maintain human oversight mechanisms.[69]
5 Data privacy and security
5.1 Regulatory compliance
AI systems require vast quantities of health data for training and operation, raising significant privacy concerns governed by frameworks including the GDPR 2016 in the European Union, the Health Insurance Portability and Accountability Act 1996 (HIPAA 1996) in the United States, and the DPDP Act 2023 in India.[70] These regulations mandate explicit consent for data collection and processing, impose strict security requirements, and grant individuals rights including data access, rectification, and erasure.[71]
The GDPR 2016's right to erasure proves particularly challenging for AI systems, where training data becomes integrated into algorithmic structure, rendering specific data identification and extraction technically infeasible.[72] This conflicts with healthcare providers' obligations to maintain comprehensive medical records and patients' rights to data portability.[73] Furthermore, the GDPR 2016's requirement for data processing transparency clashes with the 'black box' nature of deep learning systems.[74]
5.2 Anonymisation and re-identification risks
Healthcare organisations employ anonymisation techniques to protect patient privacy whilst enabling data utilisation for AI research.[75] However, sophisticated algorithms increasingly demonstrate capacity to re-identify anonymised datasets through cross-referencing multiple data sources, particularly as AI technologies advance.[76] This undermines traditional anonymisation protections, necessitating enhanced security measures including differential privacy techniques, homomorphic encryption, and secure multi-party computation.[77]
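Differential privacy, one of the enhanced measures mentioned above, offers a formal guarantee: a released statistic changes only marginally whether or not any single patient's record is included, which limits what a re-identification attack can learn. The following minimal sketch illustrates the classic Laplace mechanism for a counting query; the function names, dataset, and epsilon values are our own illustrative assumptions, not drawn from any cited framework or deployed system.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via an inverse-CDF transform."""
    u = random.random() - 0.5
    sign = -1.0 if u < 0 else 1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float = 1.0) -> float:
    """Release a patient count under epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one patient
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices for the epsilon-DP guarantee.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical cohort: release how many patients carry a given flag.
random.seed(0)
cohort = [{"diabetic": i % 3 == 0} for i in range(100)]
noisy_count = dp_count(cohort, lambda r: r["diabetic"], epsilon=1.0)
```

Smaller epsilon values inject more noise and yield stronger privacy; the released figure remains useful in aggregate whilst masking any individual's presence in the dataset.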
India's NDHB addresses these concerns by recommending anonymisation-as-a-service capabilities, enabling data anonymisation proximate to sources.[78] However, implementation requires substantial technical infrastructure and ongoing vigilance as re-identification techniques evolve.[79]
6 Algorithmic bias and fairness
6.1 Sources of bias
Algorithmic bias emerges from multiple sources including biased training data, inappropriate feature selection, flawed algorithm design, and biased evaluation metrics.[80] Historical healthcare data reflects systemic inequities, with certain populations underrepresented or misrepresented due to discriminatory practices, differential healthcare access, and socioeconomic disparities.[81] When AI systems learn from such data, they risk perpetuating and amplifying existing biases.[82]
Feature selection bias occurs when algorithms prioritise variables correlating with protected characteristics such as race or socioeconomic status.[83] The aforementioned healthcare cost algorithm exemplifies this, using expenditures as health need proxies despite systematic spending differences across racial groups.[84] Evaluation bias manifests when AI performance metrics emphasise accuracy for majority populations whilst overlooking subgroup performance disparities.[85]
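The mechanism behind the cost-proxy bias described above can be reproduced with a few lines of synthetic data. The sketch below is purely illustrative; the group labels, cohort sizes, and spending gap are invented assumptions, not figures from the cited study. Two groups are given identical distributions of true health need, but one incurs systematically lower costs at equal need, so a rule that flags the top spenders for extra care selects that group far less often despite equal illness burden.

```python
import random

random.seed(42)

def make_patient(group: str) -> dict:
    """Synthetic patient: identical need distributions across groups,
    but group B incurs lower costs at the same need (access barriers)."""
    burden = random.uniform(0.0, 10.0)             # true health need
    cost_per_unit = 1000 if group == "A" else 700  # hypothetical spending gap
    return {"group": group, "burden": burden, "cost": burden * cost_per_unit}

cohort = [make_patient("A") for _ in range(500)] + \
         [make_patient("B") for _ in range(500)]

# "Algorithm": flag the top 20% by cost for extra care, using cost as a
# proxy for health need.
flagged = sorted(cohort, key=lambda p: p["cost"], reverse=True)[:200]
share_b = sum(p["group"] == "B" for p in flagged) / len(flagged)
# Although both groups have identical need distributions, group B's share
# of flagged patients falls well below 50% because the proxy variable
# encodes the spending gap rather than need itself.
```

Under these assumptions group B receives well under half of the flagged slots, illustrating how a facially neutral proxy reproduces an upstream disparity.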
6.2 Mitigation strategies
Addressing algorithmic bias requires multifaceted approaches throughout AI lifecycle stages. During data collection, researchers must ensure dataset diversity representing varied demographics, geographies, and healthcare settings.[86] Pre-processing techniques including oversampling underrepresented groups, synthetic data generation, and bias-aware data cleaning can improve dataset balance.[87]
Algorithm design strategies include fairness-aware machine learning techniques incorporating equity constraints directly into optimisation objectives.[88] Post-deployment monitoring remains critical, requiring regular audits assessing performance across demographic subgroups and establishing feedback loops enabling continuous model refinement.[89] The WHO emphasises stakeholder engagement from development inception, ensuring diverse perspectives inform algorithm design and validation.[90]
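The post-deployment subgroup audits described above can be automated directly from routinely logged predictions. The sketch below is a minimal illustration; the function names and the 0.10 disparity threshold are our own illustrative choices, not a standard from any cited guideline.

```python
from collections import defaultdict

def subgroup_sensitivity(records):
    """records: iterable of (group, y_true, y_pred) with binary labels.

    Returns the true-positive rate (sensitivity) per demographic group,
    i.e. the share of genuinely positive cases the model detected.
    """
    tp, positives = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 1:
            positives[group] += 1
            if y_pred == 1:
                tp[group] += 1
    return {g: tp[g] / positives[g] for g in positives}

def audit_disparity(records, max_gap: float = 0.10):
    """Flag the model when sensitivity differs across groups by more than max_gap."""
    rates = subgroup_sensitivity(records)
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= max_gap
```

Feeding logged (group, outcome, prediction) triples through such an audit at fixed intervals creates the feedback loop for continuous model refinement that the text describes.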
Transparency and explainability constitute essential bias mitigation components, enabling healthcare professionals to scrutinise AI recommendations and identify potential discriminatory patterns.[91] Explainable AI (XAI) methodologies including attention mechanisms, saliency maps, and counterfactual explanations can illuminate decision-making processes, facilitating bias detection and accountability.[92]
7 Regulatory gaps and recommendations
Despite progress represented by the EU AI Act 2024, WHO guidelines, and national initiatives, significant regulatory gaps persist. The EU AI Act 2024's horizontal approach inadequately addresses healthcare-specific nuances, requiring detailed sectoral guidance clarifying liability allocation, human oversight requirements, and post-market surveillance obligations for medical AI.[93] The thirty-six-month implementation timeline for medical device AI may prove insufficient given the complexity of achieving compliance whilst maintaining innovation momentum.[94]
The WHO guidelines, whilst ethically comprehensive, lack binding enforcement mechanisms, relying primarily on voluntary adoption by governments and developers.[95] Harmonisation challenges arise as nations develop divergent regulatory approaches, potentially fragmenting global AI healthcare markets and impeding cross-border data sharing essential for diverse training datasets.[96]
India's NDHB advances digital health infrastructure but proceeds without robust data protection legislation specifically addressing AI challenges.[97] The DPDP Act 2023 requires strengthening through provisions mandating algorithmic transparency, bias audits, and clear accountability frameworks when AI systems influence health decisions.[98] The regulatory sandbox approach, whilst promoting innovation, raises concerns regarding adequate patient protection and informed consent during experimental phases.[99]
7.1 Recommendations for regulatory enhancement
7.1.1 Liability framework reform
Policymakers should develop AI-specific liability frameworks acknowledging distributed responsibility among developers, healthcare institutions, and clinicians.[100] The common enterprise model merits serious consideration, establishing joint liability mechanisms whilst allowing contractual allocation among parties based on fault contribution.[101] Mandatory liability insurance for high-risk medical AI systems, following Digital Diagnostics' precedent with its IDx-DR diabetic retinopathy system, could ensure victim compensation whilst incentivising quality assurance.[102]
7.1.2 Standard of care guidance
Regulatory bodies must provide clear guidance on AI integration into standard of care, specifying when physicians should utilise available tools and establishing protocols for validating algorithmic recommendations before reliance.[103] Professional medical societies should develop best practice guidelines addressing AI-human collaboration, including circumstances warranting override of algorithmic suggestions and documentation requirements for decision-making processes.[104]
7.1.3 Enhanced data protection
Strengthened data protection regulations must address AI-specific challenges including technical measures preventing data re-identification, strict limitations on secondary data uses, and enhanced transparency requirements for algorithmic data processing.[105] India should expedite implementation of comprehensive frameworks complementing the DPDP Act 2023, explicitly governing AI health applications.[106] International cooperation mechanisms should facilitate secure cross-border data sharing whilst maintaining sovereignty and protection standards.[107]
7.1.4 Bias mitigation mandates
Regulations should mandate pre-deployment bias audits assessing AI performance across demographic subgroups, with approval contingent upon demonstrating equity across populations.[108] Ongoing post-market surveillance requirements must include regular bias assessments, obligating developers to address identified disparities through model updates.[109] Transparency requirements should extend to training data characteristics, enabling independent verification of dataset diversity and representativeness.[110]
7.1.5 Informed consent standards
Regulatory frameworks must establish clear informed consent requirements when AI participates in diagnosis, regardless of whether systems function as decision-support or autonomous tools.[111] Patients deserve disclosure regarding AI involvement, algorithm performance characteristics, known limitations and biases, data privacy implications, and alternatives to AI-assisted care.[112] Consent processes should accommodate varying patient preferences, including options to opt out of AI involvement where clinically appropriate.[113]
7.1.6 International harmonisation
The WHO should leverage its convening authority to facilitate international harmonisation of AI healthcare regulations, developing model laws and standards adaptable to national contexts whilst ensuring baseline protections.[114] Mutual recognition agreements for AI approvals could reduce duplicative regulatory burdens whilst maintaining safety standards.[115] Collaborative research initiatives should address common challenges including bias mitigation methodologies, XAI techniques, and liability frameworks.[116]
8 Conclusion
The use of AI in medical diagnosis holds revolutionary potential to improve the effectiveness, accessibility, and quality of healthcare. However, realising these advantages whilst preserving patient welfare necessitates resolving difficult ethical, legal, and regulatory issues. Current frameworks, including the EU AI Act 2024, WHO guidelines, and India's NDHB, represent important progress but reveal significant gaps demanding urgent attention.
Liability doctrines developed for human clinicians and conventional medical devices prove insufficient for autonomous, continuously learning AI systems, requiring innovative frameworks that recognise distributed responsibility. Algorithmic bias threatens healthcare equity when training datasets lack diversity, necessitating mandatory bias audits and continuous monitoring across demographic subgroups. As re-identification risks increase, data privacy protections must advance beyond conventional anonymisation techniques, and informed consent procedures must account for AI's distinctive characteristics and risks.
Regulatory harmonisation through international collaboration can prevent the fragmentation of divergent national systems whilst establishing baseline protections for patient safety and rights. Policymakers, healthcare professionals, developers, and patients must work together to develop robust governance frameworks that keep pace with the rapid development of AI technologies, enabling continued innovation whilst guaranteeing accountability, transparency, and equity. The path forward demands evidence-based regulation informed by interdisciplinary expertise, continual adaptation as technologies evolve, and an unwavering commitment to keeping patient welfare at the centre of AI integration in healthcare.
References
[1] Topol EJ, 'High-performance medicine: the convergence of human and artificial intelligence' (2019) 25 Nature Medicine 44.
[2] Grand View Research, 'Artificial Intelligence in Healthcare Market Size Report 2024-2030' (2024) https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-healthcare-market accessed 20 November 2025.
[3] Savchuk K, 'AI Will Be as Common in Healthcare as the Stethoscope' (Stanford Business, 15 May 2024) https://www.gsb.stanford.edu/insights/ai-will-be-common-healthcare-stethoscope accessed 14 January 2025.
[4] Finlayson SG and others, 'The Clinician and Dataset Shift in Artificial Intelligence' (2021) 385 New England Journal of Medicine 283.
[5] Obermeyer Z and others, 'Dissecting racial bias in an algorithm used to manage the health of populations' (2019) 366 Science 447.
[6] Price WN II, 'Medical Malpractice and Black-Box Medicine' in I Glenn Cohen and others (eds), Big Data, Health Law, and Bioethics (CUP 2018).
[7] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) OJ L 2024/1689.
[8] Van Kolfschooten H and Van Oirschot J, 'The EU Artificial Intelligence Act (2024): Implications for healthcare' (2024) 149 Health Policy 105152.
[9] Adams LC and others, 'Navigating the European Union Artificial Intelligence Act for Healthcare' (2024) 7 npj Digital Medicine 210.
[10] Ibid.
[11] Busch F, Kather JN and Johner C, 'Navigating the European Union Artificial Intelligence Act for Healthcare' (2024) 7 npj Digital Medicine 210.
[12] The EU Artificial Intelligence Act (n 8).
[13] Vardas EP, Marketou M and Vardas PE, 'Medicine, healthcare and the AI act: gaps, challenges and future implications' (2025) 6 European Heart Journal - Digital Health 833.
[14] Ibid.
[15] World Health Organization, 'Ethics and governance of artificial intelligence for health' (WHO 2021) https://www.who.int/publications/i/item/9789240029200 accessed 18 November 2025.
[16] World Health Organization, 'WHO releases AI ethics and governance guidance for large multi-modal models' (WHO, 18 January 2024) https://www.who.int/news/item/18-01-2024-who-releases-ai-ethics-and-governance-guidance-for-large-multi-modal-models accessed 18 November 2025.
[17] Ibid.
[18] Ibid.
[19] Ibid.
[20] Ibid.
[21] World Health Organization, 'WHO calls for safe and ethical AI for health' (WHO, 16 May 2023) https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health accessed 18 November 2025.
[22] National Health Authority, 'National Digital Health Mission Strategy Overview' (NITI Aayog 2020) https://www.niti.gov.in/sites/default/files/2023-02/ndhm_strategy_overview.pdf accessed 20 November 2025.
[23] Mukherjee S and others, 'Inception of the Indian Digital Health Mission: Connecting the Dots' (2024) Journal of Medical Systems PMC11080683.
[24] Ibid.
[25] National Digital Health Mission Strategy Overview (n 22).
[26] Ibid.
[27] Inception of the Indian Digital Health Mission (n 23).
[28] Chaudhuri C and Kaur R, 'Digital Health and the National Digital Health Mission' (2025) RUPE India Aspects 81-82 https://rupe-india.org/aspects-no-81-82/digital-health-and-the-national-digital-health-mission/ accessed 22 November 2025.
[29] Ibid.
[30] Kumar A and others, 'Ethical and legal considerations in healthcare AI: innovation and policy for safe and fair use' (2025) 11 Royal Society Open Science 241873.
[31] Beauchamp TL and Childress JF, Principles of Biomedical Ethics (8th edn, OUP 2019).
[32] Kim MS and others, 'Patient perspectives on informed consent for medical AI: A web-based experiment' (2024) BMJ Health & Care Informatics PMC11064747.
[33] Ibid.
[34] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data (General Data Protection Regulation) OJ L 119.
[35] Patient perspectives on informed consent (n 32).
[36] Muller R and others, 'Informed consent and medical artificial intelligence: what to tell the patient?' (2021) 30 Georgetown Law Technology Review 77.
[37] Kiener M, 'Artificial Intelligence in Medicine and the Disclosure of Risks' (2021) 34 AI & Society 705.
[38] Gillon R, 'Medical ethics: four principles plus attention to scope' (1994) 309 British Medical Journal 184.
[39] Rigby MJ, 'Ethical framework for artificial intelligence in healthcare research: A path to integrity' (2024) World Journal of Clinical Cases PMC11230076.
[40] Castelvecchi D, 'Can we open the black box of AI?' (2016) 538 Nature 20.
[41] Goddard K and others, 'Automation Bias: A Systematic Review of Frequency, Effect Mediators, and Mitigators' (2012) 19 Journal of the American Medical Informatics Association 121.
[42] Filice RW and others, 'Physician Decisions to Override Radiology AI' (2020) 296 Radiology 1109.
[43] Ethical and legal considerations in healthcare AI (n 30).
[44] Powers M and Faden R, Social Justice: The Moral Foundations of Public Health and Health Policy (OUP 2006).
[45] Gianfrancesco MA and others, 'Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data' (2018) 178 JAMA Internal Medicine 1544.
[46] Dissecting racial bias in an algorithm (n 5).
[47] Potential Biases in Machine Learning Algorithms (n 45).
[48] Rajkomar A and others, 'Ensuring Fairness in Machine Learning to Advance Health Equity' (2018) 169 Annals of Internal Medicine 866.
[49] Ethical and legal considerations in healthcare AI (n 30).
[50] Balthazar P and others, 'Defining medical liability when artificial intelligence is applied on diagnostic algorithms: a systematic review' (2023) 10 Frontiers in Medicine 1305756.
[51] Raskin I, 'Medical Malpractice in the Age of Artificial Intelligence' (2018) 45 Syracuse Law Review 1, 392.
[52] Price WN II, 'Liability for use of artificial intelligence in medicine' in Research Handbook on Health, AI and the Law (NCBI Bookshelf 2024) https://www.ncbi.nlm.nih.gov/books/NBK613216/.
[53] Defining medical liability (n 50).
[54] Mezrich R, 'Is Artificial Intelligence Ready to Fly Solo in Radiology Practice?' (2018) 10 Journal of the American College of Radiology 1565.
[55] Defining medical liability (n 50).
[56] Liability for use of artificial intelligence (n 52).
[57] Froomkin AM, Kerr I and Pineau J, 'When AIs Outperform Doctors: Confronting the Challenges of a Tort-Induced Over-Reliance on Machine Learning' (2019) 61 Arizona Law Review 33.
[58] Ibid.
[59] Chan E, 'The FDA, the FDCA, and Biologics: A Historical Perspective' (2013) 1 FDA Law Journal 561.
[60] Defining medical liability (n 50).
[61] Ibid.
[62] Chung YJ and others, 'The Future of Orthopedic Care: Opportunities, Concerns, and Challenges of Artificial Intelligence' (2023) Journal of Clinical Medicine PMC.
[63] Liability for use of artificial intelligence (n 52).
[64] European Commission, 'Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence' (COM(2022) 496 final, 2022).
[65] Liability for use of artificial intelligence (n 52).
[66] Griffin BJ, 'AI as a Black Box: Discovering the Machine Learning Warrant' (2019) 60 Jurimetrics 91.
[67] Medical Malpractice and Black-Box Medicine (n 6).
[68] When AIs Outperform Doctors (n 57).
[69] Medical Malpractice and Black-Box Medicine (n 6).
[70] Ethical and legal considerations in healthcare AI (n 30).
[71] Ibid.
[72] Tessier C, 'Integrating artificial intelligence into health care through data access: can the GDPR act as a beacon for policymakers?' (2019) 22 Journal of Law and the Biosciences PMC6813940.
[73] Ibid.
[74] Wachter S, Mittelstadt B and Russell C, 'Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI' (2021) 41 Computer Law & Security Review 105567.
[75] Ethical and legal considerations in healthcare AI (n 30).
[76] El Emam K and Arbuckle L, Anonymizing Health Data: Case Studies and Methods to Get You Started (O'Reilly Media 2013).
[77] Kaissis GA and others, 'Secure, privacy-preserving and federated machine learning in medical imaging' (2020) 3 Nature Machine Intelligence 305.
[78] National Digital Health Mission Strategy Overview (n 22).
[79] Digital Health and the National Digital Health Mission (n 28).
[80] Ensuring Fairness in Machine Learning (n 48).
[81] Vyas DA, Eisenstein LG and Jones DS, 'Hidden in Plain Sight — Reconsidering the Use of Race Correction in Clinical Algorithms' (2020) 383 New England Journal of Medicine 874.
[82] Dissecting racial bias in an algorithm (n 5).
[83] Ibid.
[84] Ibid.
[85] Ethical and legal considerations in healthcare AI (n 30).
[86] Ibid.
[87] Potential Biases in Machine Learning Algorithms (n 45).
[88] Mehrabi N and others, 'A Survey on Bias and Fairness in Machine Learning' (2021) 54 ACM Computing Surveys 1.
[89] Ethical and legal considerations in healthcare AI (n 30).
[90] WHO releases AI ethics and governance guidance (n 16).
[91] Miller T, 'Explanation in artificial intelligence: Insights from the social sciences' (2019) 267 Artificial Intelligence 1.
[92] Arrieta AB and others, 'Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI' (2020) 58 Information Fusion 82.
[93] The EU Artificial Intelligence Act (n 8).
[94] Medicine, healthcare and the AI act (n 13).
[95] Ethics and governance of artificial intelligence (n 15).
[96] Price WN, Sachs R and Eisenberg RS, 'New Innovation Models in Medical AI' (2022) 99 Washington University Law Review 1121.
[97] Digital Health and the National Digital Health Mission (n 28).
[98] Ethical and legal considerations in healthcare AI (n 30).
[99] Digital Health and the National Digital Health Mission (n 28).
[100] Liability for use of artificial intelligence (n 52).
[101] The FDA, the FDCA, and Biologics (n 59).
[102] Digital Diagnostics, 'IDx-DR: First FDA-Authorized AI for Diabetic Retinopathy Diagnosis' (2024) https://www.digitaldiagnostics.com/products/idx-dr/ accessed 25 November 2025.
[103] Medical Malpractice and Black-Box Medicine (n 6).
[104] American Medical Association, 'Augmented Intelligence in Medicine' (AMA Code of Medical Ethics Opinion 2.3.2, 2023).
[105] Ethical and legal considerations in healthcare AI (n 30).
[106] Ibid.
[107] New Innovation Models in Medical AI (n 96).
[108] Ensuring Fairness in Machine Learning (n 48).
[109] Ethical and legal considerations in healthcare AI (n 30).
[110] Ibid.
[111] Patient perspectives on informed consent (n 32).
[112] Informed consent and medical artificial intelligence (n 36).
[113] Artificial Intelligence in Medicine and the Disclosure of Risks (n 37).
[114] WHO releases AI ethics and governance guidance (n 16).
[115] New Innovation Models in Medical AI (n 96).
[116] Ibid.
