Date: 16th June
Venue: Justitia meeting room 7A-2-04, 2nd floor, Njalsgade 76, DK-2300 Copenhagen S and Zoom (this is a hybrid event).
Time: 13:30 - 15:30 CEST
Organizers: CeBIL – Centre for Advanced Studies in Biomedical Innovation Law, Lifesciencelaw.dk & Nordic PerMed Law
You will receive a Zoom link in the automatic email receipt once registration is complete.
13:30 - 13:45 Arrival, mingle & light refreshments
13:45 - 14:00 Introduction by CeBIL Director Timo Minssen: Medical AI and the work of CeBIL
14:00 - 14:20 Medical AI: Contextual bias, liability, and regulation by Nicholson Price
14:20 - 14:40 Medical AI: Label Bias, Informed Consent, Explainability, and Promise and Peril of ChatGPT by Glenn Cohen
14:40 - 15:00 Medical AI: GDPR and informed consent in Nordic law by Katharina Ó Cathaoir
15:00 - 15:30 Q&A, Discussion & Mingle with light refreshments
Medical AI: Contextual bias, liability, and regulation
AI systems perform differently when used in different environments; how should the law respond? Are ex post liability rules adequate to create incentives for safe development and deployment, or can ex ante regulatory oversight resolve the problem from a different direction? As AI enters use in healthcare, policymakers, innovators, and health system actors alike should consider how to address the problems of contextual bias and differential performance.
Nicholson Price, JD, PhD, is a Professor of Law at the University of Michigan Law School. His work considers how various areas of law shape biomedical innovation, including the use of big data and artificial intelligence, drug manufacturing, and drug development. He is co-PI of the Project on Precision Medicine, Artificial Intelligence, and the Law at the Petrie-Flom Center at Harvard Law School and a Core Partner at the University of Copenhagen’s Center for Advanced Studies in Biomedical Innovation Law (CeBIL).
Medical AI: Label Bias, Informed Consent, Explainability, and Promise and Peril of ChatGPT
It is well known that medical AI sometimes generates outcomes that are worse for certain groups, especially racialized minorities. But while dataset bias is easy to conceptualize, how well equipped are regulators and the law for more subtle forms of bias, such as label bias? When are physicians legally or ethically obligated, as a matter of informed consent, to inform patients that medical AI is involved in their care? Should regulators demand explainability or interpretability in medical AI, or is the black box defensible, and under what circumstances? Finally, how might the integration of Large Language Models (LLMs) such as ChatGPT raise particular challenges?
Glenn Cohen, JD, PhD, is Deputy Dean and the James A. Attwood and Leslie Williams Professor of Law at Harvard Law School, as well as Faculty Director of the Petrie-Flom Center for Health Law Policy, Biotechnology & Bioethics. His work focuses on how the law grapples with new medical technologies, including reproductive technologies, psychedelics, and artificial intelligence. He is co-PI of the Project on Precision Medicine, Artificial Intelligence, and the Law at the Petrie-Flom Center at Harvard Law School and a Core Partner at the University of Copenhagen’s Center for Advanced Studies in Biomedical Innovation Law (CeBIL).
Medical AI: GDPR and informed consent in Nordic law
The explainability requirements of the GDPR have been widely discussed in the academic literature. However, in European countries healthcare is mainly governed at the national level, which may set higher standards than what is mandated by EU law. Consequently, healthcare providers must adhere to domestic healthcare regulations when utilizing ML models to offer medical advice. What level of understanding must physicians have of such models in order to integrate them into patient care in a manner compliant with domestic law?
Katharina Ó Cathaoir, PhD, is an Associate Professor of Law at the Faculty of Law, University of Copenhagen and a Pro Futura Scientia Fellow at the Swedish Collegium for Advanced Study. Katharina’s work explores the rights of those at the margins of health law, including older people, children, and migrants. She is particularly interested in the implications of data-driven treatment for such groups, namely questions of access and acceptability.