“Should we care that AI is intrinsically incapable of worrying about us?”
Reading: Echoes of Concern—AI and Moral Agency in Medicine, JAMA Cardiology, 2 September 2024. Plus an interview with the author: AI Can’t Worry About Patients, and a Clinical Ethicist Says That Matters, JAMA, 15 November 2024.
Should we care that AI is intrinsically incapable of worrying about us? Would this matter even if one day it could provide more accurate diagnostic and therapeutic recommendations than an undeniably expert but inescapably fallible human physician?
An essay on a range of broadly ethical issues, such as responsibility for mistakes, or balancing trade-offs that rest on values rather than data alone.
Getting to the right answer for patient care decisions not only requires advanced knowledge and clinical expertise, but it also requires the moral work of interrogating a patient’s goals and values to prioritize relevant and sometimes competing considerations.
The level of care we’d like is hard to get:
Executing this duty requires emotional intelligence and moral agency, attributes that AI may be able to mimic but never truly possess.
Can moral choices be “computed”? They are hard enough for people, who can and do reasonably disagree. This feels like the start of a massive rabbit hole of law and ethics.