With the rapidly expanding frontiers of digital health, as with any move into new territory, the space created can seem lawless.1 So we need regulation to clarify liability, and new codes of ethics to maintain standards of responsibility in the era of Artificial Intelligence and big data, ECNP 2019 was told. For those concerned with brain health, the time has come for proactive thinking.
The symposium heard two compelling examples of the impact of new technology on social and legal relationships.
One, drawn from football, was limited in its application. The other, relevant to driverless cars, potentially affects us all.
But both have implications for medical practice, in that they pose a fundamental question about our responsibility for decision-making.
In the football example, a referee challenged by players points to the technology that has absolved him of responsibility for the decision: “the computer says NO”. No discussion or negotiation. End of story.
In the second example, the driver of a “driverless” car that causes an accident is watching a movie at the moment when, in theory, he should have had his hands on the wheel, ready to override any faulty decision made by the on-board computer.
Culpability by design
In this example, the person technically assigned responsibility is judged to be at fault. But if people are put in a position where normal psychological processes mean they cannot sustain the attention and concentration required to remain in charge, isn’t the technology itself guilty of “entrapment”, having created conditions in which accidents are inevitable?
If expert systems undermine meaningful control, what happens to clinical and ethical responsibility? Is this not culpability by design?
Artificial Intelligence may alter our understanding of professional expertise and the doctor-patient relationship
These questions apply equally to complex clinical situations in which expert systems arrive at a diagnosis or a decision on how best to manage a patient. Where do ethical legitimacy and legal liability lie? What happens to the age-old relationship between physician and patient that is based on trust, and how do physicians retain professional pride in the worth of what they do when a machine is helping to direct care? Where is the line drawn?
Medical ethics is more than prohibitions
At the moment, there are more questions than answers. But the ECNP session pointed to the need for proactive thinking: the time to address these questions is before, not after, new technology takes hold.
Deliberation should be multidisciplinary and involve ethicists. By ethics in this context we do not mean telling people what they should and should not do, or how to be good, but identifying their moral concerns, stimulating discussion about them, and integrating the findings into the development of the technology. This is better than trying to bolt solutions on after the event.
Humans have to retain a meaningful degree of control
We have digital biomarkers, software-based therapies (increasingly common in the mental health field), and advanced algorithms to support diagnosis and patient management. Each raises issues of accuracy, efficacy, professional responsibility and legal liability. So it is right that the ECNP has an expanding interest in this field, and that this was reflected in the thought-provoking symposium.
Avoiding an algocracy
We should beware of coining new words without need, but it seems that “algocracy” has a place.2 After democracy and bureaucracy, might we be entering a world ruled not by the people, nor by officials, but by algorithms? Such a world has been foreseen.
In the public sphere, such a development would pose a crisis of legitimacy in decision-making. In the medical sphere too, the unchallenged dominance of Artificial Intelligence and machine learning would cause problems. Humans must retain a meaningful degree of control, the ECNP meeting heard.
Our correspondent’s highlights from the symposium are meant as a fair representation of the scientific content presented. The views and opinions expressed on this page do not necessarily reflect those of Lundbeck.
1. Nebeker C et al. BMC Med 2019;17(1):137.
2. Danaher J. Philos Technol 2016;29(3):245-268.