Black Box Medicine: It’s A Thing.

We have all heard about the self-driving cars that are coming to a highway near you in the not-too-distant future.  They are powered by artificial intelligence (“AI”).  We had a fatal accident here in Arizona last year when a self-driving car under test hit a woman crossing the road late at night.  Needless to say, litigation has ensued.

The defendant is the company that was testing the car and that employed the safety driver who was supposed to be monitoring it.  But, as many have asked, who gets sued if a private individual is in a self-driving car and the car causes an accident?  Is it the owner of the car?  Is it the person sitting in the car at the time of the accident?  Is it the manufacturer of the car?  Is it the developer of the software that runs the car and makes it self-driving?  People have been asking and thinking about these questions for some time.  Very few, however, have asked what happens when an artificial intelligence program makes a medical mistake that injures or kills a patient.


“Black Box Medicine” is the name given to artificial intelligence programs that more and more health care providers are using to assist them in medical decision making.  Typically, the programs employ algorithms developed from large data sets of patient information.  The doctors using the algorithms answer certain questions about their particular patient and his or her symptoms, and the algorithm suggests a treatment.  It is called “Black Box Medicine” because no one but the creator of the algorithm knows what is going on inside it and how it was programmed to make decisions.  More and more medical decisions are being made either by algorithms or with the help of one.
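
To make the idea concrete, here is a minimal sketch, in Python, of how such a decision-support tool might work.  Everything in it is hypothetical: the feature names, the tiny training set, and the treatment labels are invented for illustration, and real systems are trained on far larger clinical data sets.  But the shape is the same: historical patient records go in, a model is fitted, and a suggested treatment comes out.

    # A minimal, hypothetical sketch of a "black box" decision-support
    # tool.  The features, records, and treatment labels below are
    # invented for illustration; real systems train on large clinical
    # data sets, but the overall shape is the same.
    from sklearn.ensemble import RandomForestClassifier

    # Historical patient records: [age, systolic_bp, bleeding_history]
    training_records = [
        [54, 130, 0],
        [67, 150, 1],
        [45, 120, 0],
        [72, 160, 1],
    ]
    # The treatment each historical patient actually received.
    past_treatments = [
        "chemo_with_blood_thinner",
        "chemo_only",
        "chemo_with_blood_thinner",
        "chemo_only",
    ]

    model = RandomForestClassifier(random_state=0)
    model.fit(training_records, past_treatments)

    # The physician answers questions about the current patient...
    current_patient = [[60, 140, 1]]  # age 60, BP 140, bleeding history

    # ...and the model suggests a treatment.  Why it chose this answer
    # is buried in hundreds of internal decision trees; that opacity is
    # the "black box."
    print(model.predict(current_patient)[0])

The physician sees only the final suggestion; the internal rules that produced it are visible, if at all, only to the developer.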

Although the algorithms are powered by AI, AI is only as intelligent as the people who program it and only as accurate as the data upon which it relies.  People develop the algorithms.  People select and record the data used to train the algorithms.  People are fallible.  People make mistakes.  People have biases.  Algorithms are going to make mistakes and have biases too.  Some already have.

In preparing this post, I read a report of an AI-generated chemotherapy recommendation that included a blood thinner.  Unfortunately, the patient had a history of extensive bleeding.  The bad recommendation was caught in time, but had it been followed, the patient could have hemorrhaged.

One way malpractice can occur with these algorithms is through the data entry by the patient’s physician.  If the physician enters incorrect data, the answer is likely to be incorrect as well.  Even when the entered data is correct, the algorithm itself may have unknown defects that make the answer wrong, as the sketch below illustrates.
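
Continuing the hypothetical sketch above, a single mis-keyed entry can be enough to flip the suggestion.  The rule below is invented purely to illustrate the point, and it mirrors the blood-thinner example described earlier.

    # Hypothetical illustration of "garbage in, garbage out": one
    # invented rule, fed correct and then incorrect data about the
    # same patient.
    def suggest_treatment(has_bleeding_history: bool) -> str:
        # Invented rule: avoid blood thinners when the patient has a
        # history of bleeding.
        if has_bleeding_history:
            return "chemo_only"
        return "chemo_with_blood_thinner"

    # Correct entry: the patient has a bleeding history.
    print(suggest_treatment(True))   # chemo_only

    # Mis-keyed entry: one wrong answer by the physician, and the
    # suggestion now includes a blood thinner, the very error
    # described above.
    print(suggest_treatment(False))  # chemo_with_blood_thinner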

So who is responsible if a patient is injured by a recommendation made by an algorithm?  Is it the physician or hospital that selected the algorithm?  Is it the manufacturer who programmed it?  What responsibility does the patient’s physician have to double-check the algorithm’s recommendation?  Is it malpractice to disregard that recommendation?  What happens if the physician believes the algorithm is wrong, chooses a different treatment, and the patient is injured by the treatment chosen?

These are all questions for which the law provides no easy answers.  As always, choose your physician and your hospital as wisely as you can and hope for the best after that.
