
The problems of using diagnostic AI in medicine and healthcare

August 18, 2017


The medical profession has been working closely with machines for some time now, so what can previous introductions of machines to this unique industry tell us about how the introduction of AI might unfold?

Medicine and cars are often cited as the two areas where the average citizen will first feel the effects of artificial intelligence. Cars, we all know, will become driverless – provided Elon Musk’s car testing goes more smoothly than his rocket testing. In the medical profession, on the other hand, we’re looking at a far broader range of applications.

The testing that seems furthest advanced – and so whose products will be with us soonest – is in the field of diagnostics. There are said to be algorithms that can diagnose skin cancer as accurately as a board-certified dermatologist, image-scanning machines now considered broadly on a par with a professional radiologist, and apps that crowdsource data to feed an AI and “improve the accuracy of individual physicians”.

This all sounds mighty exciting, but there’s a “but”. First of all, one should acknowledge that anyone airing criticism or concern about the introduction of new technology is likely, and perhaps rightly, to invite accusations of being a technophobe, naysayer, Luddite or clickbait-hunting online content-destroyer. This phenomenon is amplified when it comes to AI, to the point where even the technology’s biggest proponents are not exempt.

But let’s proceed from this point to take an argument on its merit, rather than move ballast to a side of the ship on which it potentially does not belong.

There are two considerations that make medicine and healthcare different from other industries, and that should be central to any discussion of introducing medical AI.

1. The healthcare community isn’t so good at preventing mistakes

Many have commented on how AI is a black box, but more broadly the medical industry is one too. In his book Black Box Thinking, Matthew Syed makes some uncomfortable comparisons for the healthcare industry. Everyone trusts doctors and hospitals, yet preventable medical error kills more people each year than traffic accidents. Meanwhile, people are naturally a little fretful about catching a plane, but the aviation industry has a crazy-impressive safety record.

Annual deaths in the aviation industry (worldwide): 325

Annual deaths from traffic accidents (US): 40,000

Annual preventable deaths in the healthcare industry (US): 250,000

Statistics on this stuff aren’t easy to obtain, so not all studies are watertight – except those of the aviation industry, of course. But these figures aren’t even nearly close. A paper in the BMJ estimates medical error to be the third leading cause of death in the US. What’s more, the aviation figure is global, while the traffic and healthcare figures cover the US alone.

Why? The aviation industry is in a situation where it cannot fail; to fail is to go out of business. So planes have black boxes which record everything, and each time a mistake happens every detail is excruciatingly analysed, with the findings publicised across the aviation industry in the form of new measures designed to prevent that mistake from ever happening again. There’s nowhere to hide.

By comparison, as soon as a mistake is made in the medical profession, the first thing that happens is that the doctor has to go and see the patient or the patient’s family. A perfectly natural fear of judgement, legal cases, or merely of provoking sadness leads to obfuscation and a removal of agency in the language used. Morbidity and mortality conferences are often run to get to the bottom of mistakes, but they are confidential, not mandated, and subject to limited post hoc oversight – and besides all that, the tone for dealing with the issue has already been set.

Enrico Coiera, Director of the Centre for Health Informatics at Macquarie University

This manifests directly in the equipment used, as Professor Enrico Coiera, Director of the Centre for Health Informatics at Macquarie University, points out: “In aviation, if you develop a new model of aeroplane you have to meet stringent testing and accreditation guidelines before it can be used for the public. We don’t say, ‘oh, aeroplanes are a great idea overall, so new aeroplanes are OK to go’.”

So, given that all this takes place, what can we expect from the introduction of AI? Hopefully point 2 will be instructive here…

2. The healthcare community has form when introducing new technology

While everyone is banging on about machines which can do jobs better than humans, it would only be polite to point out that these machines aren’t taking humans’ jobs, just changing them. They can improve the way humans work, making them quicker, more accurate and all that good stuff.

This is probably true, and to a large extent I agree with it. Further, it would be both churlish and boorish to suggest that new technology hasn’t made healthcare quicker and more accurate, or generally saved lives. The point here is not to reject the introduction of AI; it’s to try to bring those preventable deaths down and avoid the mistakes we know will happen if we don’t try. Ultimately, human and machine will have to work together.

But human and machine working together is where I foresee the first problem. Some of the first really time-saving machines brought into medicine were used in blood sampling. It must have been fantastic not to have to comb through options one by one, performing tests, but instead to pass a blood sample through a machine or two that could spit out a reading for the most common problems.

However, parasites posed a problem. Machines found them difficult to spot because they are living organisms, and many are relatively rare in developed countries. But they do kill, and the introduction of clever machines means doctors may not think to actually look down the microscope at a sample they have just taken.

This was covered in a 2003 paper called “Current strategies to avoid misdiagnosis of malaria,” which said, “In the laboratory, unfamiliarity with microscopic diagnosis may be the main reason, considering the large number of laboratory staff who provide on-call services, often without expert help at hand, as well as the difficulty in detecting cases with low-level parasitemia. Staff should therefore be provided with continuing microscopic training to maintain proficiency. The complementary use of immunochromatographic rapid detection tests (RDTs) may be useful, especially during on-call hours, although, in order to ensure correct interpretation, their inherent limitations have to be well known.”

Which strikes me as polite, professional doctor-talk for “I know we’re all busy, but if you check you’ll know”. Immunochromatographic RDTs use the same kind of technique as pregnancy tests, so they might help a bit, but even with rudimentary technology like that, the paper’s author is stressing awareness of its limitations. Ultimately the problem here is that humans can become lazy around machines, and the gaps people point to – the necessary space between machine functions in which a human is supposed to operate – are quickly left unattended.

This can be further demonstrated if we return to Coiera’s thinking on new equipment: “There is currently no safety body that can oversee or accredit any of these medical AI technologies, and we generally don’t teach people to be safe users of the technology. We keep on unleashing new genies and we need to keep ahead of it.”

Troubling stuff, eh? You can imagine why. Some new company building machinery for the healthcare industry coughs up something cool, like a machine which writes prescriptions or analyses blood for diseases. The hospital brings it in and gets it working; it cuts costs, so profits go up, and it frees up doctors’ time in a busy hospital. Cool.

Now, although the medical system isn’t particularly good at updating itself, it does have a system of checks and balances. But that system has been built to deal with human shortcomings. As we’ve seen, it doesn’t even do that well, and it’s not built for machine failures at all – so imagine how badly it will cope with those. And, as we’ve also said, human and machine will need to work together.

Coiera points to an example where a rural GP’s prescription error led to a patient death, to which the GP’s defence was “the computer didn’t tell me it was wrong.”

This, in a nutshell, is the problem: the healthcare profession’s attitude towards mistakes, humans overlooking the gaps left by machines, little or no training on machine procedures, no accreditation body for new machines – and, generally, the situation that might be coming down the tracks, right towards us.
