When it comes to the daily tasks that a doctor has, there are three clear ‘categories’. If you’re not medically trained, perhaps the easiest way to think of them is as fires. Let’s break them down:
- A fire in the garden: This is a patient who is clearly ill, and who is deteriorating rapidly. Something needs to be done right now. The waiting list may be too long, and they may need to jump ahead.
- A fire in a different building: This is less urgent, but still a problem. It cannot be neglected, otherwise the fire might spread closer, but it is not as pressing as a fire in the garden. The patient still needs to be treated promptly.
- A fire much further away: This is unfortunate and it does need to be taken care of, but it is a more distant priority. It can wait, though perhaps not for long, because it will get worse; if there are more pressing fires to put out, it makes sense to handle them first.
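The three categories above behave like a simple priority queue: garden fires jump the list, distant fires wait their turn. A minimal Python sketch of that ordering, purely as an illustration of the analogy (the category names, priorities and waiting list are invented for this example, not taken from any clinical system):

```python
import heapq

# Illustrative priorities for the three "fires" (lower = more urgent).
URGENT, SOON, ROUTINE = 1, 2, 3

def triage_order(patients):
    """Return cases sorted most-urgent first, preserving arrival
    order within the same priority level."""
    # (priority, arrival index, name) tuples compare in that order,
    # so ties on priority fall back to arrival order.
    heap = [(priority, arrival, name)
            for arrival, (name, priority) in enumerate(patients)]
    heapq.heapify(heap)
    return [name for _, _, name in
            (heapq.heappop(heap) for _ in range(len(heap)))]

waiting = [("distant fire", ROUTINE),
           ("fire in the garden", URGENT),
           ("fire in a nearby building", SOON)]
print(triage_order(waiting))
# → ['fire in the garden', 'fire in a nearby building', 'distant fire']
```

The point of the sketch is simply that urgency, not arrival order, decides who is seen first, which is the behaviour the fire analogy describes.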
Most service improvement tends to happen in those more distant, less pressing areas: getting problems sorted out before they become serious issues. Accordingly, AI is typically integrated into the second and third categories first.
If you take a look at “This is Going to Hurt” by Adam Kay, it describes what life is like as a doctor. Another interesting read is “Do No Harm” by Henry Marsh, which describes the challenges a consultant faces. Both books show how doctors struggle to deliver good patient care, and how much they need systems that help them do so.
Putting Patients First
Patients must be prioritised. That is not always easy, because limits on the workforce make large-scale digital transformation harder to achieve. Not enough doctors are joining the conversation, and we cannot improve patient care without making progress in this area.
AI gets sensationalised in the mainstream media, but healthcare executives and organisations have a good understanding of when and where it can be helpful. It is being moved into operational areas first, ahead of more critical clinical ones, because operational use causes little anxiety among doctors or patients, limits fear and risk, and avoids disappointment over any limitations that crop up. AI advocates want to make sure the technology is not over-sold to the market.
At the moment, clinical responsibility is not something that can be handed over to a machine. It is not technically possible yet, and even if it were, the ethical implications would be another area of concern. Clinicians are highly trained, and as yet there is no algorithm that can make the same decisions in the same way. Clinical AI can improve, learn from humans and reduce the risk to patients, while humans can, and should, retain the ability to overrule it. In essence, at this stage we are looking at augmented intelligence rather than full artificial intelligence.
The Big Question
It’s fair to say that AI is currently in the same boat as a graduate who needs experience to get a job, but cannot get a job because they have no experience. AI could be trialled, and perhaps should be, but with oversight from real human doctors. It is also possible that places like The Dental Practice could trial AI in a medical setting that is not as high-stakes as a hospital. Many doctors are struggling with an overflow of patients, and AI could help by speeding up the diagnostic process.
Could AI help with managing appointments? Could it improve the process of telemedicine? Could it help the workforce by making digital tools easier to adopt? What about using it to process large amounts of data very quickly? There are many areas where AI could do the boring work, freeing up doctors to do what they do best: spend time with patients.