How artificial intelligence is transforming healthcare
Artificial intelligence (AI) is already beginning to transform many aspects of healthcare, from offering advice to interpreting scans. This article discusses the progress that has been made so far and the AI-based applications we are likely to see in the near future.
It’s hard to remember a time when we weren’t reliant on computers. The capabilities of computer systems that mimic human cognition and learn from experience are expanding rapidly, and the penetration of artificial intelligence (AI) in all its forms into the healthcare market is set to grow dramatically in the next few years.
Robot-assisted surgery and virtual nursing assistants may make headlines, but some of the most exciting prospects are in diagnosis – triaging patients and interpreting scans. At the heart of these advances is the burgeoning availability of data, and the technology to process and interpret it faster than a human brain ever could.
The first port of call for many patients with worrisome symptoms nowadays is often Google. While this can sometimes reassure them that nothing serious is wrong, it can also do the exact opposite, suggesting alarming illnesses that are, in reality, highly unlikely. Conversely, it can convince patients who really ought to visit their doctor that it is safe to ignore their symptoms. Is there a better way?
Helping patients take action
The health service provider Babylon (www.babylonhealth.com) launched a symptom checker in 2016 that harnesses AI, in combination with human medical expertise, to give patients free, fast, accurate information on what they should do. “Broadly speaking, there are five outcomes: go to the emergency department; see your GP urgently; see your GP routinely; see a pharmacist; or [manage your condition using] self-care and, if things worsen, seek care,” explains Babylon’s Medical Director Mobasher Butt. “A lot of things [already existed] that gave information about different diseases, but patients just want to know what action they need to take.”
Since then, Babylon has launched a chatbot app that leads patients through a series of questions, in a similar way to a doctor or nurse, providing information on conditions that might cause those symptoms. The technology also powers part of the NHS 111 service.
Of course, the app can only work if it understands what it is told, hence the importance of training it in natural language processing. “If you say something like ‘my back is killing me’, clearly you’re talking about back pain and not murder,” Dr Butt says. But a computer might take it literally. If the system doesn’t understand what it is told, it gives no information rather than giving a guess that might be incorrect. “If your query can’t be managed through the AI, then you might be directed to have a conversation with one of our clinicians, or to ask a text-based question,” he says.
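The escalation logic Dr Butt describes can be pictured as a simple routing rule: proceed to automated triage only when the system is confident it has understood the query, and otherwise hand off to a human rather than guess. The sketch below is purely illustrative (the function name and threshold are hypothetical, not Babylon's actual implementation):

```python
# Illustrative sketch, not Babylon's actual code: route a patient query based
# on whether the natural language layer understood it with enough confidence.

def route_query(understood: bool, confidence: float, threshold: float = 0.8) -> str:
    """Return 'ai_triage' only when the system is confident it parsed the
    query; otherwise escalate to a clinician rather than risk a wrong guess."""
    if understood and confidence >= threshold:
        return "ai_triage"   # proceed to the question-based symptom checker
    return "clinician"       # hand off to a human conversation

# An ambiguous phrase like "my back is killing me", parsed with low
# confidence, is escalated instead of being answered incorrectly.
print(route_query(True, 0.95))  # ai_triage
print(route_query(True, 0.40))  # clinician
```

The key design choice mirrors the article: when in doubt, the system gives no automated answer at all, trading coverage for safety.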
Next on Babylon’s agenda is a tool that uses its AI technology to give a comprehensive health assessment. “The information gathered will generate a ‘digital twin’ of the patient,” Dr Butt explains. “You can then simulate different environmental conditions and see how it performs. Patients can see what would happen if they don’t stop smoking, don’t reduce cholesterol and so forth. [It can help people] feel more empowered about their health and be proactively engaged to, hopefully, prevent them from becoming sick.”
AI also offers the prospect of augmenting clinical practice, Dr Butt says. “Doctors have rigorous training, but even then, we don’t always get it right,” he notes. “There are many human factors that can lead to misdiagnosis – fatigue, inaccurate pattern recognition, even reliance on what you might have seen in the previous patient influencing the next decision. How do we use AI to prevent that from happening?”
The AI tool highlights which diseases are most likely to be causing the patient’s symptoms, and the clinician adds their acumen. “The machine doesn’t make [the] mistakes a human might,” he says.
Dr Butt also believes there is great potential for AI to replace low-level administrative tasks. “It allows us to make the best possible use of our doctors and nurses, using technology to augment their clinical practice, and offers the potential to significantly improve the safety and quality of healthcare,” he claims.
Predicting treatment pathways
Precision medicine and the advent of genomics, combined with additional information from tissue and computational pathology, mean it is becoming possible to predict the most effective treatment pathway for a patient, according to Arun Ananthapadmanabhan, Head of Products for Computational Pathology at technology company Philips. “The ultimate benefit, some years down the line, will be sustained improved patient outcomes,” he says.
In oncology, for example, Philips has AI-based research applications to identify and quantify tumours. By pinpointing the areas with the highest density of cancer cells for DNA extraction, the technology avoids false negatives caused by collecting too many normal cells. “Our TissueMark system quantifies samples and informs the pathologist whether there is sufficient tumour material. This large-scale data matching exercise is something the human mind cannot do. Similarly, in genomics, we have an application that maps genetic mutations from tumour samples to identify those that are clinically significant, and match the patient to relevant clinical trials.”
Philips is now moving from quantification to efficiency improvements, such as prioritising slides for diagnosis by circling regions of interest. “Instead of a human scanning the entire image, the computer will highlight where the tumour is,” Mr Ananthapadmanabhan says. “We think there could be a 30–40% efficiency improvement in some cases. The clinician can focus on treatment, rather than analysing slides.”
There is evidence that systems like this can improve accuracy. In a research project, pathologists under no time pressure were 96% accurate at detecting breast cancer metastases, while an algorithm alone had an accuracy of 93%. When the algorithm was combined with human review, accuracy jumped to 99.5%. “Machine screening with a human making the final diagnosis improves accuracy significantly,” Mr Ananthapadmanabhan says.
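A back-of-the-envelope calculation shows why pairing two imperfect readers helps so much. If (simplistically) we assume the human's and the algorithm's errors are independent, a case is missed only when both miss it:

```python
# Back-of-the-envelope check of the combined-accuracy claim.
# Assumption (labelled, and not exact in practice): human and algorithm
# errors are statistically independent.

human_acc = 0.96      # pathologists, no time pressure
machine_acc = 0.93    # algorithm alone

# A metastasis is missed only if BOTH the machine screen and the
# human review miss it: error = (1 - 0.96) * (1 - 0.93) = 0.0028.
combined_acc = 1 - (1 - human_acc) * (1 - machine_acc)
print(f"{combined_acc:.1%}")  # 99.7%
```

Under this idealised assumption the combined accuracy comes out at roughly 99.7%, close to the 99.5% reported in the study; in reality the two readers' errors are partly correlated, which is why the measured figure is slightly lower.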
Box 1. There’s an app for that
Interpreting scans more quickly and more accurately, and integrating data gleaned from multiple sources, is an area where AI is set to have a massive impact, not least because of the potential to guide treatment choices and reduce inter-observer variability by training the algorithm with huge numbers of previous scans.
AI software developer Optellum, for example, wants to improve the diagnosis of lung cancer. “Only around 15% of patients survive longer than five years, primarily because they are diagnosed too late,” says Chief Executive Officer Vaclav Potesil. “If you can detect it in stage 1a, the survival rate is 85%.”
Lung cancer is often found incidentally when patients are scanned for other reasons and suspicious pulmonary nodules are spotted – but most are not lung cancer. “It takes on average two years between the suspicion and the definite diagnosis, with repeat scans and biopsies,” Dr Potesil says.
Optellum’s platform automatically captures all suspicious lung scans anywhere in the hospital, and channels patients to a pulmonary nodule clinic. “The machine learning algorithm predicts the eventual diagnosis from the very first scan,” he says. Predicting which patients do not have lung cancer could save a lot of NHS resources. The system also flags those patients who should be sent to a surgeon straight away.
The algorithm was trained with data from 40,000 scans and 15,000 patients, linked to pathology results after resection. In future, Dr Potesil says, other clinical data could be added, such as age, sex, smoking history, and blood or breath biomarkers. A prospective multicentre trial is being run by Oxford University, and the system will be deployed at the first pilot sites at the end of the year. Importantly, no new scanners are needed: it uses scans that are already being taken. “It could even be expanded to other diseases such as COPD and interstitial lung diseases,” Dr Potesil claims.
Mumbai-based Qure.ai is developing AI systems for the automatic interpretation of CT head scans and chest X-rays. “In both, we want to automate a lot of the reporting,” Chief Executive Officer Prashant Warier says. The systems can spot abnormalities on 99% of chest X-rays, he says; with brain scans, they can detect skull fractures and bleeds, as well as pneumocephalus, hydrocephalus, and even some infarcts.
By prioritising those scans that appear abnormal for review by a radiologist, a lot of time and resources can be saved. “Only a few will be emergency cases that require immediate attention,” Dr Warier says. “The system can automatically read a scan in a few seconds, identify whether it is an emergency case or not, and identify which scan the radiologist should read first.”
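The prioritisation Dr Warier describes amounts to reordering the radiologist's worklist by the system's abnormality estimate. A minimal sketch, with entirely hypothetical scan IDs and scores:

```python
# Illustrative sketch of worklist triage: sort scans so those the system
# flags as most likely abnormal are read first. IDs and scores are
# hypothetical, not output from Qure.ai's actual system.

scans = [
    {"id": "X-1041", "abnormality_score": 0.12},
    {"id": "X-1042", "abnormality_score": 0.91},  # likely emergency case
    {"id": "X-1043", "abnormality_score": 0.47},
]

# Highest-risk scans move to the front of the radiologist's queue.
worklist = sorted(scans, key=lambda s: s["abnormality_score"], reverse=True)
print([s["id"] for s in worklist])  # ['X-1042', 'X-1043', 'X-1041']
```

No scan is dropped from the queue; the system only changes the reading order, so the radiologist still reviews every case.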
Meanwhile, AI diagnostic tools developer Ultromics is looking to improve the accuracy of stress echocardiograms. “They are only about 80% accurate, and 60% in some hospitals,” says Chief Executive Officer Ross Upton. Oxford University’s database of echocardiograms was used to develop a machine learning algorithm that Dr Upton claims is 90% accurate. The images are tied to patient outcomes, and used to train the algorithm to improve its predictive power and reduce false negatives. “If a clinician gets it wrong, the patient could either be sent home with coronary artery disease, or sent for unnecessary surgery,” he says. “We know exactly what happened to a patient a year later, so we can train our algorithm to make better predictions.”
Trials are set to begin later this year in a network of NHS hospitals, and Dr Upton hopes regulatory clearance will be granted early next year. Ultimately, Ultromics hopes the results will be instantaneous, saving the 15–20 minutes a clinician typically spends on reporting a scan. This could have a huge impact on waiting lists.
Ethics and security
Should we be worried about the ethics of this explosion in AI-driven technology? Catherine Joynson, Assistant Director at the Nuffield Council on Bioethics, cites some potential concerns: how reliable is it, and how will we know if something has gone wrong? “As humans, we don’t know how AI has come up with its answer,” she says. “How can we check it? If something goes wrong, who is accountable? And can it be trusted?”
Then there’s the thorny issue of data sharing. Dr Upton says the patients he has spoken to are happy with the idea of AI, at least for scan interpretation. “AI isn’t that scary if you are likely to get a more accurate diagnosis,” he says. “I think where patients are more wary is about what happens to their data. We need to work out how we can responsibly access data.”
And, of course, from the perspective of healthcare professionals, there are big questions. “Is this technology going to make me a better healthcare professional, is it going to complement my skills, or is it going to mean I am obsolete?” Ms Joynson says. “A positive outcome is if AI could help healthcare professionals with paperwork, and leave them more time for patients.”
Saving time – and money
AI can also offer real advantages in health economics terms. “You could introduce an innovation that just adds costs, randomly flagging patients as suspicious; even if many are false alarms, we would catch some extra cancers,” says Optellum’s Dr Potesil. “But the overall cost would be huge. [Instead,] we can improve outcomes while saving a lot of resources in the health system, which can then be deployed for better diagnosing and treating the patients who need it.”
Babylon’s Dr Butt can envisage a day when the computer-augmented clinician becomes commonplace. “In modern-day aviation, a pilot wouldn’t fly without the assistance of technology,” he says. “How do we progress healthcare to a place where the integration of AI is normal, standard practice?”
Declaration of interests
None to declare.
Sarah Houlton is a freelance science journalist