The hype around AI in medicine is ubiquitous. AI has the potential not only to optimise workflow and reduce the administrative burden in medicine; deep learning algorithms also offer remarkable scope for precise image recognition. Amid the uproar, residual concerns remain about what increased dependence on AI will mean for doctors. Questions surrounding liability and bias persist unanswered.
At first glance, the opportunities seem endless. AI algorithms can spot subtle details too minute for the human eye. Cancer detection remains a hot topic: Google is currently developing AI algorithms that reconstruct 2D X-ray images of the lungs into a 3D model, allowing the entire structure of the lung to be visualised and a possible tumour located.
Other areas of potential benefit include assistance with patient selection for surgical intervention. In managing aortic stenosis, AI-assisted echocardiography, CT, or MRI could give detailed insights into leaflet mobility and the outflow tract in patients with less severe disease, helping to determine whether they will benefit from valve replacement rather than continuing medical therapy alone. In addition, identifying changes in left ventricular function and fibrosis could allow cardiologists to intervene earlier. Basing such decisions on clinically valid input may enhance doctors' ability to stratify patients into risk groups.
Time efficiency is crucial in radiology, and AI has the potential to transform our approach. AI can automatically flag serious abnormalities, such as brain haemorrhage, for priority reporting; it can also enable optimal and efficient allocation of workflow and faster, safer scheduling of patient scans.
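The worklist triage described above can be sketched as a simple priority queue: AI-flagged critical studies jump ahead of routine cases, which are otherwise reported in arrival order. The accession numbers and the binary "flagged" signal here are illustrative assumptions, not drawn from any real reporting system.

```python
import heapq

def triage(worklist):
    """Order studies for reporting: AI-flagged critical cases first,
    then by arrival order. Each study is (accession, flagged, arrival)."""
    heap = [(0 if flagged else 1, arrival, accession)
            for accession, flagged, arrival in worklist]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

worklist = [
    ("CT-1001", False, 1),  # routine chest CT
    ("CT-1002", True,  2),  # AI flags a possible haemorrhage
    ("CT-1003", False, 3),  # routine follow-up
]
print(triage(worklist))  # ['CT-1002', 'CT-1001', 'CT-1003']
```

The design point is that the AI score only reorders the queue; every study still reaches a radiologist, keeping the clinician as the backstop.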
The main drawback of heightened sensitivity is the detection of subtle changes of indeterminate significance, and with it overdiagnosis. Removing the patient from the centre of this endeavour may well create hugely sensitive systems, but it risks increasing rates of subclinical or indolent findings.
To illustrate overdiagnosis: not all ground-glass opacification on chest CT is COVID-19, but an AI model could persistently associate such findings with COVID-19 if that was the ground-truth diagnosis most often assigned when the algorithm was trained. Any outcome assessment therefore cannot rely solely on imaging. Just as in clinical practice, the diagnosis is made using additional information such as patient age and symptoms.
Another major challenge is knowing what to do when early, fine changes are detected. For example, machine learning has been used to analyse brain MRI for early ischaemic stroke within a narrow window from symptom onset, with impressive sensitivity. Although this holds the promise of early diagnosis, we have yet to work out how the very subtle parenchymal brain alterations detected by AI will evolve in terms of gross neurological outcomes. Treatment may end up being given in the absence of a well-defined abnormality, and such discordance can lead to treatment-associated morbidity as well as patient confusion and mistrust.
Medical liability issues may become inevitable if AI becomes the standard of care and early, subtle findings are subsequently missed. Will overreliance on neural networks render doctors mere agents of software? How will patients react, and how will this shake-up affect the doctor-patient relationship? Is there an AI company so confident in its algorithm that it is willing to accept medical negligence claims, or will there always be a clinical backstop, with AI used as an adjunct rather than a replacement?
This scenario reinforces the fact that, in many cases, input from a clinician will be needed. A radiological examination is not the be-all and end-all of the diagnostic pathway.
“An automated route forward relies on a vast, comprehensive data set,” says Farzana Rahman, CEO of Hexarad.
Swathes of data cannot be gathered without running into ethical issues around the ownership of big data and the protection of patient privacy. Selection bias can arise if training data come from a population that does not accurately reflect the population as a whole; here we should learn from the obstacles faced in the debate over facial recognition technology. Other sources of data bias include curation bias, where doctors choose which angles to image from, and negative set bias, where data sets over-represent positive or interesting examinations.
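One practical guard against this kind of selection bias is to compare subgroup proportions in a training set against those of the target population before training begins. A minimal sketch, in which the age bands, reference proportions, and the 50% under-representation threshold are all illustrative assumptions:

```python
from collections import Counter

def representation_gap(train_labels, population_props):
    """Flag subgroups whose share of the training set is less than half
    their share of the reference population. Returns a dict mapping
    each flagged subgroup to (training proportion, population proportion)."""
    counts = Counter(train_labels)
    total = sum(counts.values())
    flagged = {}
    for group, pop_p in population_props.items():
        train_p = counts.get(group, 0) / total
        if train_p < 0.5 * pop_p:  # illustrative threshold
            flagged[group] = (round(train_p, 3), pop_p)
    return flagged

# Hypothetical training set skewed toward younger patients
train = ["18-40"] * 70 + ["41-65"] * 25 + ["65+"] * 5
population = {"18-40": 0.35, "41-65": 0.40, "65+": 0.25}
print(representation_gap(train, population))  # {'65+': (0.05, 0.25)}
```

Such a check does not remove bias, but it makes under-representation visible before a model is trained and deployed.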
Medical data constitute fertile ground for profit, so their exploitation and misuse need to be guarded against. Radiologists will be on the front line when it comes to conflicts of interest; we need institutional oversight to ensure that honest commercial decisions are made. And if we do strike the right balance in harnessing the resources of AI, how can we ensure that any drastic improvements in life outcomes are spread evenly across the globe?
On the precipice of an AI-assisted revolution in diagnostic imaging, we must anticipate the potential unknowns of this technology as well as its threats. The hype around AI demands our full attention and calls for ethical standards that prioritise not technological advancement but the benefit to humanity.