Artificial intelligence concerns me. People still tend to worry that their livelihoods will be replaced by an all-powerful, all-knowing artificial intelligence (AI) system, at a time when the world’s largest technology companies (Amazon, Google, Apple), with almost unlimited resources, still cannot make a device that works effectively as a voice-activated media player.

Although voice-activated devices like Alexa, Google Home and HomePod are improving each year, the software they use is still unable to adequately understand context or ‘learn’ from its previous mistakes. Human users still have to learn how to speak in order to be understood by these devices (rather than vice versa), in much the same way that we have to learn how to use any tool, be it a smartphone, a cooker or a spoon.

In medicine, at least, we are technologically a long way from being replaced by an artificial intelligence system. The NHS is notoriously slow to change, in 2017 holding the dubious honour of still being the world’s largest purchaser of fax machines [1]. AI systems also tend to be most useful for predicting a single, narrow task, e.g. is my shockwave treatment going to be successful? Is there an area of prostate cancer on my MRI scan? They can only predict outputs based upon their previous experience, with the training outcomes decided by humans. Therefore, at least at present, I cannot envisage a device or system that could replace a network of human specialist doctors by assimilating symptoms, signs and investigations to produce a differential diagnosis, and then deliver and monitor a treatment.

Medical proponents of ‘artificial intelligence’ systems include companies such as Babylon. In June 2018, Babylon presented data claiming that its ‘AI’ chatbot “outperforms an average GP” in the MRCGP exam [2]. This ‘study’ was not peer reviewed and was ‘published’ by the company on its own website (in the marketing assets section) [3]. To give the ‘study’ further credibility, the findings were presented at the Royal College of Physicians in London. Journalists lapped this up, creating a lot of free publicity for Babylon. However, when you delve into the detail, Babylon had not been given the MRCGP exam and so had no knowledge of what the test paper contained. Babylon admitted to using sample questions from the Royal College of General Practitioners (RCGP) website, together with vignettes it had created itself for testing against selected general practitioners. The vignettes were also translated by Babylon employees into a form the chatbot could process, rather than being presented to it directly. At the time of writing, a peer-reviewed paper is still to be published. For those interested, Enrico Coiera, Professor of Health Informatics, provides a good critical analysis of the ‘published’ paper’s shortcomings [4].

There have also been concerns regarding the use of personal data without users’ authorisation. Infamously, Facebook has been using its AI systems to collect and then sell the personal data of its users, including data on people who had not even signed up to Facebook [4]. The Royal Free NHS Foundation Trust was warned by the Information Commissioner’s Office that it had failed to comply with the Data Protection Act when it provided the details of 1.6 million patients to Google DeepMind [5]. A privacy impact assessment was only carried out after the data had already been handed over to Google.

In healthcare, we must ensure that artificial intelligence claims are robustly tested with valid data, and that system evaluations are not only repeatable but also independently validated prior to clinical use. This is potentially problematic with any AI system, especially those that are continually learning or that produce patient-specific predictions. At present, most healthcare AI systems are machine learning systems trained on a specific training dataset and evaluated on test datasets. Independent validation on external datasets is therefore possible, and should be mandatory prior to clinical use.
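As a minimal sketch of what such validation might look like in practice, the Python example below (using scikit-learn) fits a simple model on one institution’s data, reports performance on an internal held-out test set, and then re-evaluates the same fixed model on an external cohort from a different institution. The file names and the ‘outcome’ column are hypothetical placeholders, not a reference to any real system.

```python
# A minimal sketch: internal test performance vs. external validation.
# File names and the "outcome" column are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Data from the developing institution (hypothetical file)
internal = pd.read_csv("internal_cohort.csv")
X, y = internal.drop(columns="outcome"), internal["outcome"]

# Internal split: the developer's own training and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Internal test AUC:",
      roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# External validation: a different institution's data, never seen in development
external = pd.read_csv("external_cohort.csv")
X_ext, y_ext = external.drop(columns="outcome"), external["outcome"]
print("External validation AUC:",
      roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1]))
```

The point of the second evaluation is that performance on the developer’s own test split can flatter the system; a drop in performance on the external cohort is exactly the kind of finding that should be surfaced before clinical use.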

Just as a pharmaceutical company has a vested interest in not publishing its negative trials, or trials where harm occurred, an AI company is likely to hold back data suggesting that people came to harm as a result of its algorithms, or that its evaluations were negative. All organisations have a duty to protect their own staff and interests; in healthcare, however, the cost of such protection is measured in morbidity and mortality. This fundamental conflict of interest can easily lead to patient harm if the technology industry is left unchecked.

Clinically, there are further potential complications with the use of AI systems. Currently, most AI systems are used as an adjunct to help clinicians make a clinical decision or interpret an image. However, if the AI system in use is proven to be as good as an average clinician at interpreting a given clinical situation, this will likely bias the clinician towards agreeing with the AI system. If the clinician disagreed with the AI interpretation and was later found to have missed a diagnosis or given a different treatment, leading to harm, would the clinician have to prove why they did not agree with the AI system? Yet neither the AI system nor its designers can explain how the system came to its decision or output; the designers can only explain the logic of the system’s design, along with the predicted accuracies on a given test population of data.

The ‘black box’ nature of AI systems could also present new medico-legal scenarios. If harm occurred to a patient as a result of an AI system, e.g. a wrong image interpretation or a misdiagnosis, who is liable for the error? The clinician using the AI system? The organisation that built it? The organisation implementing it, or the people who validated the system in an external population? [6] This may become even more complicated with the advent of continually learning AI systems.

Artificial intelligence has the potential to greatly improve medical care. However, we have to be just as wary of new technological innovations as we are of new drugs and new medical devices. Whilst new drugs are already subject to extensive regulation and rigorous testing, there is very little, if any, regulation of artificial intelligence systems. Knowledge of AI system design, evaluation and limitations will be a vital skill for clinicians in all branches of medicine.

 

References

1. DeepMind Health Independent Review Panel Annual Report, 2017.
2. www.gponline.com/babylons-ai-outperforms-average-doctor-mrcgp-exam/article/1486258
3. https://marketing-assets.babylonhealth.com/press/BabylonJune2018Paper_Version1.4.2.pdf
4. https://coiera.com/2018/06/29/paper-review-the-babylon-chatbot/
5. https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2017/07/royal-free-google-deepmind-trial-failed-to-comply-with-data-protection-law/
6. House of Lords. ‘AI in the UK: ready, willing and able?’ 2018.

 

CONTRIBUTOR
Ivo Dukic

University Hospitals Birmingham, UK.
