In a prior post, I endeavoured to introduce the importance of patient and public involvement (PPI) in the delivery of cancer services, as well as the many personal reasons why I am such a passionate and unashamed advocate of it.
Simply put, having received peer support during my own cancer journey, I wanted to help others. To that end, today I’m an active and proud member of a volunteer army created to provide patient advocacy on new and existing projects impacting the patient. It is my experience that patient and public participation and oversight consistently improve the quality of research in myriad ways. This is a view shared and best articulated by Professor Dame Sally Davies, the former Chief Medical Officer for England:
“No matter how complicated the research, or how brilliant the researcher, patients and the public always offer unique, invaluable insights. Their advice when designing, implementing, and evaluating research invariably makes studies more effective, more credible and often more cost-efficient as well.”
So with everyone hopefully feeling sufficiently well-schooled and bought into the value of PPI, the next question is how it might be applied in the fast-changing world of machine learning and artificial intelligence (AI). Similarly, mindful of the complexity involved in devising, developing, and applying new AI, how might we address a potential knowledge or expertise gap on the part of the patient advocacy teams assigned to assist? I’ll start with expertise…
Before retiring, I spent thirty years as a telegraphy engineer repairing teleprinters and fixing line circuits. While a skilled profession, it largely dealt with hardware, and I never knowingly worked with any complicated algorithms or machine learning. You might therefore wonder, quite reasonably, how someone like me could meaningfully contribute to any AI evaluation. However, by the same token, until my own diagnosis and my subsequent work for Guy’s, an equivalent question could have been asked quite legitimately of my understanding of oncology and clinical research. And yet, here I am, on an almost daily basis, advising some of the smartest people in healthcare academia, technology, and service delivery on what they need to do to ensure that their often hugely complicated new cancer-related project passes muster.
The reason I feel so confident that patient groups composed of lay individuals like me can meet this challenge is that the acid test for the effective utilisation of PPI - irrespective of the research, technology or process being evaluated - is essentially the same. In short, can the parties involved demonstrate shared decision-making and adherence to the simple patient principle of ‘no decision about me, without me’ in their project’s design and execution? If patient groups have been consulted throughout and their feedback incorporated into the final product, then this is quite easily assessed. It also creates an environment in which the onus is not necessarily on patient advocacy groups to understand the AI, but on those proposing its introduction to articulate and qualify its benefits appropriately. And this is imperative.
According to Ipsos MORI, the British public sits somewhere between ignorance and suspicion when it comes to automation. Recent polling revealed that 54%^ would not feel comfortable with AI making decisions that affect them, and although not measured in this study, it is highly likely that our risk appetite is at its lowest in scenarios pertaining to our health. After all, very few of the possible consequences of an algorithm incorrectly predicting the next word in an email or text message (even one to the boss!) carry the same hazard as a machine making an unqualified call on an aspect of our healthcare.
In this context, and given how emotive a subject AI evidently represents, winning hearts and minds will require the clear and succinct articulation of the quid pro quo - the ‘what’s in it for me and others like me’:
- “Your algorithm needs my data? Forgive me, but do you need all of it, and how do you intend to use it and why?”
- “Your AI will enable my doctor to take on more patients? Okay, but explain why that won’t dilute the quality of my care?”
- “Your automation saves the hospital time and money? Great, where’s the patient dividend?”
- “Your technology will improve patient outcomes? Fabulous, show me your workings!”
By consulting patient representatives at each step of product development, and by acknowledging, documenting, and addressing in easy-to-understand terms the types of questions presented above, the discourse naturally moves beyond often intimidating mathematical formulas and technical terminology to a conversation in which everyone speaks the same language. Likewise, this essential process of demystification - of layering what I regard as patient empathy into our AI - should culminate in reduced cynicism and an improved likelihood of acceptance. That means less returning to the drawing board, shorter delivery times, and a greater chance of overall project success.
And this, fundamentally, is what we are all looking to achieve. Each of us living with cancer has our own unique disease. Furthermore, with each new test performed, image taken, or clinical report produced, the number of data points and permutations requiring analysis and potential action by our care teams grows rapidly. AI, if executed properly, promises an exciting future in which our hard-working clinicians are able to make more timely sense of this complexity, so that we can all benefit from a higher standard of care and, with it, improved outcomes. Let’s work together to realise this promise.
This is a guest post by Alan Quarterman. He serves as a patient advocate at Guy’s and St Thomas’ Hospital in London, United Kingdom and is an integral member of the Guy's and St Thomas' NHSFT, King’s Health Partners and Inspirata steering committee and project team evaluating how oncology AI can improve patient clinical trial matching. For more information on this collaboration click here.
^ H. Archer, R. Writer-Davies, M. McGeoghegan, “AI, Automation and Corporate Reputation”, Ipsos MORI, 2018.