Rules of the road: The need for new quality standards for AI technology in healthcare

Dr Dónal Landers and Dr Gareth Price

Shining a light on the patient’s diagnosis

In recent years, we have seen a profusion of AI, algorithm, and machine-learning technologies enter clinical practice. This has happened across the health and care sector, and cancer testing and treatment have been no exception, with some proven benefits. For example, AI has been shown to be capable of recognising patterns in scan images that the human eye would have difficulty detecting. These developments could open a new horizon for earlier diagnoses, as well as informing treatment choices. However, there are risks as well as rewards, which raises the question: are the policies and regulations we need in place?

Data matters

One core concern with the development of algorithms in clinical practice is the quality of the datasets upon which they are built. If an algorithm is built on old datasets, or on low-quality, low-fidelity data (in other words, information that is potentially out of date or inaccurate), it cannot consistently reach the best decisions for patients; the same is true of any other use of AI in a healthcare context. If such an algorithm were implemented in clinical practice, the technology may even do more harm than good, and the confidence and trust of patients and practitioners would be lost.
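To make this concrete, the sketch below shows the kind of automated checks that could be run on a training dataset before model development: recency, completeness and basic plausibility. It is a minimal illustration only; the column names, thresholds and checks are assumptions made for this example, not a published standard.

```python
# Illustrative only: automated quality checks on a tabular training dataset.
# Column names ("record_date", "age") and thresholds are assumptions made
# for this sketch, not a published standard.
from datetime import datetime, timezone

import pandas as pd

MAX_RECORD_AGE_YEARS = 5      # assumed recency threshold
MAX_MISSING_FRACTION = 0.05   # assumed completeness threshold


def audit_training_data(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality issues found in df."""
    issues = []

    # Recency: how much of the dataset is older than the threshold?
    age_years = (
        datetime.now(timezone.utc) - pd.to_datetime(df["record_date"], utc=True)
    ).dt.days / 365.25
    stale_fraction = (age_years > MAX_RECORD_AGE_YEARS).mean()
    if stale_fraction > 0:
        issues.append(
            f"{stale_fraction:.1%} of records are over {MAX_RECORD_AGE_YEARS} years old"
        )

    # Completeness: flag columns with too much missing data.
    for column, missing_fraction in df.isna().mean().items():
        if missing_fraction > MAX_MISSING_FRACTION:
            issues.append(f"column '{column}' is {missing_fraction:.1%} missing")

    # Plausibility: flag physiologically impossible values (one example check).
    if ((df["age"] < 0) | (df["age"] > 120)).any():
        issues.append("column 'age' contains out-of-range values")

    return issues
```

Checks like these are no substitute for clinical curation, but they turn the age, quality and accuracy of a dataset into auditable properties rather than assumptions.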

Despite the huge number of algorithms developed by researchers and commercial companies, relatively few are finding their way into clinical use. In part, this may be due to a lack of trust in the way algorithms are trained (for example, the quality of the underlying data) and validated (for example, whether they have been compared against existing standards of care). Furthermore, it is important to know why an algorithm is making a particular recommendation, and for the algorithm to contribute knowledge that is not already well known to clinical teams. These are the challenges facing algorithm developers: they must prove the usefulness of their technologies before those technologies are likely to enter clinical practice.

New research, greater transparency

The University of Manchester’s digital Experimental Cancer Medicine Team (digital ECMT) connects patients, clinical teams, technology and science, bringing researchers, clinicians and patients together to innovate in early clinical trials. Our aim is for patients, carers and families to work in partnership with researchers on clinical trials and new technologies.

Our team at the digital ECMT, as part of a collaboration with The Christie, developed an ethically designed algorithm as part of the CORONET.AI Decision Support System: an online tool that supports decisions on admitting or discharging cancer patients presenting with symptoms of COVID-19, and indicates the likely severity of their illness. The tool utilises real-world patient data relating to the admission and discharge of cancer patients presenting with symptoms of COVID-19. In addition to satisfying the General Data Protection Regulation requirements relating to the transparency of decision-making, our algorithm also meets the wider ethical need for clearly interpretable and explainable results.

As a collaborative team, we have worked hard to ensure that the algorithm is ‘transparent’, so that the clinician can clearly interpret the results and, in turn, that these can be explained to the patient in ways they can understand. The ethical imperatives that underpin this are based on the key medical ethical principles of autonomy, beneficence, non-maleficence and justice.
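To illustrate what ‘clearly interpretable and explainable’ can mean in practice, consider a simple logistic regression, where every recommendation can be decomposed, term by term, into the contribution each input made to the final score. This is not the CORONET algorithm itself; the feature names and data below are hypothetical, chosen only for the sketch.

```python
# Illustrative only: a transparent model on synthetic data, not the CORONET
# algorithm. Feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["crp", "albumin", "platelets"]

# Synthetic training data standing in for real-world admissions records.
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For one patient, each feature's contribution to the log-odds is simply
# coefficient * value, so a recommendation can be explained term by term.
patient = X[0]
contributions = model.coef_[0] * patient
for name, value in zip(features, contributions):
    print(f"{name}: {value:+.2f} to the log-odds of admission")
print(f"intercept: {model.intercept_[0]:+.2f}")
```

Because each input’s contribution is just its value multiplied by a fixed coefficient, a clinician can see exactly which factors pushed the recommendation towards admission or discharge, and explain that to the patient.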

Clearly, no ‘black box’ algorithm (one that cannot show the reasons behind its decisions, based on the data) can be acceptable for making clinical decisions. Firstly, clinicians will quite rightly refuse to work with algorithms of this type. Secondly, patients would be leaving vital decisions about their health and treatment to a process that they cannot understand and therefore cannot trust. These considerations also need to be seen in the context of the rapid proliferation of algorithms in both formal and informal health spaces.

It is for these reasons that leaders in our health and care system need to put urgent thought into the standards that we, as a society, should demand from any deployment of artificial intelligence in clinical decision-making processes.

The need for new ‘rules of the road’

Crucial to the successful deployment of AI and algorithms into our health and care system is the development of minimum acceptable standards for their construction and use. This is essential both to ensure the quality of the task that they carry out and to maintain the confidence of the clinician and the patient.

These standards must include:

  • A robust framework setting standards for the age, quality and accuracy of the underlying datasets
  • Clear standards for the transparency of the operation or the use of algorithms in clinical decision-making – decisions that are clear, interpretable, and explainable
  • An agreed pathway for auditing and contesting decisions where AI has been used to determine a course of action (a hypothetical machine-readable sketch of such a checklist follows below)

We expect a rapid proliferation of AI tools over the coming years, making it vital that regulators and health system leaders act now to establish the ‘rules of the road’ for new entrants to this market.
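As a thought experiment, such minimum standards could even be expressed in a machine-readable form that a deploying organisation checks a tool against before it reaches patients. The field names and thresholds below are hypothetical, not an existing regulatory instrument.

```python
# A hypothetical, machine-readable deployment checklist for a clinical AI
# tool. Field names and thresholds are illustrative, not a real standard.
from dataclasses import dataclass


@dataclass
class DeploymentChecklist:
    dataset_max_age_years: float           # age of the underlying datasets
    dataset_quality_audited: bool          # quality/accuracy framework applied
    decisions_explainable: bool            # per-decision explanations available
    validated_against_standard_care: bool  # compared with existing practice
    audit_trail_enabled: bool              # decisions can be audited and contested

    def ready_for_deployment(self) -> bool:
        """True only if every minimum standard on the list is met."""
        return (
            self.dataset_max_age_years <= 5
            and self.dataset_quality_audited
            and self.decisions_explainable
            and self.validated_against_standard_care
            and self.audit_trail_enabled
        )
```

The value of a form like this is not the code itself, but that each criterion becomes an explicit, checkable property rather than a vague aspiration.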

Three national regulators have a key part to play:

  1. The Medicines and Healthcare products Regulatory Agency (MHRA) is responsible for ensuring the safety of medical devices, which includes the use of software such as algorithms
  2. The National Institute for Health and Care Excellence (NICE) is responsible for recommending the ways that tools (medicines, diagnostic tests, treatments, etc.) are used in healthcare
  3. The Care Quality Commission (CQC) has an extensive role in monitoring and evaluating the deployment of tools and technologies in healthcare settings.

All have a role to play in the development of a minimum standards framework for the entry of AI/algorithms into the lucrative clinical market. Working together, and with the advice of experts in both the health research and commercial sectors, they can ensure that only the safest and most effective products are made available to patients and clinicians.

The need for such standards is recognised beyond the healthcare sector. For example, a new industrial standard for AI is being developed by the British Standards Institution, and substantial work is under way in the European Union to enhance its regulation of AI. Clinicians and researchers should partner with industry and regulatory bodies to keep quality and patient care at the heart of health innovation.

Huge promise

Ultimately, the use of AI and algorithms to improve healthcare holds huge promise. Done well, these new tools will bring better outcomes, quicker decisions, lives saved, and years added to lives. In cancer, we can expect AI to continue to increase our ability to detect tumours earlier and to treat them more effectively.

With a responsible and ethical approach to the development and deployment of AI in healthcare settings, we can expect these new technologies to revolutionise cancer testing and treatments.

We now need robust and clear leadership from policy-makers and regulators to develop new national standards and build in the safeguards we need, to grow clinical confidence and public trust in these remarkable new tools.

About the Authors

Dr Dónal Landers

Dr Dónal Landers is Director of the digital Experimental Cancer Medicine Team, Cancer Research UK Manchester Institute. He is a Clinician and a Fellow of the Faculty of Pharmaceutical Medicine, with over 25 years of experience and achievement in early clinical development, clinical practice, healthcare and pharma consulting, and delivering digital health innovation and solutions.

Dr Gareth Price

Dr Gareth Price is a Senior Lecturer in the Division of Cancer Sciences at The University of Manchester and a clinical scientist at The Christie. His research focuses on the use of data collected during routine care to derive clinical insight, identify potential improvements in treatment, and provide evidence of the impact of innovations and changes to practice on patients’ clinical outcomes.
