The ALETHEIA framework in radiation oncology

A collaboration with Rolls-Royce


The ALETHEIA Framework is a toolkit developed by Rolls-Royce to ensure that ethics and trustworthiness are integral to the use of artificial intelligence. The toolkit is freely available to everyone, and Rolls-Royce is currently collaborating with many different sectors, including oncology, music and education, to encourage the widest possible deployment of the framework.

Artificial intelligence ethics is an area of great complexity, and the ALETHEIA Framework aims to build public trust in the technology by reassuring organisations, people and communities that the ethical implications of AI have been taken into consideration, and that AI systems are fair and make trustworthy decisions.

Dr Marianne Aznar, a Senior Lecturer in Radiotherapy Physics at The University of Manchester, is currently working in collaboration with Rolls-Royce. We speak to her about this ongoing work and what it could mean for patient outcomes.

When I heard about the ALETHEIA Framework and that Rolls-Royce was working with artificial intelligence and ethics, I thought this was yet another chance for us in the radiotherapy community to learn from that field and to apply their work to our own processes.

Dr Marianne Aznar

Current collaboration 

 

Dr Marianne Aznar has been working with Rolls-Royce to develop a version of the ALETHEIA Framework for use within radiation oncology, a field that involves a large number of quality assurance processes. The collaboration aims to give oncologists and the wider healthcare workforce the tools to prepare for the digital future, through interaction with Rolls-Royce, who serve as a model in this area.

The collaboration began on Twitter. Dr Matthew Katz, a Radiation Oncologist from the US, picked up on a tweet from Rolls-Royce and asked whether the ALETHEIA Framework could be applied to radiation oncology. Rolls-Royce replied, suggesting that those working in oncology try the framework. With a new collaboration in view, Dr Katz reached out to Corinne Faivre-Finn at The Christie to find a physicist interested in AI. She set up introductions with Marianne, and from there Rolls-Royce and Marianne began working together on an oncology-specific version of the framework.

 

 

Video: How the ALETHEIA Framework is helping guide international oncology experts. Hear Dr Matthew Katz and Dr Marianne Aznar explain the ALETHEIA Framework (3 min 26 s).

The role of AI in radiation oncology 

Modern radiotherapy is highly personalised, with each treatment built on a 3D image of the individual patient’s anatomy. Marianne envisages that the use of AI within radiation oncology, and wider oncology, could help to increase the efficiency and quality of patient treatments. Currently, administering modern radiotherapy is a time-consuming process that requires numerous different experts for each patient.

Through the implementation of AI, it is hoped that some of these tasks could be performed automatically, relieving pressure on the radiotherapy professionals (clinicians, radiographers and physicists) who design and deliver treatments. By reducing this workload, AI would free those professionals to focus on other important tasks, such as spending more one-on-one time with patients.

The use of AI in radiotherapy could also help to standardise treatments. Currently, there is a certain amount of subjectivity in treatment planning and therefore a certain amount of human variation. It is hoped that AI could reduce these inconsistencies in the clinic and ensure a more efficient and consistent quality of treatment for patients.

What this could mean for patient outcomes 

By freeing up time, it is hoped that we will be able to introduce treatments that are even more sophisticated than those currently used clinically, and that offer greater individualisation, tailored to each patient.

At present, most treatment individualisation occurs at the start of a patient’s treatment journey. An efficient, AI-driven workflow would open up opportunities to repeat this individualisation every time a patient attends for treatment, in line with how their body has changed and responded to previous treatments.

Using AI in radiotherapy would open doors to radiation treatments that are even more refined with less toxicity and improved survival.

Dr Marianne Aznar

Current challenges

There are numerous challenges facing the safe implementation of AI in the clinic. These include continually monitoring the software and identifying and limiting any kind of bias. Additionally, assurances around job displacement are needed: NHS professionals need to be confident that there will be no job losses, and that training will be provided so they can use the new technology safely and confidently.

The ALETHEIA Framework is unique in that it considers the impact of AI implementation on the workforce.

Additionally, to meet the minimum criteria for the use of AI, a package must consider everyone involved, both patients and professionals. To ensure the trustworthiness and governance of the AI, data from patients’ treatments must be collected and built into the system. This must be done in a way that is safe and respects data privacy, and it requires trust from both health professionals and the public that these data will be used in a way that is acceptable to them.

Currently, even though solutions are becoming better all the time, they’re not at the stage where humans can just do a quick review, so it still takes almost as much time for a human to review as to do it from scratch. Until we can trust that the software is behaving accurately, without bias and without dropping performance, we will have to review everything. But it is becoming better all the time and will save us valuable time in the very near future.

Dr Marianne Aznar

Why Manchester 

Manchester’s large and diverse population lends itself to the development of a framework that assesses bias in the data and addresses inequalities that may have existed in previous datasets.

“It is essential that work is done to communicate with these patient populations and get them involved in projects such as these to ensure everyone, whatever their background, is represented in the AI model.”

Additionally, The Christie has a world-leading reputation in radiotherapy and in the implementation of new technologies. Building on this foundation, and on strong links to patient communities through the Manchester Cancer Research Centre, Manchester can continue to contribute to important collaborations such as this.

What can others learn 

Another lesson to be learnt from industries such as aerospace engineering is how to handle ‘automation bias’. Humans tend to become over-reliant on a computer to perform automated tasks as soon as they feel it performs well. One example is self-driving cars, where studies have shown that drivers come to trust the car to drive itself and stop paying attention.


Don’t be afraid to look at other domains and learn what they have done. I was so struck by the similarities between aerospace engineering and radiotherapy, and I think that shows how translatable and universal the issues surrounding AI are.

Dr Marianne Aznar

Future of AI 

It is essential that AI can be continually evaluated and refined. Both the treatments available and patient populations change over time, and AI needs to adapt to reflect this.

“When we implement AI at a larger scale, we need to keep evaluating it to very critically make sure there’s no automation bias and to enable it to be more performant. We need to continue working with it and critically appraise it.”

Most AI models are built from very specific populations in large academic hospitals. These tend to be groups with similar ethnic backgrounds, with marginalised communities under-represented. To counteract this bias, it is essential that the data collected represent all patient populations.

There’s a lesson to be learnt from radiology; parallels can be drawn for our own implementation of AI, in increasing trustworthiness around the ethics and reducing any associated bias.

Dr Marianne Aznar
