A new code of conduct for artificial intelligence (AI) and other data-driven technologies will ensure only the best and safest systems are used by the NHS.
The code, launched this week, encourages technology companies to meet a gold-standard set of principles for protecting patient data.
Drawn up with the help of industry, academics and patient groups, the code aims to make it easier for suppliers to develop technologies that tackle some of the biggest issues in healthcare, such as dementia, obesity and cancer. It will also help health and care providers choose safe, effective and secure technology to improve the services they provide.
The code will:
- Promote the UK as the best place in the world to invest in healthtech
- Provide evidence of what good practice looks like to industry and commissioners
- Reassure patients and clinicians that data-driven technology is safe, effective and maintains privacy
- Allow the Government to work with suppliers to guide the development of new technology so products are suitable for the NHS in the future
- Ensure the NHS gets a fair deal from the commercialisation of its data resources
The code will also mean the NHS is fairly rewarded for giving companies access to its data pool to build life-saving artificial intelligence systems.
The code is made up of 10 principles that set out how the Government will make it easier for companies to work with the NHS to develop new technologies, and what the NHS expects in return.
It will be regularly updated in partnership with industry and stakeholders to ensure it keeps pace with the market.
AI technology is already being used across the NHS to improve the early diagnosis of heart disease and lung cancer, to reduce the number of unnecessary operations performed due to false positives, to assist research by better matching patients to clinical trials, and to support the planning of care for patients with complex needs. Examples include:
- Moorfields/DeepMind – one million anonymised eye scans were shared with DeepMind under a research agreement that began in mid-2016. DeepMind’s algorithm is designed to find early signs of age-related macular degeneration and diabetic retinopathy
- John Radcliffe Hospital worked with their partner, Ultromics, to use AI to improve the detection of heart disease and lung cancer
- Imperial College London developed a new AI system that can predict the survival rates for patients with ovarian cancer
Launching the guide, Health and Social Care Secretary, Matt Hancock, said: “Artificial intelligence has the potential to save lives, but also brings challenges that must be addressed.
“We need to create an ecosystem of innovation to allow this type of technology to flourish in the NHS and support our incredible workforce to save lives by equipping clinicians with the tools to provide personalised treatments.
“AI must be used responsibly and our code of conduct sets a gold-standard set of rules to ensure patient data is always protected and the systems we use are some of the safest in the world.”
Dr Simon Eccles, NHS chief clinical information officer for health and care, added: “Parts of the NHS have already shown the potential impact AI could have in the future of the NHS in reading scans, for example, to enable clinicians to focus on the most-difficult cases.
“This new code sets the bar companies will need to meet to bring their products into the NHS so we can ensure patients can benefit from not just the best new technology, but also the safest and most secure.”
The publication has also been welcomed by healthtech companies.
Speaking to BBH, Dr Jabe Wilson, consulting director for text and data analytics at Elsevier, said: “The new AI guidelines for the NHS are to be welcomed as we see growing interest from governments and organisations in the ethical use of AI, and providing standards for auditing its use is one key aspect of this.
“It’s important we understand how training data is generated and gathered, so we can take steps to eliminate potential bias.
“Additionally, when AI is used in decision support systems within the NHS, it’s critical to provide a transparent and understandable rationale for the decision, to guarantee fairness and allow for an appeal or review to be granted.
“In doing this, the big challenge for the NHS will be identifying the provenance of data and understanding what the selection process was for the original research.
“Work is being done on creating artificial training data and looking at how to make decisions transparent, but it’s important the NHS puts in place systems capable of gathering and normalising data to ensure analysis can be conducted accurately.”
And Dr Nick Lynch, a consultant for The Pistoia Alliance and an expert in AI in healthcare and pharma, added: “The ethics of AI will be a key challenge over this next decade.
“To date, AI’s success has been in solving intellectual challenges, but the real test arrives as AI takes on ethical decisions.

“We must be able to show there are no unintended consequences in an AI-driven healthcare decision.
“The NHS’s guidelines on AI are a positive step in the right direction and we believe the life science and healthcare industries must now come together to solve these AI challenges so we can develop solutions that are understandable to both regulators and patients.”