Report
AI and healthcare
There are various applications of Artificial Intelligence (AI) in healthcare, such as helping clinicians to make decisions, monitoring patient health, and automating routine administrative tasks. This POSTnote gives an overview of these uses and their potential impacts on the cost and quality of healthcare and on the workforce. It summarises the challenges to wider adoption of AI in healthcare, including those relating to safety, privacy, data-sharing, trust, accountability and health inequalities. It also outlines some of the regulations relevant to AI, and how these may change. As healthcare is a devolved issue, policies on healthcare AI differ across the UK. This POSTnote focusses on regulations and policies relevant to England.
There is increasing interest in the use of AI in healthcare among academics, industry, healthcare professionals, and policymakers. AI has the potential to improve health outcomes and offer cost savings through reducing the time spent by staff on routine work. While some AI systems are commercially available, few are currently used widely in the NHS. Most AI products for healthcare are still at the research or development stage, with some being trialled or evaluated in NHS settings.
In the 2017 Industrial Strategy the UK Government stated its aim to use data and AI to “transform the prevention, early diagnosis and treatment of chronic diseases by 2030.” In 2018, it invested £50m in five new centres of excellence for using AI to improve diagnostic imaging and pathology, with a further £50m allocated as part of its long-term response to the COVID-19 pandemic.
Improved use of AI and digital healthcare technologies is identified as a priority in the 2019 NHS Long Term Plan. In 2019, the Government established NHSX, a new unit responsible for setting policy and best practice around the use of digital technologies in England. This included the creation of an AI Lab with £250m of funding to support the development and deployment of AI technologies in the NHS and care system.
Key Points
- The capabilities of AI systems have improved in recent years due to increasing computing power, greater availability of training data, and development of more sophisticated algorithms using techniques like deep learning.
- Automation of administrative and clinical tasks using AI could reduce the costs of healthcare and increase productivity. AI systems have the potential to make diagnoses more accurately and quickly than clinicians. This could allow patients to access earlier treatment, improving health outcomes and reducing treatment costs.
- Despite these potential benefits, some stakeholders have raised concerns that the use of AI risks dehumanising the healthcare system. In addition, real-world operating conditions may differ from those expected in development, leading an AI system to perform worse than expected, or to give dangerous recommendations.
- Few studies have examined the performance of AI systems in real-world clinical settings.
- Healthcare staff may need new skills and technical knowledge to operate and understand AI systems. New, more specialised roles may be created.
- Large, high-quality datasets are needed to develop AI systems. Developers often use patient data, such as medical images, gathered by healthcare providers. Surveys suggest a lack of awareness among the public of how patient data is used, and scepticism towards sharing it.
- The need to share large datasets with external developers during AI development may increase the risks of a data breach. There are additional cyber-security risks which are specific to AI systems.
- Various laws and principles govern the use of patient data, including the EU General Data Protection Regulation, Common Law Duty of Confidentiality, and Caldicott Principles.
- The quality and organisation of data varies widely between different NHS services, with some parts of secondary care still using paper records. Many IT systems used in the NHS are unable to communicate with other systems, making it difficult to gather data in a consistent way.
- Currently, most AI systems provide recommendations to clinicians, who balance these against their knowledge and experience. If a recommendation produced by an AI system led to a patient being harmed, there could be legal consequences for the clinician, healthcare provider, and AI developer. There is a lack of precedent for how such a case would be resolved.
- AI systems could provide more consistent recommendations of treatments or diagnoses, reducing health inequalities. However, there is a risk of AI systems exhibiting ‘algorithmic bias’, providing recommendations that discriminate against certain demographic groups.
- Various organisations oversee regulations and standards in the development, implementation, and use of AI systems. Some stakeholders view existing regulatory processes as difficult to navigate, and attempts are being made to streamline these processes.
- Future UK regulations relevant to AI systems used in healthcare will be developed under provisions of the Medicines and Medical Devices Bill 2019-21.