In August, the NHS announced that it will be investing £250 million in a new national AI laboratory, with a focus on earlier cancer detection, developing new treatments and lightening NHS staff workloads. The news was welcomed by many, with NHS Cancer Director Professor Sir Mike Richards predicting its “instrumental” role in urgently needed upgrades to unsafe screening IT systems. Meanwhile, others have accused UK Health Secretary Matt Hancock of being “obsessed with technology” and dismissed the announcement as tech for tech’s sake.
Now the dust has settled, human+ consultant Oliver Cook provides an overview of why we should embrace the NHS’s new focus on AI.
Innovating AI

Developing AI and implementing robotic process automation (RPA) in healthcare is not about creating technology for the sake of technology. The primary focus of any project funded by the NHS AI lab should always be solving the problems faced by patients and staff more efficiently and more effectively — and, as a byproduct, likely more cost-effectively.
The potential benefits of innovative AI and automation in the NHS are substantial: from analysing the data of thousands of patients to diagnose developing conditions earlier than previously possible, to apps designed to encourage users to lead healthier lifestyles. AI also has the power to increase the speed at which new drugs are developed and tested, processing data far faster than humans can. But for these changes to continue, we will have to learn to trust the NHS with the storage and use of our medical data.
Eliminating Bias from Healthcare

One of the biggest challenges the NHS will have to navigate to ensure the success of the AI lab is overcoming the deep-rooted biases prevalent in healthcare. While automation theoretically lacks the subjectivity responsible for medical biases, it can only be as objective as the data it is ‘taught’ with. It is paramount that AI is taught to work objectively — analysing symptoms, making diagnoses and recommending treatment without taking into account factors such as gender or race, which all too often affect the care patients receive. Ensuring the data used to ‘teach’ automated systems is sourced from a wide range of socio-economic groups is key to their effectiveness.
Exciting Innovations

AI is already having a positive impact on healthcare, and the number of new and exciting innovations being introduced to the industry is growing quickly. For example, Google’s DeepMind AI can now diagnose 50 common eye problems as effectively as a doctor. Another exciting development from the Google Health team is an algorithm, trained on 45,000 patient CT scans, which diagnosed 5% more cancer cases while producing 11% fewer false positives than a team of six radiologists.
Companies such as BenevolentAI are making impressive strides in AI-driven clinical research. Its technology is capable of reading the two million peer-reviewed research papers published each year, designing new molecules, and generating hypotheses for repurposing existing clinical trial data — demonstrating how AI removes human limitations and opens up healthcare to far greater data analysis and faster development of new treatments.
Protecting Patients’ Personal Data

Last, but certainly not least, it is worth addressing the issue of AI and data security. Over the last few years, the NHS has demonstrated that it is not immune to data breaches, leaving some concerned about how AI could affect the privacy of patients’ data. However, it is important to remember that AI’s (or, more specifically, automation’s) purpose is to imitate tasks previously completed manually. For this reason, as long as the NHS takes sufficient measures to protect data in its existing tools, automation will be just as safe as traditional methods, if not safer, thanks to the elimination of human error.

With strict data regulations such as GDPR already safeguarding our data, we can rest easy knowing AI and automation will be held to the same standards as their human counterparts. Furthermore, trust in the NHS’ safeguarding of patients’ data is necessary to ensure automation can be used to its full potential. Without large amounts of diverse patient data to learn from, AI will be unable to identify new treatments and diagnose medical conditions with the efficiency machine learning makes possible.

The government’s commitment to investing £250 million in innovating healthcare with AI will undoubtedly lead to more exciting discoveries designed to improve the prevention, diagnosis and treatment of disease. From preventing future data breaches to ensuring the data used to power automation products is not tainted by conscious or unconscious bias, there are a few challenges the NHS, and the HealthTech industry as a whole, must overcome before everyone can fully benefit from these innovations. But as long as these issues are continually highlighted and addressed, we should embrace the NHS’ new focus on AI, knowing the positive impact these innovations could go on to have on millions of lives.