Introduction
CAD (Computer-Aided Detection) AI, tools that support the detection, triage and classification of findings in medical images, has been the focus of $5.6B of investment in a bid to combat the myriad pressures facing the medical imaging industry. Medical imaging service lines are struggling in the face of increasing procedure volumes and intensifying workforce shortages, coupled with a fundamental need to do more with fewer resources. This is evident in the Royal College of Radiologists' (RCR) 2023 clinical radiology workforce census report, which notes that “the radiology workforce grew by 6.3% in 2023. Yet the rate of CT and MRI scans surged by 11% in the same timeframe.”
CAD AI tools are intended to help radiologists work more efficiently and deliver higher-quality diagnoses, with some use cases offering significant improvements to patient care outcomes, such as stroke triage and FFR-CT solutions that create efficiencies in the cardiac care pathway. However, beyond these specific use cases, the average time savings from CAD AI are often relatively small and do little to mitigate the major challenges radiologists and institutions are facing.
To drive significant improvements, institutions and radiologists alike should be evaluating solutions that build on CAD, such as autonomous AI technology. Autonomous AI is defined as “AI that can act without human intervention, input, or direct supervision.” Automating the triage of clinically insignificant studies off the radiologist’s worklist and into the preliminary or final report stage could free up a significant amount of reading time currently required of an already over-stretched workforce.
Some AI vendors claim that autonomously reporting chest X-rays (CXRs) with no abnormalities could automate 15% to 40% of daily reports, allowing radiologists to focus on cases with pathologies.
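To make the triage concept concrete, the sketch below illustrates, in purely hypothetical Python, how studies might be routed once an AI model has scored them: studies the model is highly confident contain no clinically relevant findings bypass the worklist, while everything else stays with the radiologist. The classifier output, threshold value, and queue names are illustrative assumptions, not any vendor’s actual design.

```python
# Hypothetical sketch of autonomous worklist triage for chest X-rays.
# The classifier output, threshold, and routing targets are illustrative
# assumptions, not any vendor's actual implementation.

from dataclasses import dataclass


@dataclass
class CxrResult:
    study_id: str
    abnormality_score: float  # 0.0 = confidently no findings, 1.0 = confident abnormality


# Assumed operating point: only studies the model is highly confident
# contain no clinically relevant findings bypass the radiologist worklist.
NO_FINDING_THRESHOLD = 0.05


def route_study(result: CxrResult) -> str:
    """Return the queue a study should be sent to."""
    if result.abnormality_score <= NO_FINDING_THRESHOLD:
        # Candidate for an autonomous preliminary/final report,
        # subject to audit sampling (discussed later in this paper).
        return "autonomous_report"
    # Everything else stays on the radiologist worklist, where
    # likely-significant cases can be prioritised for reading.
    return "radiologist_worklist"


if __name__ == "__main__":
    studies = [
        CxrResult("CXR-001", 0.02),
        CxrResult("CXR-002", 0.40),
        CxrResult("CXR-003", 0.91),
    ]
    for s in studies:
        print(s.study_id, "->", route_study(s))
```

In practice, the operating threshold would be set and audited per site against local performance data rather than fixed as above.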
In medical imaging, autonomous AI remains a relatively new concept. Interestingly, institutions are already leveraging this technology outside of medical imaging, such as in administration, finance and within EHR systems. Regulatory guidelines for some imaging types, such as the Ionising Radiation (Medical Exposure) Regulations (IR(ME)R), have been one of the key barriers to the adoption of autonomous AI in medical imaging so far. Under IR(ME)R, “trained and entitled operators” (medical personnel, e.g. radiologists) must be involved in the clinical evaluation process for each exposure, and AI software cannot be used autonomously to perform image interpretation. This applies to X-ray, CT and any other imaging that uses ionising radiation, and has therefore limited autonomous AI adoption thus far.
To discuss the potential of autonomous AI and the prospective roadmap in medical imaging and diagnosis, as well as potential use cases to drive adoption, Signify Research hosted panel discussions with a range of stakeholders.
Use cases and opportunities for autonomous AI
It’s undeniable that technology needs to continue innovating to support healthcare institutions in overcoming the challenges they face today, pressures that will only continue to compound without major intervention.
Looking across medical imaging, there are countless use cases in which autonomous AI could be explored. In terms of imaging volumes, chest X-ray is the second-largest imaging use case globally, behind only extremity X-ray, with over 875 million scans completed in 2022. However, the complexity surrounding chest X-ray diagnosis also creates some hesitancy among radiologists as to the appropriateness of autonomous AI use.
Several use cases were discussed amongst the panel as “low-hanging fruit” opportunities. These would allow a hospital to deploy autonomous AI quickly while reaping significant operational efficiencies, and would provide evidence for the value of autonomous AI in clinical and diagnostic settings. See chart below.
How to drive adoption of autonomous AI?
Clinical adoption of AI (CAD or autonomous) is reliant on evidence. Robust evidence will be one of the cornerstones of successful autonomous AI adoption; however, the type of evidence will also play a role.
Pilot schemes will be the most favourable route, allowing providers an opportunity to test a solution first and demonstrate its success and value to the team. Positively, several pilots have taken place in markets across Europe; however, many were conducted on a retrospective basis. Although this is a solid first step in understanding how autonomous AI performs against a human reader, the most valuable evidence will come from prospective studies, which demonstrate success in real time.
As seen with CAD AI, software vendors have transitioned from presenting evidence of a tool’s specificity and accuracy to emphasising the health-economic benefit of the technology’s adoption, whether measured in time saved, financial gains, or improvements in patient outcomes and the opportunity to evolve and improve the care pathway.
If adopted, autonomous AI offers several improvements beyond those observed for CAD AI:
- Reduction of the reading backlog, offering financial benefits to the institution as well as improved patient care; in some public markets, patients can wait months for their scans to be read.
- As a result of the extensive backlog, both public and private markets have increased their use of outsourced reading services (teleradiology) to manage workload. By reducing the caseload of in-house radiologists, autonomous AI allows institutions to reduce their reliance on imaging centres and teleradiology.
When an institution is seeking to adopt technology that has the potential to change how radiologists work, one of the main considerations for providers and vendors alike is to ensure appropriate change management, training and the availability of resources to support the deployment of AI.
Against the pressures institutions already face, it can be a challenge to prioritise the adoption of new technology. Anecdotal examples shared within the panel highlighted that, even when funding was made available to invest in new AI technology, resource constraints (across both clinical staff and IT teams) made new innovation projects challenging to kick off.
Therefore, buy-in from across the institution and the patient pathway – including practitioners, IT teams, hospitals and medical insurers – will be critical not only to drive market interest and adoption, but also to support institutions’ deployments. Among the panellists who had adopted AI solutions across their organisations, one of the pivotal components noted was having dedicated IT resources to support integration and the different workstreams required to optimise the AI solution.
Liability and ethical considerations
The evaluation of autonomous AI also creates additional debate around liability and ethical considerations for healthcare providers – not only for the use of autonomous AI today, but also, longer term, around the liability of not using AI if it can deliver a higher standard of care.
Short-term
Providers will need to assess possible risks of autonomous AI, such as what happens if the solution “overcalls” what it deems a “clinically significant” finding. Such a classification would have downstream impacts for organisations in terms of increased follow-up testing and the resources required to support additional image acquisition and reporting. Also important to consider, however, is the minimum viable threshold for autonomous AI. AI, like humans, is not perfect; yet there is a widespread misconception in the market that AI should be near perfect. Realistically, the minimum threshold should be to match human performance, with the ideal scenario being AI that outperforms the radiologist. A 2023 study reported that approximately 30% of CT reports contained diagnostic errors, a rate that has remained relatively unchanged for 75 years. Given the previously discussed complexities of chest X-ray, autonomous AI tools that can outperform human reading standards could have a substantive impact on the quality and efficiency of care for screening.
With autonomous AI, it is also important to understand where the liability risk sits. At present, the hospital and radiologist hold responsibility for patient care due to the final sign-off requirement. However, as AI begins to make clinical decisions, does responsibility shift to the technology vendor?
Long-term
As autonomous AI is adopted at scale, a question arises about a provider’s responsibility to use proven technology available in the market to provide the best care to the patient. From a negligence perspective, this would not become a formal malpractice issue until autonomous AI becomes standard practice in healthcare guidelines. Informally, however, in private markets, does this affect a patient’s choice of where to receive their care? Given the choice between a long wait for a scan to be read by a human and much quicker results from autonomous AI, patients may opt for the latter, assuming the technology is appropriately regulated.
From an ethical perspective, with the adoption of new technology such as autonomous AI, a question arises: how informed should the patient be?
In normal practice, it is not common for radiologists to inform the patient of every system they use in the reporting and decision-making process, including CAD AI. In many regards, this is because there is an element of trust in the qualifications and experience of the radiologist, as well as stringent regulations surrounding the use of Software as a Medical Device (SaMD).
However, for autonomous AI – are explainers and patient consent required, or will patients trust healthcare providers to ensure the ongoing audit of autonomous AI results?
Evaluating the language used
Throughout this paper, we have assessed considerations for driving the adoption of autonomous AI. Fundamentally, vendors, alongside the industry, need to evaluate the best language to use – both when referring to “autonomous AI” solutions and when describing the classifications they make.
Throughout the panel discussions, there was debate around the use of “normal” and “abnormal” when classifying scans. As it is rare for an imaging scan to be truly “normal,” a less definitive categorisation, such as “clinically relevant” and “non-clinically relevant,” might better represent the AI’s conclusion.
Another consideration is the terminology for the solution and whether “autonomous AI” creates uncertainty and hesitancy across institutions and radiologists. As a new technology in medical imaging, substantial effort is required to improve market education and embed a deeper understanding across different stakeholders globally. Over the last 8 years, institutions have become comfortable with CAD, understanding what CAD solutions are expected to do and the standards they need to meet. Therefore, there is a consideration as to whether vendors can leverage the “comfort” created with CAD and instead position autonomous solutions as “advanced CAD.” Additionally, other aspects of the operational side of healthcare provision are increasingly comfortable with leveraging AI tools and analytics for “autonomy” in processes and workflows. Therefore, the panel also felt that cross-department guidance and case studies highlighting the benefit of “autonomy” in healthcare will help to demystify the “autonomous” tag.
When will autonomous AI be in use?
As mentioned earlier, several obstacles surround the adoption of autonomous AI. However, it is significant that the panellists were extremely positive about the role and opportunity autonomous AI has to offer.
The timeline for adoption is not a simple question and will depend on several variables being addressed and overcome. The speed of adoption is also anticipated to vary between private and public healthcare markets.
As shown in Figure 3 below, the panel anticipates that autonomous AI adoption could occur within 3-7 years, or potentially even sooner, if vendors rapidly build prospective evidence for their solutions and accelerate market education efforts across the healthcare ecosystem.
For private markets, it’s expected that the timeline to adopt autonomous AI will be faster, driven by the clear financial benefits that can be derived from the technology’s adoption. In comparison, public markets are less motivated by the financial bottom line and will instead prioritise metrics associated with patient outcomes. The recent focus on the health economics of new autonomous AI tools can emphasise the cost of inaction, evaluating the impact on patients, staff and whole organisations from a care-outcomes and service-provision perspective.
Public markets are often hesitant to adopt new technology, requiring a higher threshold of evidence alongside more complex procurement frameworks, administration, multiple stakeholders and organisational structures, all of which can be challenging for vendors to navigate.
In all markets, regulatory barriers will still need to be overcome to allow for adoption in clinical settings. The regulatory push for autonomous AI is expected around 2026-2027, with vendors and first adopters working through regulatory hurdles and making substantial progress throughout this period. The main swing in influence will occur as evidence and buy-in build from reference sites and a range of academic and industry stakeholders.
Importantly for autonomous AI, there is a growing requirement for new AI tools to allow for customisability. This should include the capability for radiologists to review “non-clinically relevant” studies and to adapt the tool’s parameters to the specific care pathways of the provider organisation and downstream services, as illustrated in the sketch below.
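As a purely illustrative example of the kind of configurability described above, the sketch below shows a hypothetical site-level configuration: per-pathway confidence thresholds, an option to keep “non-clinically relevant” studies in a review queue, and an audit sampling rate to support post-market surveillance. All field names and values are assumptions for illustration, not features of any specific product.

```python
# Hypothetical site-level configuration for an autonomous AI deployment.
# Field names and values are illustrative assumptions only.

from dataclasses import dataclass, field


@dataclass
class PathwayConfig:
    # Confidence below which a study may be reported autonomously
    # as "non-clinically relevant".
    no_finding_threshold: float
    # If True, autonomously reported studies still appear in a
    # low-priority review queue for the radiologist.
    send_to_review_queue: bool
    # Fraction of autonomous reports randomly sampled for audit
    # and post-market surveillance.
    audit_sample_rate: float


@dataclass
class SiteConfig:
    # Mapping from care pathway name to its local configuration.
    pathways: dict[str, PathwayConfig] = field(default_factory=dict)


# Example: a cautious outpatient chest X-ray pathway, and a stricter
# pre-operative pathway that keeps human review switched on.
site = SiteConfig(pathways={
    "outpatient_cxr": PathwayConfig(0.05, send_to_review_queue=False, audit_sample_rate=0.10),
    "pre_op_cxr": PathwayConfig(0.02, send_to_review_queue=True, audit_sample_rate=0.25),
})
```

A configuration of this shape would let each provider tune thresholds, review behaviour and audit intensity per care pathway rather than accepting a single global operating point.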
There also needs to be an element of trust, alongside robust audit and post-market surveillance; otherwise, physicians checking many or all cases reported by autonomous AI will erode the cost and efficiency savings the technology is intended to deliver.
How does the radiologist role change once autonomous AI is adopted at scale?
One topic prominently discussed amongst the panel was how the role of a radiologist will change once autonomous AI is adopted. Autonomous AI is not intended to replace radiologists but to free up their capacity and optimise their workload to ensure timely reporting on clinically significant cases.
Radiologists have long been technological pioneers; the adoption of autonomous AI provides an opportunity to increase the influence radiologists can have on outcomes and improve wider care quality.
Importantly, the adoption of autonomous AI should not be a binary all-or-nothing change. Instead, adoption should allow for a gradual augmentation over time, whereby the solution has a degree of configurability to meet the different needs of each user, setting, and care pathway.
There are concerns in the industry about the impact that removing “non-clinically relevant” studies will have on radiologist training, including when autonomous AI should be introduced into a radiologist’s career path. Ensuring that each radiologist has the training to classify cases effectively without AI, across a spectrum of normal and abnormal, is still viewed as vital.
Conclusions
In most imaging use cases, CAD AI alone is not enough to combat the fundamental challenges facing healthcare institutions today, with most CAD solutions currently on the market analysing fewer than 20 findings and remaining narrow in focus. Moving beyond CAD is therefore inevitable if healthcare providers are to address current and future challenges. Whether solutions are termed “autonomous AI” or repositioned as “advanced CAD” or “automation,” the future direction of adoption is clear.
Importantly, this does not remove the need for CAD AI or make past innovation redundant. Successful adoption at scale today can be the foundation of advanced clinical decision support tools in the future. The main question for the industry is how and when.
Positively, the interest and support for autonomous AI amongst the panellists were high, with confidence in its potential for diagnostic use in mature markets. Fundamentally, to drive adoption, the challenge is less about a technical gap and more about resource constraints and the difficulties of deploying technology that shifts radiologists’ workflows and institutional practices drastically.
Therefore, vendors, healthcare providers, payers, academic stakeholders, and regulators need to focus investment in three areas:
1. Building evidence based on the health-economic benefits of adoption, with studies focused not only on retrospective performance but also on prospective outcomes.
2. Strengthening the case for “buy-in” beyond radiologists to encompass broader institutional support. Given the current regulatory barriers for autonomous AI in applications like X-ray and CT, building momentum and comfort with autonomous tools is essential. This includes leveraging use in non-regulated settings and further educating radiologists, clinicians, IT teams, providers, payers, and patients. As evidence and understanding grow, this will pressure healthcare commissioning bodies to recognise the need for autonomous AI.
3. Ensuring customisation and configurability in each AI solution to help radiologists build trust in the technology, while ensuring its value and application can be replicated across various users, settings, and care pathways. Robust post-market surveillance and audit will also be critical to ensure appropriate and effective use.
It’s important to note that while this discussion on autonomous AI was centred on mature markets, there is substantial potential for autonomous AI in emerging and developing markets. With significant disparity in access to care and shortages of skilled workforce, autonomous AI could support institutions and countries in working towards consistent care provision across populations. Unlike in mature markets, regulatory barriers are less stringent, creating a lower threshold for entry. If the technology were available, emerging markets could leapfrog and adopt autonomous AI in less than 3 years.
There remain hurdles to overcome, but the potential of autonomous AI is undeniable. With collaboration across the healthcare ecosystem, autonomous AI deployed at scale in mature markets could be as little as 3-5 years away.
White paper originally published by Signify Research on 31 October 2024.