What is ‘autonomous reporting’ in medical imaging?
Autonomous AI reporting in medical imaging means that an AI application produces the final report of the imaging study – without any involvement from a human doctor. The AI produces a final diagnosis on its own, on which future treatment decisions will be based.
What is a CE mark?
CE mark certification (comparable to FDA clearance in the US) indicates that a product complies with relevant EU legislation. ChestLink is certified as a Class IIb medical device, ensuring the quality of performance, traceability and accountability of results and processes associated with the product. The CE mark certificate for ChestLink states that the mark is issued for a ‘standalone computer assisted diagnostic medical device for chest X-ray analysis and reporting’. It is the first certificate issued for a fully autonomous AI medical imaging application. The certification paves the way for clinical ChestLink deployment in 32 European markets.
Why does ChestLink only produce reports for healthy patients?
To provide fully autonomous patient reports, a few clinical aspects have to be taken into account.
- Inter-radiologist subjectivity. Radiologists make diagnostic judgments based on a two-dimensional grayscale X-ray image, where subtle pathologies may be hiding ‘in the shadows’. Even the most like-minded specialists tend to disagree, and this inter-radiologist subjectivity has been the subject of multiple academic studies. To put it simply, medical imaging – especially X-ray imaging – is not a 100% exact science. Diagnostic conclusions depend on a radiologist’s skill and experience, as well as on their reporting habits and the reporting policies of their medical institution.
- Wider diagnostic context. When reporting on a patient study, radiologists are aware of additional patient information, such as medical history and clinical information, giving them a wider diagnostic context, whereas an AI application will only focus on a given X-ray image.
These aspects pose additional challenges for a wider diagnostic automation scope.
In other words, as with most pursuits of full AI autonomy (e.g. autonomous vehicles), a certain degree of autonomy can be achieved in a ‘perfect world’ or laboratory setting. Yet ‘good enough’ is not good enough for the real world once edge cases and other factors are taken into account – especially in a healthcare setting.
ChestLink provides a first step towards autonomy in medical imaging. The application automates the scope of radiologist-invariant chest X-rays – studies that would appear normal to any given radiologist. It is also the first autonomous diagnostic application that can operate in a real-world clinical setting.
How can ChestLink be sure that an X-ray image features no abnormalities?
ChestLink autonomously reports on X-ray studies where the application is highly confident that the image features no abnormalities. This means that the application has to be sure that the patient is healthy without any doubt.
This is a huge technological and healthcare achievement, as the application needs to rule out any potentially subtle cases ‘hiding in the shadows’ before giving the patient a clean bill of health.
If there is even the slightest suspicion that an X-ray may feature an abnormality, the application leaves the reporting of the study to the hospital radiologist. In most of these deferred cases, the radiologist will confirm that the patient is healthy – the application simply could not clear its confidence threshold for autonomous reporting.
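The triage logic described above can be sketched as a simple confidence threshold. The threshold value, function names and report texts below are illustrative assumptions – this is not ChestLink’s actual implementation.

```python
# Illustrative sketch of confidence-threshold triage for autonomous
# reporting. The threshold value and report text are hypothetical,
# not ChestLink's actual operating point.

NO_ABNORMALITY_THRESHOLD = 0.99  # assumed operating point


def triage_study(no_abnormality_confidence: float) -> dict:
    """Route a chest X-ray study based on the model's confidence
    that the image features no abnormalities."""
    if no_abnormality_confidence >= NO_ABNORMALITY_THRESHOLD:
        # High confidence: produce the final report autonomously.
        return {"route": "autonomous", "report": "No abnormalities detected."}
    # Any residual suspicion: defer the study to the radiologist.
    return {"route": "radiologist", "report": None}


print(triage_study(0.998)["route"])  # autonomous
print(triage_study(0.80)["route"])   # radiologist
```

The key design point is asymmetry: the application only ever acts on one side of the threshold (confirming health), while everything else falls back to the human expert.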
What is the clinical value of ChestLink?
In certain healthcare settings the vast majority of patients are healthy. This is especially true in primary care, where up to 80% of the daily chest X-ray scope may feature no abnormalities.
Even when an X-ray features no abnormalities, the radiologist is still required to produce a final study report – a mundane task that adds little clinical value to patient treatment.
There is a global shortage of radiologists, even in the developed world, and radiologists are overworked and overstressed. ChestLink provides a workable ‘here and now’ framework to automate up to 30% of the daily X-ray workflow, allowing radiologists to devote more time to cases featuring abnormalities – where their medical expertise truly brings value to patients.
So when will we see the first diagnostic reports produced by Artificial Intelligence?
We expect the first clinical deployments of ChestLink in early 2023.
Prior to CE certification, ChestLink had already been operating in multiple pilot locations across Europe – albeit in a supervised setting, producing preliminary instead of final reports. The CE certification allows these institutions to move from a supervised to a fully autonomous mode of operation.
This progression follows our suggested framework for ChestLink deployment.
- Retrospective analysis. We begin ChestLink deployment with a retrospective analysis of chest X-ray images at a given medical institution. Working with real-world data in a retrospective setting allows us to estimate what portion of the work scope can be automated at that particular healthcare institution.
- Supervised operations. The application produces preliminary reports in a real-time setting. Reports are monitored both by staff at the medical institution and by Oxipit medical staff, and ChestLink performance is tracked via an analytics dashboard.
- Autonomous reporting. The application moves into autonomous prospective reporting, automatically reporting on high-confidence cases featuring no abnormalities. Real-time reporting data and periodic reporting summaries are provided in the analytics dashboard for full transparency and traceability of application actions.
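The retrospective-analysis step above amounts to measuring what share of a site’s historical studies would clear the autonomous-reporting confidence bar. A minimal sketch, where the confidence scores and the 0.99 threshold are purely hypothetical:

```python
# Sketch of the retrospective-analysis step: estimate what share of a
# site's historical chest X-rays could be reported autonomously at a
# given confidence threshold. Scores and threshold are illustrative.


def automatable_share(confidences: list[float], threshold: float = 0.99) -> float:
    """Fraction of studies whose 'no abnormality' confidence clears
    the autonomous-reporting threshold."""
    if not confidences:
        return 0.0
    return sum(c >= threshold for c in confidences) / len(confidences)


# Hypothetical confidence scores from a retrospective batch:
scores = [0.999, 0.62, 0.995, 0.991, 0.40, 0.998, 0.992, 0.993]
print(f"{automatable_share(scores):.0%}")  # 75%
```

An estimate like this gives the institution a concrete expectation of automation yield before any autonomous reporting is switched on.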
But when will AI make human radiologists obsolete?
Rest assured, it will not – or at least not in the coming 20-30 years.
As previously outlined, radiology automation is not solely a technological task. Medical imaging is not a 100% exact science, and full work-scope automation requires many aspects to be addressed from a healthcare-diagnostics perspective.
Autonomy in radiology will not arrive with a bang. It will be a step-by-step process, automating an ever-increasing scope of diagnostic findings. For instance, we are working to improve our models to report on more healthy patient studies with a high degree of confidence. Automated reports for certain pathologies where inter-radiologist subjectivity is less of an issue will follow suit, as will reporting automation for other imaging modalities.
Some healthcare experts predict that in the near future radiologists will operate more as quality controllers of diagnostic imaging systems – ensuring that these systems operate within the norm and manually addressing edge cases.
This is comparable to the current state of aviation, where pilots are supported by a wide variety of aids and systems, with most routine tasks automated. Yet these systems still leave pilots in the ‘driver’s seat’ for critical decision-making.
In this parallel, radiology still lacks a comparable infrastructure of systems to aid decision-making or automation. The development of the AI ecosystem for radiology will thus follow a similar path.
So when will AI outperform a human radiologist?
It already does. Multiple studies have shown AI performance to be on par with, or surpass, that of an expert human radiologist – especially in detecting subtle findings such as pulmonary nodules.
Studies have also shown that employing AI as a quality assurance tool can mitigate the risk of radiologist mistakes and improve early detection of lung cancer.
However, this is akin to comparing apples and oranges. There are tasks in which AI performs better, and others where humans excel.
It’s not a question of mathematical performance. It’s more of combining both worlds – how to utilize the advantages of AI (low cost, always-on, can analyze vast amounts of data, may identify very subtle pathologies) with human expertise (medical training, critical thinking, executive decision-making).
Thus the success of AI medical imaging applications is not the result of technological determinism (‘AI for AI’s sake’). It is about the productization of AI capabilities – fitting artificial intelligence into clinically valuable workflows (e.g. quality assurance or automation, leading to improvements in diagnostic quality or productivity).