

Dr. Aditya Kale
Radiologist and Medical Algorithmic Audit Lead
at University Hospital Birmingham (UHB)
How can AI revolutionize radiology and address critical challenges in healthcare? In this compelling interview, Dr. Aditya Kale, a radiologist and medical algorithmic audit lead at University Hospital Birmingham (UHB), shares his experience leading the largest retrospective study in the UK, which was conducted on Oxipit’s ChestLink AI solution. From tackling the NHS backlog in chest X-rays to redefining reporting efficiency and patient care, Dr. Kale unveils key findings and offers valuable perspectives on AI’s potential to transform radiology workflows. Dive in to discover the groundbreaking results, lessons learned, and what the future holds for autonomous AI in healthcare.
Q: What inspired your team at Birmingham to evaluate Oxipit’s ChestLink AI solution through this study?
The main inspiration for evaluating ChestLink was the issue of a significant backlog in chest X-rays, a problem prevalent across the NHS. This backlog stems from a substantial workforce gap and a mismatch between the demand for and supply of radiologists. Chest X-rays, despite being one of the most commonly conducted imaging procedures, often get deprioritised, leading to delays in patient care. We were interested in ChestLink’s potential to automate the reporting of chest X-rays that are normal.
Q: Can you share some of the key findings and benefits highlighted by the study?
We were able to gather nearly 200,000 chest X-rays conducted in a single year. Of these, 140,000 were within scope for ChestLink processing, and just over 63,000 were normal studies that could be reported by ChestLink. Within that, just under 15,000 were determined to be high-confidence normals, and therefore a report was generated by ChestLink. That amounted to 23.4% of all of the normal scans. Across all in-scope studies, that works out to a 10.5% reduction in workload each calendar year. These numbers are pretty significant, and I think this has the ability to significantly reduce the radiology department’s workload.
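The percentages above can be checked with a short back-of-the-envelope calculation. This is only an illustrative sketch: the interview gives "just under 15,000" for the high-confidence normals, so the exact count used below is an assumption chosen to match the quoted figures.

```python
# Back-of-the-envelope check of the workload figures quoted in the interview.
# high_conf_normal is an ASSUMED exact value ("just under 15,000" in the text).
in_scope = 140_000          # studies within scope for ChestLink processing
normal = 63_000             # normal studies reportable by ChestLink
high_conf_normal = 14_740   # assumed exact count of high-confidence normals

share_of_normals = high_conf_normal / normal      # fraction of normal scans
share_of_in_scope = high_conf_normal / in_scope   # fraction of in-scope studies

print(f"{share_of_normals:.1%} of normal scans")       # 23.4% of normal scans
print(f"{share_of_in_scope:.1%} of in-scope studies")  # 10.5% of in-scope studies
```

Note that the 10.5% figure is relative to the 140,000 in-scope studies, not the full 200,000 gathered.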
Q: Were there any surprising or particularly significant results that stood out during the study?
We were quite surprised and impressed by the sensitivity of the algorithm. From the 140,000 scans that were within scope, there was a discordance rate of only 4%. Following a consultant radiologist’s review of all these scans, that discordance rate dropped to 1%. So, what we’re finding is that ChestLink is missing only 1% of cases that should be deemed abnormal.
Q: Did the study reveal instances where Oxipit’s AI might have prevented a diagnostic oversight or improved the process of prioritising cases?
We’re still completing the final part of the study to understand what additional gains there might be over and above reducing the workload for reporting staff. I think there probably will be some gains in the time to report. With autonomous reporting, you can reduce that time lag from, say, an arbitrary three days to a few minutes if you’re using ChestLink.
In terms of prioritisation, if ChestLink is able to analyse these 140,000 scans, it might be able to not only report high-confidence normals, but also triage cases. So, we’d be able to understand which cases should be prioritised, which should be deprioritised.
Q: From your study findings, how do you think Oxipit’s AI could impact reporting times or workflow efficiency?
I think ChestLink could impact both reporting time and workflow efficiency. If we look at workflow efficiency, if you have that triage module in addition to the high-confidence normal reporting, you’re going to be able to maximise the time and effort of reporting staff, the radiologists and radiographers, so they’re looking at scans that might have some pathology.
In terms of time saving, I think there are definitely significant gains to be had, because at the moment NHS trusts aren’t able to report every single chest X-ray that’s done in a day. Whether or not that will then translate into significantly improved patient outcomes is a different question, and that’s probably something that needs further evidence. But in theory, yes, I do think that the prioritisation module could be of significant benefit.
Q: What clinical or operational metrics did you use to evaluate Oxipit’s performance during the study and how did it measure up?
It’s very difficult to provide an accurate benchmark, and this is not just specific to ChestLink. What we’re finding is that it’s very difficult to find the resources and funding to conduct a robust benchmarking study. The approach that we took in this case was to look at a historical data set. We created our data set from 2018. We were looking at the reporting rates that I just mentioned, but we were also looking at the characteristics of the discrepancies.
So, if there was a discrepancy between the ChestLink report and the report that was written by UHB staff back in 2018, then we would actually do more of a deep dive into those cases. We are currently in the process of modelling what would have happened if ChestLink were to miss a certain diagnosis.
Q: Based on the study results, would your team consider exploring real-world implementation in the future? And did conducting the study change your views on AI’s potential in healthcare?
I would call myself a bit of a sceptical optimist when it comes to AI implementation. Implementing AI needs to be done right. It needs to be done with robust validation, which is what we’re trying to do at UHB. I think that the findings definitely warrant the work with Oxipit. I think that jumping straight into a full-scale active deployment might be a step too far at this point, but I do see that in the future. I think what we would want to do is have a shadow mode deployment for a certain period of time, see how that works, and do a little bit of qualitative work with the department. I’m pretty optimistic about this translating into a real-world deployment in the future.
Q: If similar AI technology was implemented nationally for the NHS, what impact do you think it could have on radiology services and patient care?
If this kind of technology was deployed NHS-wide, there would be pretty significant impacts. We would have a reduction in workload. It would certainly help with the supply-demand mismatch that’s there at the moment. We would also see more rapid reports being generated for patients and so we might catch diagnoses before they transform into something that patients would have to visit the A&E department for.
On the flip side, we also need to be wary that if we deploy something NHS-wide, any error rate will be multiplied. Even if we arbitrarily say a 5% miss rate for a certain diagnosis, that will be magnified at a population level. The different parts of the UK can be very different in terms of demographics and clinical workflows, and that does translate into the prevalence of diseases likely to be found in a chest X-ray data set. They can look very different if you compare Birmingham to Glasgow, for example. There will need to be some work done to ensure that the device’s performance generalises. But overall, I do think it could be quite transformative for the NHS.
Q: What’s your perspective on moving from computer-aided diagnosis (CAD) to autonomous AI in radiology?
I think there’s a really big discussion at the moment around CAD versus autonomous reporting. One of the things I find most interesting is whether or not CAD actually translates into a patient impact. I would be really interested in seeing controlled clinical trials that show that there is a significant patient benefit if you deploy CAD compared to a standard radiologist reporting workflow. What we are seeing reported is that diagnostic workflow times might improve, or diagnostic accuracy might improve, but I think there’s less evidence showing how that actually translates into improvements in patient outcomes.
When you then compare CAD to autonomous reporting, I think that autonomous reporting could actually, given that it’s automating part of the workflow, have a much larger impact in terms of freeing up staff time. Again, what I’d be really interested in is the comparison between CAD, autonomous reporting, and the standard workflows to see firstly how the timings change, but also how that translates into patient impact. That’s one of the most difficult questions, this whole conversation about patient impact, because it can be difficult to figure out which metrics we’re thinking about. It can be difficult to have an adequate follow-up time to actually capture the benefit to the patient. But I think it is really important.
Q: What advice would you offer to other healthcare providers or researchers considering retrospective studies on autonomous AI?
I think the first thing is really thinking hard about what kind of dataset you’ll be looking at, trying to capture a mix of patients that appropriately reflect your local population. If the AI performs well on that dataset, you can hopefully be somewhat confident it’ll perform in a similar way in your local area.
The second thing is to try and capitalise on the advantages that a retrospective study offers. The main disadvantage of a retrospective study is that, unlike a prospective study or randomised controlled trial, it doesn’t let you see how the algorithm performs in real time in the real world, track patient impact, or generate more of a data set as you go along. The power of retrospective studies is that you already have the full picture of the patient journey: if you start at point A, you can see what happened to the patient at point B, towards the end of their care. I would encourage researchers to capitalise on that and model what would happen if an AI were to miss something; conversely, if the AI were to provide certain gains, you can compare that to your historical data.
Visit Oxipit at ECR to see how AI can transform your radiology workflow. Explore our Chest Suite, experience our solutions in action, and connect with our team. Schedule your meeting today.
