
Artificial intelligence is transforming industries worldwide, but nowhere is its impact more profound than in healthcare. As AI-driven automation advances, ensuring transparency, rigorous validation, accuracy, and the implementation of ethical AI principles becomes paramount—especially in high-stakes fields like medical imaging.
Last week, Gediminas Pekšys, Chief AI Officer and one of the co-founders of Oxipit, had the privilege of participating in the AI Action Summit in Paris—a landmark gathering of global leaders, ethicists, and policymakers dedicated to shaping the future of AI governance.
As part of AIOLIA—an initiative committed to operationalizing the EU AI Act and fostering responsible AI development—we joined a critical dialogue on the ethical and regulatory challenges that will define AI’s role in medicine.
AI regulation: a global imperative
The increasing autonomy of AI applications, particularly in healthcare, requires a new paradigm of governance. While the EU AI Act lays the foundation for AI oversight in Europe, the impact of these regulations extends far beyond the continent.
AIOLIA’s approach is resolutely global, fostering collaboration with leading universities and think tanks in China, South Korea, Japan, Canada, and through UNESCO’s reach into Africa and South Asia. By developing an analytical toolbox for AI governance, AIOLIA aims to set international benchmarks that balance innovation with accountability.
The role of AI in medical imaging: a case for transparency and trust
At Oxipit, we are at the forefront of AI-powered autonomous medical imaging—a technological breakthrough that enables AI to independently report on normal chest X-ray studies with high accuracy. Our work with ChestLink, the world’s first CE-certified autonomous AI medical imaging product, has demonstrated the potential of AI to improve efficiency and reduce radiologist workloads while maintaining safety and compliance with stringent regulatory standards.
However, as AI takes on a more active role in clinical decision-making, questions surrounding auditability, bias mitigation, and validation remain at the heart of AI adoption in healthcare. The discussions in Paris reinforced the importance of:
- Regulatory clarity: providing structured pathways for AI implementation in clinical settings.
- Interdisciplinary collaboration: engaging ethicists, regulators, and healthcare professionals in AI development.
- Long-term accountability: establishing transparent validation frameworks to ensure AI remains a reliable, explainable, and unbiased tool in medicine.
Looking ahead: AI governance and the future of healthcare AI
As AI continues to evolve, its ethical and regulatory landscape must evolve alongside it. The AI Action Summit underscored the collective responsibility of industry pioneers, policymakers, and medical professionals to ensure that AI’s potential is realized in a way that prioritizes patient safety, equity, and long-term sustainability.
Oxipit remains committed to leading this charge—developing AI solutions that not only push the boundaries of innovation but also set new standards for responsible AI in healthcare.
AIOLIA at a Glance
Project Duration: 2025–2028
Funded by: Horizon Europe
Focus: Ethical AI aligned with human cognition and behavior
Goal: Bridge the gap between AI ethics principles and real-world applications
Key Activities:
- Developing guidelines for ethical AI
- Training researchers and engineers
- Interpreting EU AI Act and global AI regulations
Countries Involved: EU, Canada, China, Japan, South Korea
Read more about the project here: AIOLIA Horizon Europe project
Funded by the European Union. Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union or the granting authority. Neither the European Union nor the granting authority can be held responsible for them.