About JustHealth
Artificial Intelligence has great potential in healthcare, but it also raises deep ethical concerns: AI systems often discriminate against vulnerable groups, amplifying systemic inequalities and preventing equitable access to healthcare.
Trustworthy AI alone is not sufficient to avoid this risk. Even bias-free AI systems can still be deeply unjust if they do not embed social justice principles, and the current ethics and governance frameworks that steer the design, development and use of AI tend to perpetuate forms of injustice at both the local and the global level. Indeed, they mostly neglect perspectives and values other than Western ones, especially those from the Global South (colonialist AI).
JustHealth aims to fill this gap by creating the first doctoral programme in the EU and beyond to ensure that social justice is adequately considered in the design, development and deployment of AI in healthcare. By integrating ethical, governance, engineering, clinical, and co-creation expertise, JustHealth will contribute to the deployment of just and trustworthy AI systems that effectively advance just healthcare at both the local and the global level.
Eight Doctoral Candidates (DCs) will be offered an ambitious doctoral programme and exposure to leading EU, South African and US organizations, including non-academic ones.
JustHealth will create a brand-new profile of responsible researchers with transdisciplinary expertise, able to address the complex challenges posed by AI.
Research Objectives
The overall research and innovation objective of JustHealth is to develop, test, validate and valorise a robust ethics and governance framework for AI in healthcare based on decolonized values and principles and on a co-creation approach. This framework is intended to guide the design, development and deployment of just and trustworthy AI in healthcare.
To achieve the overall goal, JustHealth will pursue the following specific research objectives (ROs):
RO1
Based on a co-creation approach, map and critically analyse the interdisciplinary (ethical, legal, technical, clinical) gaps and obstacles that hinder the achievement of social justice in AI in healthcare at both the local and the global level, and identify innovative co-creation strategies, methods, instruments and skills to boost just and trustworthy AI in healthcare (WP1).
RO2
Elaborate a robust decolonized ethics and governance framework for AI in healthcare that ensures social justice is effectively considered in the design, development and deployment of AI systems in healthcare. Considering the opportunities and risks posed by foundation models (FMs) in healthcare, the framework will be tailored for FMs in two paradigmatic use cases: freezing of gait (FOG) severity monitoring and rheumatic heart disease (RHD) screening and staging. Both use cases involve vulnerable people, affecting respectively the older population of Western countries and the younger population of countries in the Global South (WP2).
RO3
Guarantee the quality and robustness of the decolonized ethics and governance framework for AI in healthcare through a testing and fine-tuning validation process. This will lead to the validation of the framework, as well as of the other outputs developed in the context of the two above-mentioned use cases: self-assessment tools for ethical and legal compliance, care pathways, FM-based applications for FOG severity monitoring and RHD screening and staging, and co-creation tools, methods and instruments (WP3).
RO4
Boost the valorisation of the decolonized ethics and governance framework for AI in healthcare through recommendations for decision- and policy-makers to promote just and trustworthy AI in healthcare, insights for standardization, and guidelines for the transferability of the framework and other outputs to further health use cases (WP4).
By pursuing these objectives, JustHealth will be the first doctoral programme in the EU and beyond to ensure that social justice is adequately considered in the design, development and deployment of AI in healthcare, and that just and trustworthy AI systems effectively contribute to just healthcare at both the local and the global level.
The individual research projects (IRPs) of the recruited DCs focus on the ethics (DC1), governance (law/policy) (DC2), technical/engineering (DC5&6) and medical/clinical (DC7&8) aspects of AI and FMs in healthcare, and are integrated into the research programme through a challenge-oriented and truly transdisciplinary perspective ensured by a co-creation approach. Moreover, the co-creation approach itself is a further element of innovation, as research on co-creation strategies, methods, instruments and skills is the core of two IRPs (DC3&4).
In this way, our doctoral programme will create a new generation of responsible researchers who are aware of the societal implications of their research and activities on AI, and specifically on FMs, and who contribute to the design, development, deployment and governance of just and trustworthy medical FMs for global health.
FOG Case Study
RHD Case Study
Project Profile
Coordinator
University of Macerata
Type
Horizon Europe Project
Duration
4 Years (2026-2029)
Partners
16 Institutions from Multiple Countries
Beneficiaries
Core partner institutions leading the JustHealth project.
Associated Partners
Supporting organizations contributing expertise and resources.