Research project

Responsibility Gaps in Human-Machine Interactions: The Ambivalence of Trust in AI

The research project investigates the potential vulnerabilities of relying on machines in medical decision-making. It will evaluate the level of trust that allows physicians to benefit from AI-based recommender systems during the interpretation of medical images for diagnosis, and it will research human-centric AI system designs that avoid introducing biases into the medical decision process.


Project description

The project team investigates the potential vulnerabilities of relying on machines when making medical decisions. It concentrates on the interaction between physicians and AI-based recommender systems during the interpretation of medical images for diagnosis. The team will evaluate the level of trust that allows human decision-makers to benefit from AI advisors and to reach well-informed professional judgements. Furthermore, it will study the causal effects of institutional, situational, individual, and technological parameters on this trust level. The findings will be informative for a wide range of AI applications in other fields and will add a new layer to the public debate on AI advisory systems.

In line with human-centric design principles, the scientists will put the physicians themselves, as well as the physician-patient relationship, at the centre of their research. In addition, design paradigms for AI advisory systems that allow for a calibration of trust will be considered. While structures and processes in the medical domain are increasingly adapted to machines, the researchers instead focus on the question of to what extent recommender systems can be integrated into the existing structure of accountability and responsibility in medical practice. In doing so, the project team will complement its approach with the perspective of organisational ethics, thereby broadening the scholarly debate about the use of recommender systems in the medical domain.

Project team

Prof. Dr.-Ing. Marc Aubreville
Professorship for Image Understanding and Medical Application of Artificial Intelligence, Technische Hochschule Ingolstadt
Prof. Dr. Alexis Fritz
Chair of Moral Theology, Catholic University of Eichstätt-Ingolstadt
Prof. Dr. Matthias Uhl
Professorship of Societal Implications and Ethical Aspects of Artificial Intelligence, Technische Hochschule Ingolstadt
Sebastian Krügel
Research Associate at the Professorship of Societal Implications and Ethical Aspects of Artificial Intelligence, Technische Hochschule Ingolstadt
Angelika Kießig
Research Assistant at the Chair of Moral Theology, Catholic University of Eichstätt-Ingolstadt
Jonas Ammeling
Research Assistant at the AImotion Institute of the Ingolstadt University of Applied Sciences