You are conducting joint research at the bidt as part of the project ‘Ethics in Agile Software Development’ – how are ethical considerations currently viewed in this area?
Alexander Pretschner: Software development is not something that one engineer engages in alone, but rather a social process involving many participants. These include the context in which a system is developed, the company for which developers work and the society within which this product is ultimately deployed.
When someone is programming a calculator, for instance, there may not be much to consider ethically. In other areas, however, ethical considerations already play a big role, and by this I don’t just mean software, say for motor control systems, that is created with fraudulent intent. Even without fraud, ethical considerations matter, for instance in companies that provide software for data integration: they sell systems that are sometimes used in contexts which could, at the very least, be considered questionable.
Questions arise, for example, as to whether facial recognition should be built in, knowing that this implies the acceptance of a certain recognition error rate, which for technical reasons cannot yet be reduced. These companies think a great deal about where to draw the line regarding software development. In other words, they decide what else should be done versus what should not be done and establish processes accordingly. But there are also companies that use ethics only as a ‘fig leaf’, which enables them to sell what they’re doing as ethically correct.
Julian Nida-Rümelin: I would like to put our project into a larger context. The ethical dimension is always present in all major technological and scientific developments.
When you think about the fierce arguments that raged over nuclear power, back then it was about setting a course for the future of energy production which would affect the whole of mankind. Almost the entire technical, political and scientific complex was initially of the opinion that this was the safest and most sustainable form of energy production for the entire globe. Then public resistance emerged, at first with weak arguments and seemingly irrational fears. But this gradually gained the support of a few scientists, who then spread the debate until eventually one had the feeling that society had created a basis for assessing energy scenarios more rationally than was possible before.
The same scenario repeated itself in human genetics. There, fears also existed, some of them completely overblown, that soon human-animal hybrids would be bred and that people would clone themselves to live forever. Consequently, a kind of critical resonance to the potential of human genetics emerged.
The question is always how to handle the ethical dimension. One possibility is to create a strict separation, so that on the one hand there is basic research and technology, whilst on the other hand there is society, churches or the legislator, who must then evaluate what is ethically appropriate and what is not. However, I don’t think it works that way and hence my motivation over past decades for engaging in these areas, including digital transformation.
Society on its own isn’t able to control developments that have already taken on a momentum of their own in science and technology. Different disciplines – economic, philosophical, legal, computer science, human genetics and so on – must be brought together in order to facilitate sensible control.
What does that mean for the individual, for example in software development?
Prof. Julian Nida-Rümelin. Photo: bidt/Diane von Schoen
Julian Nida-Rümelin: This is the other extreme that always occurs in such debates. Each individual scientist and each individual technician would have to recognise that ethically sensitive issues carry a responsibility. For most of these people, though, trying to moralise every one of their activities represents a complete overload, and hence this approach does not work either.
Thus, the question is how to manage such situations so that, on the one hand, individuals neither break under the responsibility nor cynically do only what they are told, while on the other hand developments are not set in motion without ethics, which could later lead to legislative restrictions. This is the context in which I see our joint project. It is about integrating the ethical dimension into software development itself, and into management methods, without overwhelming the individual actors, for example software developers.
How can this be imagined in practice?
Alexander Pretschner: We look at agile software development. Amongst other things, this calls for subproducts to be completed in short cycles. One of the core elements of digitalisation is that changes must be expected not only in contexts, users and needs, but also in possibilities, both technical and organisational. Accordingly, in agile software development you work in so-called sprints to be able to react quickly to changing requirements.
It is at the very heart of ethical development that you always include such aspects in your considerations: can this or that be abused, for instance? This approach is already being used relatively successfully for data protection and security issues.
Julian Nida-Rümelin: One approach is the simple integration of technical, scientific and empirical questions on the one hand with ethical questions on the other; in philosophical terminology, this is consequentialism. Simply explained, it involves considering what the consequences of a certain practice are. Part of my academic work has been to demonstrate that this alone just doesn’t work.
It is absolutely crucial to consider the consequences of your own actions, but that’s not all. Other things come into play as well. During the coronavirus crisis, for example, every individual has the right to raise the question: can a society simply adopt a practice that minimises damage but massively violates individual rights? And here I have to disappoint expectations that ethics can simply provide a criterion. In my opinion, it can’t. What it can do, though, is encourage clear thinking and ask: how do you want to weigh this? Let’s ponder that. But this pondering isn’t so simple; it is more complex, and there are also dilemmas, some perhaps insoluble.
Philosophy must then be modest and say: what we are contributing is conceptual clarity. But in the end, society as a whole is called upon to weigh these ethical issues.
Prof. Alexander Pretschner. Photo: bidt/Diane von Schoen
Alexander Pretschner: So far, we have made it sound in this conversation as if the decision is always a straightforward yes or no. That is not so. It is certainly possible to build in mechanisms that may not prevent terrible things from happening, but nevertheless act as deterrents. For example, we can incorporate logging mechanisms and are thus able to see who had access to certain online data and when. This has a deterrent effect, even if it doesn’t prevent every possible abuse of the system. I think this is a promising approach.
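Such an access-logging mechanism can be sketched in a few lines. The following Python example is purely illustrative and not taken from the project: the class, field names and the tamper-evidence caveat are assumptions, meant only to show how "who accessed what, and when" can be recorded and queried.

```python
import datetime

class AuditLog:
    """Minimal append-only audit log: records who accessed which record and when.
    Illustrative sketch only; a real deterrent would write to tamper-evident,
    access-controlled storage rather than an in-memory list."""

    def __init__(self):
        self._entries = []

    def record_access(self, user_id: str, record_id: str, action: str) -> None:
        # Every access is timestamped in UTC and appended, never modified.
        self._entries.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user": user_id,
            "record": record_id,
            "action": action,
        })

    def accesses_by(self, user_id: str) -> list:
        """Answer the question 'who accessed what, and when?' for one user."""
        return [e for e in self._entries if e["user"] == user_id]

log = AuditLog()
log.record_access("alice", "patient-42", "read")
log.record_access("bob", "patient-42", "read")
print(log.accesses_by("alice"))
```

The deterrent effect comes not from the code itself but from the organisational rule that the log is reviewed and cannot be silently edited.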
Can you give examples where the ethical dimension wasn’t sufficiently taken into account in the development of software?
Julian Nida-Rümelin: There are examples of software systems that reproduce or reinforce certain social prejudices. For example, racial prejudices in facial recognition or systematic discrimination against women in hiring practices within the world of work.
This is because a large part of data-driven software development relies on correlations. Of course, you have to be very careful that in the end dynamics aren’t set in motion within this big data development that result in a completely skewed steering of society.
Alexander Pretschner: One of my favourite examples is from Austria, where artificial intelligence was used to decide whether or not to fund training for unemployed people, and the system automatically barred anyone over 50. Another example is when the postcode of your place of residence determines whether or not you qualify for a loan. There may be reasons for this, for example because default rates in certain districts are higher than in others, but the question is then how to deal with this knowledge.
Julian Nida-Rümelin: Correlations are interesting only as indications of causality. If a correlation can be shown to exist but has no causal basis, it becomes irrelevant. Hence, you basically have to build a kind of filter into software development, one that is theory-laden and decides what is causally relevant. This is a difficult question that is not easy to answer. But you have to make that effort, otherwise grotesque results will occur over and over again.
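In its simplest form, such a filter can be an explicit, reviewable list of attributes that the team has judged not to be causally relevant (age, postcode, and so on), stripped from the data before any model sees it. The attribute names below are hypothetical, and this sketch is only one possible reading of the idea, not the project's method:

```python
# Attributes judged, by an explicit and documented team decision, not to be
# causally relevant to the outcome even if statistically correlated with it.
# The list itself is the 'theory-laden' part: it must be argued for, not mined.
EXCLUDED_ATTRIBUTES = {"age", "postcode", "gender"}

def filter_features(record: dict) -> dict:
    """Drop excluded attributes before the record reaches any model."""
    return {k: v for k, v in record.items() if k not in EXCLUDED_ATTRIBUTES}

applicant = {"age": 52, "postcode": "1100", "income": 38000, "years_employed": 7}
print(filter_features(applicant))  # only income and years_employed remain
```

A real system would also have to worry about proxies: other features can still correlate with the excluded ones, so dropping columns is a starting point for the causal discussion, not a substitute for it.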
In the end, developers will not be able to make the decision alone. You can’t put all the responsibility on them.
Is it always foreseeable whether or not a product could have negative effects in the future?
Alexander Pretschner: Of course, this is often unclear. It can be argued that the accommodation platform Airbnb is terrible because of the consequences that can be seen today, with city-centre apartments now rented only to day-trippers. If one takes the ethical argument too far, one might say this should have been known from the beginning but wasn’t taken into account. Such an approach, however, would clearly prevent innovation. If the end product gives rise, perhaps in ten years’ time, to effects that aren’t socially desirable, is this something an individual engineer must consider in advance? Of course not.
I am not of the opinion that it is always sensible to decide from the beginning not to do something and yet there are clearly things that you might want to look out for. This includes, for example, the question: Should drones be able to fly over religious sites and film people? Developers can and must think about this, but in the final analysis developers will not be able to make these decisions alone. You can’t put all the responsibility on them.
But they can talk about it and if you have the appropriate mechanisms in companies, their voices will be heard when they say: We feel awkward about what we are doing – do we really want this?
Julian Nida-Rümelin: Perhaps I may add one more aspect. The idea that politics, using its own resources, is able to answer these questions is unfounded. This also relates to the distribution of competences. There are lawyers sitting in the ministries, highly qualified, but they usually don’t have any professional expertise in either human genetics or computer science.
Given that we live in such a highly complex system during the modern age, where science and technology are so central, the impulses from these areas must be actively aimed towards the public and politics. In this sense, there are developments where we see risks and while we cannot in general regulate for society, we can at least advise how to deal with these situations.
There is one very famous example: the Asilomar conference, where geneticists on their own initiative called for a kind of moratorium in order to establish safety standards, so that control over the whole process wouldn’t slip away. I also see our project at the bidt in this context, acting as a kind of early warning system for software development.
Project ‘Ethics in Agile Software Development’
Under the direction of Professor Julian Nida-Rümelin and Professor Alexander Pretschner, the project ‘Ethics in Agile Software Development’ is developing a concept to integrate ethical considerations into the process of software development. The interdisciplinary team includes economist Dr. Jan Gogoll, computer scientist Severin Kacianka and philosopher Niina Zuber.
Professor Julian Nida-Rümelin teaches philosophy and political theory at the Ludwig Maximilian University of Munich and is a member of the bidt board of directors. He was appointed to the German Ethics Council on April 30, 2020.
Professor Alexander Pretschner is Chairman of the bidt board of directors, holds the Chair of Software and Systems Engineering at the Technical University of Munich and is scientific director of fortiss.