Software systems play an increasingly important role in our lives. In particular, systems that support decisions on highly emotive issues, such as the granting of probation or creditworthiness, are increasingly becoming the subject of public debate. The cry for “ethically assured” software grows louder the more we rely on software-based decisions.
Software engineers and companies are finding themselves more and more often in situations where they are held accountable for undesirable results and irregularities resulting from the use of software or the way in which it is developed. While it may seem inappropriate and short-sighted to shift responsibility entirely onto the developers, software companies nevertheless feel a stronger obligation now to address these questions and to promote ethically sound development.
This is true for two key reasons: Firstly, companies are confronted with setbacks caused by software that has ethically questionable features – both in legal terms and in terms of their reputations as trustworthy actors in society. Secondly, companies and their employees are intrinsically motivated to develop better and ethically sound software.
Ethical and behavioural codes as an orientation guide?
Software systems are inherently complex: they consist of different components, such as presentation (front end) and data access (back end), each of which must itself meet normative standards. Furthermore, the composition of these components generates additional ethical requirements of its own. Finally, the use of software systems must also be taken into account, as further normative tensions can arise in their application. Accordingly, the development of normatively appropriate software is by no means an easy endeavour that could be solved by appeals for developers to become more “ethical” and “skilled” at what they do.
Complex systems are characterised precisely by the fact that not all consequences are known ex ante. Moreover, certain counter-intuitive interactions and motivations arise and become visible only in the later application of a product. A popular way of supporting companies’ software engineers in identifying and discussing ethical questions is by issuing them with codes of ethics and conduct, which are intended as orientation guides.
Codes of conduct published, for example, by institutions such as the IEEE, the ACM, supranational institutions such as the EU High Level Expert Group on AI and UNDP or the high-tech industry itself, attempt to play a central, (self-)regulating role in discourse on the development of ethically appropriate software systems. They represent a more or less complete and well-engineered conglomerate of different normative positions, values or declarations of intent, which are to be implemented in the process of software development in an adequate form.
It takes more than codes of ethics and conduct for the handling of values and principles in software development
As mentioned above, there are indeed various and understandable reasons why it may seem sensible to provide engineers with such instructions. They are meant to offer guidance to those who are confronted with ethically relevant questions and to give them an overview of desirable values and principles.
Most codes agree on key values such as data protection, transparency and security. Disagreement, though, arises as soon as we need to go beyond this abstraction level, when the technical design process forces us to go into detail. At that point, there are significant differences regarding the prioritisation of values and the derivation of focal points.
This raises the question: if there is broad agreement on core values, why do various codes differ in their statements?
The codes lack practical applicability
One explanation could be that this is mainly due to the nature of values themselves, that is, their under-determination. This lack of clarity is in turn directly linked to the problem that codes are barely able to provide concrete normative orientation in software development.
In principle, most codes contain values that are crucial to the ethical handling of software and cannot easily be refuted. Examples are the call to respect human dignity or the aspiration to develop technology in the service of humanity (a humanistic perspective). Although we by no means want to question or relativise the normativity of these values – they can certainly claim normative validity – it should be clear that reducing an entire system of values to these central (meta) norms is neither sufficiently determined in theoretical terms nor productive of sensible practical implications. Furthermore, not all other values can be deduced from these central tenets. Rather, they tend to take on the role of general statements which, seen individually, cannot provide concrete and hence practical guidance.
As a result, codes lack practical applicability because they do not provide normative guidance for specific ethical challenges that occur regularly – in other words: they do not achieve what they were originally designed to do. To make matters worse, the sheer abundance of different values proposed in the codes makes it easy to assign any appropriate ethical value to justify possible action, precisely because no hierarchy of values is obvious in relation to specific cases.
Implementing values as a compromise: privacy versus transparency
Many codes contain a variety of values, which are presented simply as a kind of enumeration. The point that must be emphasised here is that, without sufficient clarification, referencing and contextualisation, software engineers are left on their own to weigh up and evaluate compliance with each specific value.
The very nature of values leads to tensions in practice. In software development, conflicts therefore emerge between values to be considered, such as privacy and transparency, or autonomy/freedom and security – to name but a few. In most cases, the implementation of values must be seen as a compromise. For example, a conflict arises between transparency and privacy: both values are mentioned in the majority of codes, yet it is not possible to take them fully into account at the same time.
At a certain point, an increase in one value will necessarily result in the other value decreasing. Maximum transparency thus ultimately comes at the cost of privacy. The following figure shows a graphic representation of possible compromises between the two values.
So long as the product to be developed lies in the upper right region of this figure, the technical object can be improved ethically by increasing either or even both values at the same time; up to a certain point, one can move closer to the respective axis of the coordinate system, in the direction of the origin. However, once the curve is reached, it becomes impossible to increase one value without decreasing the other (a Pareto-efficient state, or Pareto optimality). It is certainly undisputed that the goal of software design should be a product that efficiently optimises the values we want to consider. However, it is by no means clear or obvious which point on the curve – that is, which of the many possible compromises – should be implemented. We know from mathematics that a line contains infinitely many points.
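The Pareto reasoning above can be illustrated with a small sketch. Assuming hypothetical candidate designs, each scored on privacy and transparency (the names and numbers below are purely illustrative, not from the article), a design is dominated if some other design is at least as good on both values and strictly better on one. Filtering out dominated designs leaves the Pareto frontier – and choosing among the remaining points is precisely the weighing-up that codes of conduct do not resolve.

```python
def pareto_frontier(designs):
    """Return the designs that are not dominated on (privacy, transparency).

    A design is dominated if another design scores at least as high on
    both values and strictly higher on at least one of them.
    """
    frontier = []
    for d in designs:
        dominated = any(
            other["privacy"] >= d["privacy"]
            and other["transparency"] >= d["transparency"]
            and (other["privacy"] > d["privacy"]
                 or other["transparency"] > d["transparency"])
            for other in designs
        )
        if not dominated:
            frontier.append(d)
    return frontier

# Hypothetical design candidates with illustrative scores (0-10)
candidates = [
    {"name": "A", "privacy": 9, "transparency": 2},
    {"name": "B", "privacy": 6, "transparency": 6},
    {"name": "C", "privacy": 2, "transparency": 9},
    {"name": "D", "privacy": 4, "transparency": 4},  # dominated by B
]

print([d["name"] for d in pareto_frontier(candidates)])  # prints ['A', 'B', 'C']
```

Note that the algorithm only eliminates design D, which is worse than B on both counts; it cannot tell us whether A, B or C is the “right” compromise. That choice requires exactly the ethical deliberation discussed in the following section.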
Ethical considerations and moral decision-making as a crucial approach
How do we now decide on the right weighing of conflicting values? This is exactly where ethical considerations and moral decision-making come in. Codes offer no help in answering this question. The joint ACM and IEEE code, for example, states:
“The Code as a whole is concerned with how fundamental ethical principles apply to a computing professional’s conduct. The Code is not an algorithm for solving ethical problems; rather it serves as a basis for ethical decision-making” (ACM, IEEE).
As long as a win-win situation can be assumed, ethical codes can be applied but are of little use. However, once a decision has to be made between different options, legitimate ethical reasons and values have to be weighed against each other. Hence, it is unclear how codes can serve as a basis for ethical decision-making when, in reality, it is the normative deliberations of the development team that justify the ethical design of technical objects.
Accordingly, it is essential that these ethical considerations be promoted and integrated into the software development process. This is exactly what we want to do with our approach “Ethical Deliberations for Agile Software Processes” (EDAP). Our intention is to contribute to the specific normative alignment of technical objects by facilitating a targeted, rational handling of values in the design of technology. Thus, we concentrate on the identification of values, their desirability and finally their integration into software systems.
The blogs published by the bidt represent the views of the authors; they do not reflect the position of the Institute as a whole.