
Talking about the “What?” and “How?” of AI ethics

As artificial intelligence (AI) continues to permeate our daily lives, it is becoming increasingly important for systems incorporating AI to be designed in an ethical and responsible manner. While the potential benefits of such systems are vast, it is imperative that we also consider the harm that could be caused should this technology be developed without due attention to ethical considerations.

Hence, as AI applications progressively expand on a global scale, we must ensure ethical input, bringing to bear a deep understanding of both the content of ethical considerations and the conditions that make ethical AI development possible.

The goal of this blog post is to shed light on two essential components of AI ethics: the content of ethical considerations and the governance of organisations and development processes. By examining both the “what question” (the content of ethical considerations) and the “how question” (the governance of AI development), we hope to help practitioners and researchers understand how to create AI products in a responsible and ethical manner.

The “what” and the “how” of AI ethics

The “what question” of AI ethics deals with content, namely the values, principles and ethical considerations that should be integrated into AI products and systems. At first glance, incorporating these values into AI designs may seem straightforward. However, as AI is often used in contexts where different values come into conflict, it is vital that we understand how to navigate ethical trade-offs and arrive at well-justified, acceptable decisions.

The “how question” of AI ethics is equally important, in that it deals with the conditions of organisational governance that must be in place to make ethical AI development possible. This involves understanding the human, technological and organisational factors that must come together to create an environment where AI developers, designers, managers and stakeholders are all trained, motivated and incentivised to incorporate ethics into their work.

Together, these two dimensions are critical for ensuring that AI is developed in a way that not only aligns with our values, but also benefits society as a whole. In the spirit of Kant, one could say: governance without content is empty, content without governance is blind.

In what follows, we will briefly explore both the content of ethical considerations in AI and the governance of AI development, providing insights and recommendations to help ensure that AI is developed in an ethical and responsible manner.

The Content of Ethical Considerations

When designing AI systems, it is important to consider a wide range of ethical values and principles, such as privacy and transparency. However, these values often conflict with each other, making it necessary to develop a solution that fits the application’s context. For example, there may be a question of how much privacy should be sacrificed in order to increase the transparency or accuracy of a model. It is essential that developers and designers are able to make well-justified decisions when balancing these trade-offs. Answering these questions is complex, as the outcomes of software tools and AI applications are heavily influenced by both the intended and potentially unintended contexts in which they are ultimately used.
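To make the privacy/accuracy trade-off a little more tangible, here is a minimal, purely illustrative sketch in the spirit of differential privacy: a statistic is published with added noise, and a single parameter (here called epsilon, a hypothetical choice for this example) controls how much accuracy is given up in exchange for privacy.

```python
# Illustrative sketch only: the function name, epsilon values and the use of
# Laplace noise are assumptions made for this example, not a prescription.
import numpy as np

def noisy_average(values, epsilon):
    """Publish the average of values in [0, 1] with Laplace noise added.

    A smaller epsilon adds more noise: stronger privacy protection for the
    individuals behind the data, but a less accurate published statistic.
    """
    true_average = float(np.mean(values))
    sensitivity = 1.0 / len(values)  # max change if one individual's value changes
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_average + noise

data = np.random.rand(1000)  # stand-in for sensitive individual-level values
for eps in (0.01, 0.1, 1.0):
    print(f"epsilon={eps}: published {noisy_average(data, eps):.4f}, "
          f"true {np.mean(data):.4f}")
```

The point is not the mathematics, but that the trade-off becomes an explicit, discussable design parameter rather than an implicit by-product of implementation choices.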

To address the ethical challenges of AI, it is crucial to:

  1. recognise potential ethical opportunities and risks;
  2. engage in ethical deliberation, considering the specific context and fully weighing all related issues before reaching a justifiable conclusion;
  3. translate these conclusions into technical features for integration into the AI product.
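One hypothetical way the third step might look in practice is to record each justified conclusion as an explicit, reviewable artefact that names the value at stake, the decision taken and the technical feature that implements it. The sketch below is an invented illustration, not a description of any existing tool or process.

```python
# Hypothetical sketch: all field names and entries are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class EthicalRequirement:
    """One agreed ethical conclusion, tied to a concrete technical feature."""
    value_at_stake: str     # e.g. "privacy" or "transparency"
    decision: str           # the justified conclusion from ethical deliberation
    technical_feature: str  # how the product will implement it
    owner: str              # who is accountable for the implementation

requirements = [
    EthicalRequirement(
        value_at_stake="privacy",
        decision="Only aggregated, noise-protected usage statistics may be uploaded.",
        technical_feature="On-device aggregation with added noise before upload.",
        owner="data platform team",
    ),
    EthicalRequirement(
        value_at_stake="transparency",
        decision="Users must be able to see why a recommendation was made.",
        technical_feature="Per-recommendation explanation shown in the interface.",
        owner="recommendation team",
    ),
]

# A simple checklist that can be reviewed before each release.
for req in requirements:
    print(f"[{req.value_at_stake}] {req.technical_feature} (owner: {req.owner})")
```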

Carrying out these steps is clearly a challenging task that requires individuals with motivation, talent and ability. This leads to the question of how we can make it possible: organisational governance.

The Governance of AI Development

In addition to considering the content of ethical considerations, AI development teams must be empowered to integrate these considerations effectively into the design of AI products. This involves creating an environment in which developers, designers and managers benefit from a supportive ethical climate and well-designed incentives that encourage the incorporation of ethics into the design process. Human factors play a crucial role in achieving this goal and call for: effective communication and collaboration among team members; a supportive organisational culture, especially in regard to high-level support for “ethics by design”; and clear ethical guidelines.

It is evident that addressing ethical issues in AI is a complex task that requires identifying potential ethical concerns, evaluating them and implementing solutions. Achieving this requires well-designed motivation and incentive structures; otherwise, developers and designers may not be prepared to tackle such complex ethical questions. For developers to undertake “ethics by design” successfully, it is vital that they have access to education and resources, as well as the support structures that bring these elements together and allow them to seek expert help when needed.

In conclusion, ethics should be a key consideration in the design, development and maintenance of AI systems. By attending to both the content of ethical considerations and the governance of AI development, we can work towards creating AI products that are not only beneficial, but also responsible and ethical.

The blogs published by the bidt represent the views of the authors; they do not reflect the position of the Institute as a whole.