Staufen Magazine 2025 – TU Hamburg – Digitalization and Industry 4.0

“Ethics is not a hindrance, but a driver of innovation.”

A conversation with Prof. Dr. Maximilian Kiener about AI, responsibility, and the future of business decisions

Artificial intelligence (AI) is entering the corporate world – with force and speed. But how can the balancing act between innovation and responsibility be achieved? Philosophy professor Maximilian Kiener explains why ethics is much more than compliance and how AI can be sustainably integrated into business processes.

Professor Kiener, you are a philosopher. Why are you interested in artificial intelligence?

AI raises many of the classic questions of philosophy in a new, very practical context: What is intelligence? What does responsibility mean? What distinguishes humans from machines? I see a close connection here: Philosophy is open to any question, and AI is open to any application. For companies, this means that the challenges are both technical and ethical in nature. How can economic success be combined with social and ecological sustainability? This is precisely where my field comes into play: ethics becomes a strategic success factor.

How can companies use AI responsibly without losing their innovative edge?

This is not a contradiction—on the contrary. Ethics is not a cost factor, but a catalyst for innovation. Those who systematically examine values are quicker to recognize which developments are sustainable. Technological decisions—such as which training data to use or how algorithms set priorities—are never neutral. Companies that understand ethics as part of their innovation strategy increase both social acceptance and the quality of their results.

Many companies view ethics primarily as a compliance issue. Why do you think this understanding falls short?

Compliance is often understood as external control or a checklist. But ethics goes deeper: it asks about the principles behind the rules, about the “why” of transparency, fairness, or responsibility. This helps companies not only to comply with regulatory requirements such as the EU’s AI Act, but also to understand them and integrate them into their own innovation strategy. Companies that internalize these basic principles act in compliance with regulations and are at the same time more resilient and credible.

Will the EU’s AI Act be a brake on innovation or an opportunity?

The impact depends crucially on implementation. Of course, there are challenges, such as the bureaucratic burden on smaller companies. However, if we understand the standards early on and implement them in a targeted manner, we can create investment security and trust, similar to the GDPR. In this way, the AI Act can become a model if it creates clarity and makes innovation predictable. It is crucial that companies do not wait and see, but act proactively and use ethics as a resource. AI innovation is not a sprint, but a marathon that requires foresight and stamina.

It’s not about competition, but complementarity: AI should not replace humans, but empower them.
PROF. DR. MAXIMILIAN KIENER
Philosopher

AI systems are often regarded as black boxes, and even experts cannot always understand them. How can companies ensure transparency and accountability?

Two terms are of central importance here: transparency and explainability. Transparency means disclosing when and how AI is used, for example in customer service or decision-making processes. Explainability means making it possible to understand how AI arrives at its results. It is important that the explanation is tailored to the respective target audience. Developers need different information than customers or supervisory boards. Companies should invest specifically in training and define clear roles for dealing with AI. Only in this way can the necessary trust be established – both internally and externally.

A hotly debated topic is agentic AI, i.e., AI with decision-making autonomy. How do you assess this development?

Agentic AI has enormous potential: it enables a new kind of scalability and efficiency. However, if the growing independence and autonomy of AI advance without sufficient human responsibility, that is not progress but a risk. The biggest challenge is to formulate clear goals without provoking dangerous shortcuts. I like to compare these systems to a highly motivated intern with little judgment: enormously capable, but still untrained in ethics. What is needed are clear structures, assigned responsibilities, and continuous monitoring: in short, human oversight. Only then can the opportunities be seized and the risks minimized.

You talk about hybrid intelligence as a model for the future. What does that mean for companies?

Hybrid intelligence describes the productive interaction between human judgment and machine performance. It is not about competition, but about complementarity: AI should not replace humans, but empower them. To achieve this, it is crucial to distribute tasks wisely, combining human strengths such as contextual understanding, empathy, and intuition with AI capabilities in a targeted manner. Targeted training, new role models, and a strong ethical foundation within the company are essential in this regard.

What recommendations would you give to managers and founders who want to introduce AI but are unsure, especially with regard to ROI and compliance?

First, act proactively and don't wait for the perfect regulation. Second, treat ethics as a strategic resource and a driver of innovation, not as a hindrance. Third, think across disciplines: technology, law, and philosophy must work together. Fourth, measure the ROI of AI not only in terms of efficiency, but also in terms of risk minimization, reputation, and sustainable corporate success. Leadership today means shaping responsibility, not just delegating it. That requires courage and an eye for the big picture.

About the person

Prof. Dr. Maximilian Kiener

Maximilian Kiener holds a doctorate in philosophy, heads the Institute for Ethics in Technology at Hamburg University of Technology, and is a research associate at the Uehiro Institute at the University of Oxford. His research focuses on the ethics of digitalization, AI, responsibility, and the regulation of technology. A member of the Responsible AI Alliance, he also advises companies and policymakers on the ethical design of artificial intelligence.


