
Official Kick-Off: KAI Center Launch Event
Join us for the official launch of the Kristiania AI Center (KAI). Welcome to an inspiring evening of insights on responsible AI, followed by networking, drinks and snacks.
Date: May 6th 2026
Time: 17.30–19.30 (doors open at 17.00)
Venue: Kirkegata 24, room KAU-B1-02
Register here (registration closes May 1st)
Welcome to the official kick-off of the KAI Center
We are proud to host Camila Lombana-Diaz, recognized as one of the world’s 100 Brilliant Women in AI Ethics. She will share valuable insights on responsible and trustworthy AI, an increasingly critical topic across industries and disciplines.
Program:
17.00: Refreshments
17.30: Official opening of Kristiania AI center, by Pedro Lind, professor and rector Trine Johansen Meza, Kristiania
17.35–18.20: The code doesn’t decide, you do: Responsible AI, beyond the automation myth, by Camila Lombana-Diaz
18.20–18.30: Q&A, Synne Tollerud Bull, professor, Kristiania
18.30–19.30: Join us for snacks and drinks! This is an excellent opportunity to network with colleagues, partners, and professionals across fields, and explore future collaboration around AI innovation and ethics.
We look forward to celebrating this important milestone with you.
Read more about the Kristiania AI Center.

Camila Lombana-Diaz
Camila Lombana-Diaz is an international expert in ethical artificial intelligence and has been named among the top 100 women in AI ethics globally. She works at the intersection of technology, policy, and society, focusing on how AI can be developed and governed in responsible and inclusive ways. Through her work at SAP, she has contributed to global AI ethics frameworks, including co-authoring the SAP AI Ethics Handbook. She is also a recognized voice in debates on “pluriversal” approaches to AI, emphasizing the need for diverse perspectives in technology development.
About Lombana-Diaz’s presentation:
The rapid expansion of AI systems and their growing societal impact have intensified the need for AI ethics as a field concerned with aligning AI with human rights — especially principles such as fairness, accountability, transparency, and societal well-being.
Responsible AI has emerged as an effort to operationalize these principles through governance frameworks, technical tools, and organizational processes. At the same time, a key challenge remains: bringing institutions, policymakers, and the public “up to speed” with both the pace and the complexity of AI development.
Current AI ethics debates are often structured around two dominant approaches: principle-based frameworks, grounded in ethical guidelines, and risk-based approaches, focused on governance and regulation.
Within both perspectives, it is essential to understand AI as a socio-technical system — one that is deeply embedded in social, political, and economic contexts. This includes challenges such as algorithmic bias and discrimination, impacts on labor, environmental costs, epistemic harms, and the concentration of power.
At the same time, an epistemological gap persists between complex ethical values and the technical metrics and benchmarks commonly used in AI safety. These metrics often struggle to fully capture broader ethical concerns.
To address this gap, alternative perspectives can be helpful. Drawing on ayni — the Andean Indigenous principle of reciprocity and relational responsibility — relational and pluriversal approaches offer new ways of thinking about AI governance. These perspectives aim to bridge the divide between technical safety metrics and broader forms of ethical accountability, by emphasizing relationships, mutual responsibility, and the wider societal context in which AI operates.