Ethical Risk Assessment of AI in Practice Methodology: Process-oriented Lessons Learnt from the Initial Phase of Collaborative Development with Public and Private Organisations in Norway

Natalia Murashova, Diana Saplacan Lindblom, Aida Omerovic, Heidi E. I. Dahl, Leonora Onarheim Bergsjø

Research output: Contribution to conference › Paper › peer-review

Abstract

Artificial Intelligence (AI) and its ethical implications are not new to academia and business. The challenges of embedding principles for ethical AI in practice are evident, and even though the gap between theory and practice is narrowing, progress does not meet the urgent need for responsible technology development and deployment. Embedding ethical principles in existing risk assessment practices is a novel, process-oriented approach that can contribute to operationalising AI ethics in organisational practice. This paper elaborates on the initial phase of the collaborative development of an ethical risk assessment of AI methodology, involving private and public organisations in Norway. We reflect upon our experience and present key takeaways in the form of three lessons learnt from embedding a model-based security risk analysis method (CORAS) and the Story Dialog Method (SDM) in the initial phase of the collaborative methodology development. This study concludes that ethical risk assessment of AI in practice is feasible and explores design issues related to cross-sectoral settings, flexibility of the methodology, and power relationships.
Original language: English
Pages: 8
Publication status: Published - 2025
Externally published: Yes
