
AI Governance

By Iiris Lahti

This blog post is based on an AI governance and AI ethics study conducted by the author. Here we share recent AI governance research and the most interesting findings of the study. We are also committed to helping your organisation build data governance practices.

How to govern your AI

AI governance is a topic that is still abstract for many organisations and business stakeholders. Some might have heard of it when reading about biased algorithms, ML development practices, or data governance. Indeed, AI governance can be seen as an extension of data governance: many data governance practices can be extended to cover AI/ML-related processes, such as the review of use cases, data collection and processing, and information security and data protection practices.
 
Why is AI governance relevant?

A few of the main reasons why organisations should consider implementing AI governance are brand and reputational risks, corporate responsibility, and future regulation. The media is filled with grim predictions of out-of-control robots, biased decision-making, unfair treatment of minority groups, privacy violations, adversarial attacks, and challenges to human rights caused by ungoverned algorithms. There have indeed been instances of biased and unfair algorithmic decision-making, mostly related to discrimination based on minority status or gender. These may cause irreparable damage to an organisation's brand image and reputation. Moreover, it is common these days for organisations to have responsibility on their agenda or responsibility objectives incorporated in their overall strategy and values, so it makes sense to keep control over their AI systems and avoid the costly mistakes of ungoverned AI. Lastly, future regulation, such as the European Commission's recent proposal for AI regulation, will eventually turn AI governance from a "nice-to-have" into a "must-have" for many organisations using AI.

 
What is AI governance?

AI governance is a topic closely related to AI ethics. A common way to express the challenges of AI is to consolidate them into a more understandable form of ethical principles, such as fairness, privacy, accountability, transparency, and explainability. Governmental bodies and international organisations, as well as private companies, have published their own guidelines for ethical AI. These include the IEEE "Ethically Aligned Design" document and the European Commission's AI HLEG "Ethics Guidelines for Trustworthy AI". The guidelines generally address the ethical challenges by proposing various ways to increase the fairness, transparency, or accountability of AI systems. The European Commission's AI HLEG suggests that the methods used to implement ethical principles should encompass the AI's entire life cycle and include both non-technical and technical methods. The non-technical methods include certificates, standardization (such as ISO standards and IEEE P7000), education and awareness, multi-stakeholder inclusion, team diversity, and governance frameworks, whereas the technical methods consist of rigorous model validation and testing as well as continuous monitoring.
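
To make the technical side more concrete, below is a minimal sketch of one possible continuous monitoring check: comparing the distribution of a model's input features in production against the training data to detect drift. The feature names, data frames, and alert threshold are illustrative assumptions, not part of any specific framework or standard.

```python
# A minimal sketch of continuous monitoring via data drift detection.
# It compares the distribution of each input feature in production data
# against the training data with a two-sample Kolmogorov-Smirnov test.
# Feature names and the alert threshold are illustrative assumptions.
from scipy.stats import ks_2samp

def detect_feature_drift(train_df, live_df, features, p_threshold=0.01):
    """Flag features whose live distribution differs from the training data."""
    drifted = []
    for feature in features:
        statistic, p_value = ks_2samp(train_df[feature], live_df[feature])
        if p_value < p_threshold:  # the distributions differ significantly
            drifted.append((feature, statistic, p_value))
    return drifted

# Hypothetical usage, with data frames standing in for real pipelines:
# alerts = detect_feature_drift(training_data, last_week_data,
#                               features=["age", "income"])
# for name, stat, p in alerts:
#     print(f"Drift alert on {name}: KS={stat:.3f}, p={p:.4f}")
```

In practice, a check like this would run on a schedule, and drift alerts would feed into the governance process (e.g., triggering model revalidation) rather than stand alone.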

Fairness is closely related to the prevention of bias and non-discrimination. AI systems are subject to various distortions, which may lead to unfair decisions. Every ML model is trained and evaluated using data, which can reflect existing biases related to religion, gender, or race. The dataset's characteristics will therefore influence a model's behaviour and can reinforce existing discrimination. Thus, the use of high-quality and representative datasets is recommended to address the issue of "garbage in, garbage out". Furthermore, the development and design phases of AI systems may also suffer from bias, and it is therefore suggested to employ diverse development teams.
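
As one concrete example of how bias can be surfaced in practice, the sketch below computes a simple demographic parity check: it compares the rate of positive model decisions across groups. The column names and toy data are illustrative assumptions; real fairness work involves several metrics and a careful definition of the groups and outcomes being compared.

```python
# A minimal sketch of one common fairness check, demographic parity:
# comparing the rate of positive model decisions across groups.
# The column names and toy data are illustrative assumptions.
import pandas as pd

def demographic_parity_gap(df, group_col="group", outcome_col="approved"):
    """Return the positive-decision rate per group and the largest gap."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates, rates.max() - rates.min()

decisions = pd.DataFrame({
    "group":    ["a", "a", "a", "b", "b", "b"],
    "approved": [1,   1,   0,   1,   0,   0],
})
rates, gap = demographic_parity_gap(decisions)
print(rates)              # positive-decision rate per group
print(f"gap: {gap:.2f}")  # a large gap may indicate disparate treatment
```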

Accountability refers to being responsible for the AI system, its potential impacts, and its behaviour. AI's complex nature can place further distance between the results of an action and the actor who caused it, raising the question of who should be held accountable and under what circumstances. This is also referred to as the "responsibility gap", whereby it is unclear who is ultimately responsible. Accountability and responsibility should be allocated among those who deploy, develop, design, and use AI systems, as we cannot hold the technology itself responsible for its actions. Organisations can create internal review boards to oversee the use and development of AI. The review boards can use impact assessments to document, assess, identify, and minimize an AI system's potential negative impacts, and to decide whether an AI system should be developed or deployed in the first place. It is also suggested that AI systems should be independently auditable by internal or external auditors. Moreover, accountability is strongly connected to human control: AI systems must be designed and implemented so that humans can retain control of the system or intervene in its actions. Furthermore, organisations must provide an option for individuals to opt out of automated decisions related to them.

Transparency can be understood as the transparency of the organisations using and developing the AI systems, or as the transparency of the system itself. The former refers to understanding by whom, why, and what decisions were made during the AI's design and development, whereas the latter refers to understanding how the system is designed and how it reaches a decision. The greatest challenge for AI governance is the complexity and opacity of the technology itself. It is possible to increase transparency by releasing the algorithm's source code or by providing information about the variables affecting the decision-making. Organisations can also consider minimizing the use of black-box models or abandoning them altogether. Transparency can also be achieved by providing explanations of the processes that lead to decisions. Explanations can be described as either global or local. Global explanations refer to the general rules and overall behaviour of the model (i.e., how the system generally reaches a decision, what drives the system's decisions, which factors are the most important ones, etc.), whereas local explanations look at a specific decision (i.e., how the model reached a particular outcome).
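
The distinction between global and local explanations can be illustrated with a short sketch. Below, permutation importance gives a global view of which features drive the model overall, while for a linear model the per-feature contributions (coefficient times feature value) give a simple local explanation of a single prediction. The dataset and model are illustrative choices, not a recommendation; opaque models typically require dedicated explainability tooling.

```python
# A minimal sketch contrasting global and local explanations.
# The dataset and model are illustrative choices only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

data = load_breast_cancer()
X, y, names = data.data, data.target, data.feature_names
model = LogisticRegression(max_iter=5000).fit(X, y)

# Global explanation: which features drive the model's decisions overall?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1][:3]:
    print(f"global: {names[i]} -> {result.importances_mean[i]:.3f}")

# Local explanation: why did the model decide this way for *this* instance?
# For a linear model, coefficient * feature value approximates each
# feature's contribution to this particular prediction.
instance = X[0]
contributions = model.coef_[0] * instance
for i in np.argsort(np.abs(contributions))[::-1][:3]:
    print(f"local:  {names[i]} -> {contributions[i]:+.3f}")
```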

In addition, it is suggested to implement ethical principles into the AI system’s design, generally referred to as “X by Design”, such as Privacy by Design and Security by Design.

 
How are organisations conducting AI governance today?

According to a study conducted by the author of this blog post, organisations are using various AI and data governance practices to govern their AI, such as review boards and impact assessments for AI use cases. In addition, organisations have in place many AI design and development practices as well as education and stakeholder communication activities to support AI governance.

The study was conducted as part of a research project called Artificial Intelligence Governance & Auditing (AIGA), which explores how to execute responsible AI in practice. The AIGA project is coordinated by the University of Turku and funded by Business Finland. Overall, 13 expert interviews were conducted with 12 Finnish organisations. The organisations either created AI systems for clients, used AI systems as a sales product or in their own business activities, or provided AI-related services. Eight of the interviewed organisations operated in the private sector and four in the public sector. The participants held a range of managerial positions, from lead data scientists to CEOs.

In the figure below, you can find the key themes of the study.

Organisations built their AI governance practices on top of data governance. Indeed, data governance functioned as a foundation for AI governance, and some of the practices covered both data- and AI-related use cases. Organisations should therefore have a solid data governance base, which can then be extended to cover AI systems.
 
Our services in Data Governance

AI Roots has successfully helped organisations develop and implement data governance practices suited to each company's unique needs and business environment. We can support our customers, for example, by:

  • developing data strategy and setting target state for data governance
  • creating guidelines for responsible and ethical use of data and AI
  • defining and onboarding data stewards and other new data-related roles (i.e., people who are responsible for data flow and quality, data usage and understanding, and managing access to data)
  • designing data architecture to cover governance needs
  • aligning data governance with privacy policies
  • designing data management tools
  • supporting data catalogue requirement gathering and building a business case, evaluating tool options, and planning and supporting successful deployment
  • developing data quality management practices and tools
  • organizing data governance related trainings

We at AI Roots are committed to assisting companies in developing data governance practices, all the way from technical data quality management to helping the business take ownership of and govern the responsible use of data and AI in the organisation. Please feel free to contact us if you need help in this area.

Iiris | iiris@rootsof.ai | +358 40 5188 207

Did the article spark some thoughts? We'd love to hear!