Responsible Innovation: A Checklist of AI Governance Requirements

To realize the potential of responsible AI innovation, organizations must be able to translate AI governance requirements into technical controls.

Mapping AI Governance Requirements to Technical Controls. [AI Generated] Source: ClaritasGRC

The rapid advancement of AI has led to the emergence of AI governance principles and calls for responsible innovation. To act on these principles, organizations must be able to translate AI governance requirements into technical controls.

This post reviews the regulatory requirements for AI governance and describes an approach to map them to technical controls, which organizations can use as a checklist for implementing responsible innovation practices. We use an AI application as a working example to illustrate how to apply this approach to enterprise use cases.

AI Governance Regulation

The EU AI Act, which entered into force on August 1, 2024, is the first and most comprehensive regulation of its kind to address the challenges of AI governance.

SCOPE

The Act makes a distinction between AI models and AI systems.

  • AI Models
    • An AI model refers to the algorithm or set of mathematical rules that processes data to make predictions, classifications, or decisions. It is essentially the core component that drives how an AI system operates.
    • AI models are developed through techniques like machine learning, statistical analysis, or rule-based logic, where they learn from data to perform specific tasks (e.g., classifying images, predicting financial outcomes).
  • AI Systems
    • An AI system, as defined by the EU AI Act, is a complete software system that includes one or more AI models and the infrastructure around them. It interacts with its environment, processes inputs, and generates outputs such as predictions, recommendations, or decisions.
    • An AI system incorporates an AI model but adds layers of input-output handling, data management, security controls, user interfaces, and compliance features.

AI systems are directly in scope of the Act. If an AI model is integrated into a larger application (an AI system), it is subject to the rules of the Act; otherwise it is excluded (e.g., an AI model used only for R&D).

RISK CATEGORIES

The Act adopts a risk-based classification approach. It identifies four categories of risk:

  • Unacceptable risk:
    • AI systems that pose a clear threat to the safety, livelihoods, and rights of people
    • These include systems such as social scoring by governments, real-time surveillance without a targeted scope and legal safeguards, and manipulative systems targeting vulnerable groups (such as children and the elderly).
    • These systems are banned outright.
  • High risk:
    • AI systems used in specific high-risk use cases, with significant impact on fundamental rights, safety, or critical societal functions.
    • These include AI systems used for facial recognition or biometric identification within a targeted scope and with legal safeguards, such as those used by law enforcement for surveillance purposes, and AI systems used for recruitment, hiring, or employee evaluation.
    • They must undergo mandatory risk assessments and conformity assessments under the EU AI Act.
  • Limited risk:
    • AI systems that pose limited risk to individuals and limited impact on fundamental rights or safety.
    • These include chatbots and recommendation systems used on e-commerce sites, for targeted advertising, or for automated marketing.
    • These systems carry transparency obligations so that users know AI is present.
  • Minimal/no risk:
    • AI systems that pose no or negligible risk to individuals’ safety or rights.
    • These systems include AI-enabled video games, productivity tools like Grammarly, or spam filters.
    • They carry no additional compliance obligations.

The risk classification levels under the EU AI Act. [AI Generated] Source: ClaritasGRC

AI Governance Challenges

Under the EU AI Act, there are several requirements related to governance and ethics that need to be addressed when deploying AI systems. They can be categorized at a high level into the challenges depicted in the following chart.

The governance and ethics challenges under the EU AI Act. Source: ClaritasGRC

Below, we will discuss these challenges and describe the technical controls that can be used to address them.

#1 Security and Privacy

DATA PRIVACY RISKS

While the EU AI Act doesn't directly regulate data protection, it requires in-scope AI systems to comply with existing data protection laws, including GDPR.

  • For limited risk systems under the Act, this means that they continue to respect GDPR principles that apply to processing of personal data, such as obligations related to notice, consent, and data subject rights.
  • For AI systems that engage in automated decision-making and are hence categorized as high-risk under the Act, this means a stricter application of the GDPR, including documentation of Data Protection Impact Assessments (DPIAs) performed under the GDPR as well as conformity assessments performed under the Act.

DATA SECURITY RISKS

The Act emphasizes the need for AI systems to be developed, deployed, and maintained in a manner that ensures data security in compliance with existing laws.

  • For limited risk systems under the Act, this means that they continue to respect GDPR principles that apply to security of personal data, such as obligations related to mitigating the risks associated with data breaches, unauthorized access, manipulation, or misuse of data.
  • For AI systems that are categorized as high-risk under the Act, this again means a stricter application of the GDPR, including mandatory enforcement of secure-by-design principles, in addition to a risk management system, governance practices, incident response plans, and conformity assessments performed under the Act.

Data privacy and security risks under the EU AI Act. [AI Generated] Source: ClaritasGRC

EXAMPLE: SECURITY AND PRIVACY CONTROLS

The following list of security and privacy controls indicates which apply to Limited-risk and High-risk systems and which apply to High-risk systems only. It can be used to scope an organization's AI governance efforts based on the risk classification of the AI application under review.

Controls for Limited and High Risk Systems

Data Protection and Privacy

a. Data Minimization

  • Description: Collect and process only the data necessary for the intended purpose.

  • Implementation: Use techniques like data anonymization and pseudonymization to protect personal data.
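A minimal pseudonymization sketch in Python, assuming direct patient identifiers are replaced with salted one-way hashes and unneeded fields are dropped before data reaches the AI system; the field names and salt handling are illustrative only.

```python
import hashlib
import os

# Illustrative salt; in practice it would come from a secrets manager, not code.
SALT = os.environ.get("PSEUDONYM_SALT", "example-salt").encode()

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a salted, one-way hash."""
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()

def minimize_record(record: dict) -> dict:
    """Keep only the fields the use case needs and pseudonymize the patient ID."""
    return {
        "patient_ref": pseudonymize(record["patient_id"]),
        "symptoms": record["symptoms"],
        "diagnosis_codes": record["diagnosis_codes"],
        # Name, address, and other direct identifiers are dropped entirely.
    }
```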

b. Secure Data Storage and Transmission

  • Encryption at Rest and in Transit: Use strong encryption protocols (e.g., AES-256, TLS) to protect data.

  • Secure Communication Channels: Implement VPNs or secure APIs for data transmission.
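As one possible illustration of encryption at rest, the sketch below uses the `cryptography` library's Fernet interface (an AES-based authenticated encryption scheme). Key generation is shown inline only for brevity; a real deployment would keep keys in a dedicated key management service.

```python
from cryptography.fernet import Fernet

# In practice the key is generated once and stored in a KMS/secrets manager,
# never hard-coded or committed to source control.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b'{"patient_ref": "a1b2c3", "diagnosis_codes": ["E11.9"]}'
ciphertext = fernet.encrypt(record)     # store this at rest
plaintext = fernet.decrypt(ciphertext)  # decrypt only when needed
assert plaintext == record
```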

Access Control and Authentication

a. Role-Based Access Control (RBAC)

  • Description: Restricts system access to authorized users based on their roles.

  • Implementation: Define user roles and permissions carefully to limit access to sensitive data and system functions.
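A minimal RBAC sketch with illustrative roles and permissions; real deployments would typically delegate this check to an identity provider or policy engine rather than an in-process mapping.

```python
# Illustrative role-to-permission mapping for a healthcare AI application.
ROLE_PERMISSIONS = {
    "clinician": {"read_patient_record", "run_cds_query"},
    "analyst": {"run_cds_query"},
    "admin": {"read_patient_record", "run_cds_query", "manage_models"},
}

def is_authorized(role: str, permission: str) -> bool:
    """Return True if the given role grants the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# Example: an analyst may query the system but not read raw patient records.
assert is_authorized("analyst", "run_cds_query")
assert not is_authorized("analyst", "read_patient_record")
```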

b. Multi-Factor Authentication (MFA)

  • Description: Requires multiple forms of verification before granting access.

  • Implementation: Combine passwords with biometric verification or one-time codes.


Secure Development Lifecycle (SDLC)

a. Code Reviews and Testing

  • Description: Regularly review code for security vulnerabilities.

  • Implementation: Implement static and dynamic code analysis tools.

b. Secure Coding Practices

  • Description: Follow best practices to prevent common security flaws.

  • Implementation: Adhere to standards like OWASP Secure Coding Practices.

Third-Party Risk Management

a. Vendor Security Assessments

  • Description: Assess the security posture of third-party providers.

  • Implementation: Require compliance certifications and conduct regular audits.

b. Secure APIs and Integrations

  • Description: Ensure that integrations with third-party services are secure.

  • Implementation: Use secure API gateways and monitor data exchanges.

Controls for High Risk Systems Only*

Robustness and Security Against Attacks

a. Adversarial Robustness

  • Description: Enhance AI models to withstand adversarial attacks that attempt to deceive the system.

  • Implementation: Use adversarial training, where the model is trained on both normal and adversarial examples.
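A minimal sketch of one adversarial-training step using the fast gradient sign method (FGSM), assuming a PyTorch classifier; `model`, `images`, `labels`, and `optimizer` are placeholders for the organization's own training setup.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, images, labels, optimizer, epsilon=0.01):
    """Train on both clean and FGSM-perturbed examples."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()

    # Craft adversarial examples by stepping along the sign of the input gradient.
    adv_images = (images + epsilon * images.grad.sign()).detach()

    optimizer.zero_grad()
    combined_loss = (F.cross_entropy(model(images.detach()), labels) +
                     F.cross_entropy(model(adv_images), labels)) / 2
    combined_loss.backward()
    optimizer.step()
    return combined_loss.item()
```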

b. Input Validation and Sanitization

  • Description: Ensure that all input data is checked for validity and sanitized to prevent injection attacks.

  • Implementation: Implement strict validation rules and escape harmful characters or code.
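A minimal validation-and-sanitization sketch for a prompt-style text input; the length limit and character allow-list are illustrative, and production systems would typically add schema validation and model-specific guardrails.

```python
import re

MAX_PROMPT_LENGTH = 2000
# Illustrative allow-list: letters, digits, whitespace, and basic punctuation.
DISALLOWED = re.compile(r"[^\w\s.,:;?!'\"()-]")

def validate_and_sanitize(prompt: str) -> str:
    """Reject oversized or empty inputs and strip characters outside the allow-list."""
    if not prompt or len(prompt) > MAX_PROMPT_LENGTH:
        raise ValueError("Prompt is empty or exceeds the maximum allowed length.")
    return DISALLOWED.sub("", prompt)
```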

Risk Assessment and Management

a. Regular Security Assessments

  • Description: Conduct periodic risk assessments to identify vulnerabilities.

  • Implementation: Use tools like penetration testing and vulnerability scanning.

b. Threat Modeling

  • Description: Analyze potential threats to the AI system and data.

  • Implementation: Identify assets, potential attackers, attack vectors, and mitigations.


Monitoring and Logging

a. Continuous Monitoring

  • Description: Implement real-time monitoring of the AI system’s performance and security.

  • Implementation: Use intrusion detection systems (IDS) and anomaly detection algorithms.

b. Logging and Audit Trails

  • Description: Keep detailed logs of system activities for auditing and compliance purposes.

  • Implementation: Record user actions, data access events, and system changes securely.
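A minimal audit-logging sketch using Python's standard `logging` module; the event fields are illustrative, and a production system would ship these records to a tamper-evident, access-controlled store.

```python
import json
import logging
from datetime import datetime, timezone

audit_logger = logging.getLogger("audit")
audit_logger.setLevel(logging.INFO)
audit_logger.addHandler(logging.FileHandler("audit_trail.log"))

def log_audit_event(user: str, action: str, resource: str) -> None:
    """Append a structured audit record for later review."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "resource": resource,
    }
    audit_logger.info(json.dumps(event))

log_audit_event("clinician_42", "run_cds_query", "patient_ref=a1b2c3")
```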

Incident Response Planning

a. Incident Response Plan

  • Description: Develop a plan to respond to security incidents effectively.

  • Implementation: Define roles, communication channels, and recovery procedures.

b. Regular Drills and Updates

  • Description: Test and update the incident response plan periodically.

  • Implementation: Conduct mock incident response exercises.

*This means these requirements are not explicitly called out for non-High risk systems in either the EU AI Act or the GDPR. While they may indirectly be implied in some cases, we are focused on mapping controls that are explicitly required.

#2 Fairness and Non-Discrimination

The EU AI Act requires in-scope AI systems, particularly the high-risk systems, to comply with stringent fairness and non-discrimination requirements, including rigorous testing and conformity assessments.

While limited risk systems under the Act are encouraged to avoid bias and discrimination in their design, they are not required to perform rigorous testing or undergo conformity assessments.

EXAMPLE: FAIRNESS AND NON-DISCRIMINATION CONTROLS

a. Bias Audits

  • Description: Regularly assess AI systems for bias in decision-making and outcomes.

  • Implementation: Conduct audits to detect disparities in model predictions across different demographic groups (e.g., based on gender, race, or age).

b. Fairness Metrics

  • Description: Use specific metrics to evaluate fairness in AI systems.

  • Implementation: Common fairness metrics include:

    • Demographic Parity

    • Equal Opportunity

    • Equalized Odds
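A minimal sketch computing the first two of these metrics from model outputs, assuming binary predictions and a binary protected attribute held in NumPy arrays; the toy data is purely illustrative.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups."""
    tpr_a = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_b = y_pred[(group == 1) & (y_true == 1)].mean()
    return abs(tpr_a - tpr_b)

# Toy data: predictions, ground-truth labels, and a protected attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1])
y_true = np.array([1, 0, 1, 0, 0, 1])
group  = np.array([0, 0, 0, 1, 1, 1])
print(demographic_parity_gap(y_pred, group), equal_opportunity_gap(y_true, y_pred, group))
```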

c. Fairness Impact Assessments

  • Description: Perform detailed Fairness Impact Assessments (FIAs) to evaluate the potential impacts of AI models on various demographic groups.

  • Implementation: Conduct FIAs before and after deployment to assess whether the model has unfair impacts on protected groups (e.g., minorities, women).

d. Human-in-the-Loop (HITL) Systems

  • Description: Involve human experts to review and correct AI-generated predictions.

  • Implementation: Deploy human-in-the-loop systems in high-stakes scenarios (e.g., recruitment, loan approvals) to ensure that AI decisions are verified for fairness by humans.

#3 Explainability and Transparency (XAI)

The EU AI Act requires in-scope AI systems, particularly the high-risk systems, to comply with stringent explainability and transparency requirements, including detailed documentation and conformity assessments.

On the other hand, limited risk systems under the Act are only required to meet transparency obligations to inform individuals about the use of AI. They are not required to maintain detailed documentation or undergo conformity assessments.

EXAMPLE: EXPLAINABILITY AND TRANSPARENCY (XAI) CONTROLS

a. LIME (Local Interpretable Model-Agnostic Explanations)

  • Description: LIME is a popular tool that provides local explanations for individual predictions by approximating the AI model with an interpretable model in the vicinity of the prediction.

  • Implementation: LIME can be applied to black-box models (e.g., deep learning, random forests) to explain why a certain prediction was made for a specific input. It helps highlight the most influential features used in decision-making.
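A minimal LIME usage sketch for a tabular classifier; the random-forest model and synthetic data stand in for a real model and feature set.

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

# Toy data standing in for real features such as lab values or claim attributes.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=["f0", "f1", "f2", "f3"],
    class_names=["negative", "positive"],
    mode="classification",
)

# Explain one prediction by fitting an interpretable surrogate around it.
explanation = explainer.explain_instance(X_train[0], model.predict_proba, num_features=4)
print(explanation.as_list())  # top features driving this prediction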

b. SHAP (Shapley Additive Explanations)

  • Description: SHAP assigns a value to each feature in the model’s prediction by calculating its contribution to the final output. It’s based on game theory and provides consistent explanations across models.

  • Implementation: SHAP can explain the global behavior of models by showing feature importance, as well as individual predictions by attributing scores to features.
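A minimal SHAP sketch using a tree explainer on a toy risk-score model; the regressor and synthetic data are placeholders for the organization's own model.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy risk-score model standing in for a real predictive model.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X[:, 0] * 2 + X[:, 1] + rng.normal(scale=0.1, size=200)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one contribution per sample and feature

# Global behavior: mean absolute contribution of each feature.
print(np.abs(shap_values).mean(axis=0))
# Local explanation: contributions for a single prediction.
print(shap_values[0])
```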

c. Model Cards

  • Description: Model Cards are a documentation framework that provides clear and structured information about an AI model’s design, intended use, performance metrics, and limitations.

  • Implementation: Developers should create Model Cards that explain the AI system’s purpose, data sources, decision-making processes, and potential biases.
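Model cards are often maintained as structured files stored alongside the model artifacts. A minimal, illustrative template as a Python dictionary follows; the field names reflect the spirit of the Model Cards framework and the values are placeholders to be filled in by the model owner.

```python
# Illustrative model card template; every value is a placeholder.
model_card = {
    "model_name": "<model name>",
    "version": "<version>",
    "intended_use": "<who the model is for and what decisions it supports>",
    "out_of_scope_uses": ["<uses the model was not designed or evaluated for>"],
    "training_data": "<data sources, collection period, preprocessing>",
    "evaluation_data": "<held-out datasets used for testing>",
    "performance_metrics": {"<metric name>": "<value per relevant subgroup>"},
    "known_limitations": ["<failure modes, populations with weaker performance>"],
    "ethical_considerations": "<bias, privacy, and safety notes>",
}
```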

d. Algorithmic Impact Assessments (AIA)

  • Description: AIAs assess the potential risks, impacts, and benefits of AI systems before deployment, helping to ensure that the AI system is both ethical and transparent in its operation.

  • Implementation: Organizations can use AIAs to document the expected outcomes of AI systems and identify any ethical or fairness concerns. This can also be used to provide transparency to regulators and users.

#4 Accountability and Control

The EU AI Act requires in-scope AI systems, particularly the high-risk systems, to comply with stringent accountability and control requirements, including ongoing monitoring and conformity assessments.

While limited risk systems under the Act are encouraged to have accountability and human oversight, they are not required to perform ongoing monitoring or undergo conformity assessments.

EXAMPLE: ACCOUNTABILITY CONTROLS

a. Model Version Control

  • Description: Ensure that all versions of AI models are documented and tracked, enabling a clear record of model evolution over time.

  • Implementation: Use version control systems like Git or dedicated machine learning platforms (e.g., MLflow, DVC) to manage model versions, track changes, and record performance metrics for each version.
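A minimal MLflow sketch showing how a model version, its parameters, and its metrics can be recorded for each training run; tracking-server configuration is omitted, and the model and data are toy stand-ins.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

with mlflow.start_run(run_name="cds-model-v1"):
    mlflow.log_param("n_estimators", 50)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Persist the model artifact so this exact version can be audited or rolled back.
    mlflow.sklearn.log_model(model, "model")
```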

b. Model Governance Frameworks

  • Description: Implement governance frameworks that outline roles, responsibilities, and oversight structures for AI systems.

  • Implementation: Create a formal governance process to manage the lifecycle of AI models, including roles for AI ethics committees and compliance officers. This framework ensures accountability for the outcomes produced by AI systems.

c. Human Oversight Mechanisms

  • Description: Implement systems that allow human operators to intervene, monitor, and override AI decisions when necessary.

  • Implementation: Design AI systems with override capabilities and require human review for high-risk decisions, especially in sectors like healthcare, law enforcement, or finance.
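A minimal sketch of a human-in-the-loop gate, assuming the AI system returns a decision with a confidence score and that high-risk or low-confidence decisions are routed to a human reviewer; the threshold and review queue are illustrative.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.9  # illustrative; tuned per use case and risk level
review_queue = []

@dataclass
class Decision:
    outcome: str
    confidence: float
    high_risk_context: bool

def route_decision(decision: Decision) -> str:
    """Auto-apply only low-risk, high-confidence decisions; escalate the rest."""
    if decision.high_risk_context or decision.confidence < CONFIDENCE_THRESHOLD:
        review_queue.append(decision)
        return "pending_human_review"
    return "auto_approved"

print(route_decision(Decision("eligible", 0.95, high_risk_context=True)))  # escalated
```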

d. AI-Specific Incident Response Plans

  • Description: Develop incident response procedures tailored to AI systems, ensuring quick mitigation of risks such as biased outputs, model drift, or system malfunctions.

  • Implementation: Create playbooks that outline steps to investigate AI-related incidents, correct erroneous outputs, and prevent future occurrences. These playbooks should include roles, responsibilities, and escalation paths.

Enterprise AI Use Cases

We will now apply this approach to meet AI governance obligations for an AI application supporting a set of enterprise AI use cases.

Many enterprise AI applications are based on a technique called Retrieval Augmented Generation (RAG). RAG combines information retrieval with natural language generation to create powerful AI systems capable of handling complex tasks.
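At a high level, a RAG pipeline retrieves the documents most relevant to a query and passes them to a generative model as context. The following library-agnostic sketch illustrates the flow; the retriever and LLM interfaces are placeholders for whatever vector store and model the organization actually uses.

```python
def rag_answer(query: str, retriever, llm, top_k: int = 3) -> str:
    """Retrieve supporting documents, then generate a grounded answer."""
    # 1. Retrieval: find the top-k documents most relevant to the query.
    documents = retriever.search(query, top_k=top_k)   # placeholder retriever API

    # 2. Augmentation: build a prompt that grounds the model in those documents.
    context = "\n\n".join(doc.text for doc in documents)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

    # 3. Generation: the LLM produces an answer conditioned on the retrieved context.
    return llm.generate(prompt)                         # placeholder LLM API
```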

For the purposes of illustration, we will assume a RAG-based AI system for a healthcare organization that operates in the US and EU. The system can support the following use cases:

  • Use Case #1: Clinical Decision Support (CDS) Systems: It can assist in making diagnostic and treatment decisions by retrieving relevant medical literature, patient history, and clinical guidelines, then generating a summary.
  • Use Case #2: Health Chatbots and Virtual Assistants: It can provide anonymized responses to medical questions by retrieving relevant content from databases of symptoms, conditions, and treatments.
  • Use Case #3: Patient Insurance Data Reporting: It can retrieve patient claim records for insurance purposes to generate comprehensive patient summaries and make future eligibility decisions for medical coverage.

We will further assume that this AI system is both used internally by the organization and deployed for external use in third-party systems.

EXERCISE: MAPPING REQUIREMENTS TO CONTROLS

| Inquiry | Application | Use Case 1 | Use Case 2 | Use Case 3 |
| --- | --- | --- | --- | --- |
| Is this an AI Model or System? | AI System that gets deployed | | | |
| Is this a Provider or Integrator? | Provider for internal deployments, Integrator for external deployments | | | |
| Is this in scope of GDPR? | YES => Includes EU consumers | | | |
| Does it deal with high-risk processing under GDPR? | | YES => Automated decision making with sensitive data | NO => No automated decision making, no personal data | YES => Automated decision making with sensitive data |
| What is the risk category of the AI system under the EU AI Act? | | Limited risk => Used in limited risk context | Limited risk => Used in limited risk context | High risk => Used in specific high risk context |
| What controls does it require? | | Security and Privacy (Limited risk controls) + Transparency obligation + Strict GDPR obligations (DPIA) | Security and Privacy (Limited risk controls) + Transparency obligation | Security and Privacy (High risk controls) + Strict GDPR obligations (DPIA + Secure by Default) |

Conclusion

This post provided a review of AI governance regulations and an approach to map AI governance requirements into technical controls. This serves as a checklist for organizations on their journey toward implementing responsible innovation practices.

Connect with us

If you would like to reach out for guidance or provide other input, please do not hesitate to contact us.