
How Information Governance Best Practices Promote US Businesses' Compliance with the EU AI Act (and why they should care)





The European Union’s AI Act introduces a regulatory framework that imposes strict compliance requirements on AI systems, particularly for businesses operating in or interacting with the EU market. The Act classifies AI systems based on their risk levels, ranging from minimal to high risk, with specific requirements for transparency, accountability, and data governance.


US-based businesses developing AI models that process data from EU residents, are deployed in EU markets, or influence decision-making processes affecting EU residents will need to ensure compliance with these new regulations to maintain market access and avoid penalties. Keeping proper records is one of the cornerstones of that compliance.


Key Compliance Requirements Under the EU AI Act


  1. Rigorous Data Recordkeeping and Documentation


     Under the EU AI Act, businesses must maintain extensive documentation regarding AI model training, dataset sources, and data processing methods. This ensures transparency and allows regulators to audit the AI’s decision-making process. Unlike US state-specific laws that focus on privacy and consumer rights, the EU AI Act demands documentation for risk assessment, bias mitigation, and ongoing monitoring. From an information governance perspective, this means that businesses need to be able to source the right information quickly, dispose of inaccurate and obsolete records and data that may lead to AI hallucinations, and take other proactive compliance measures to promote the integrity of their AI models and systems.
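In practice, this kind of recordkeeping often starts with a structured, machine-readable record of each training run. The sketch below shows one minimal way to capture such a record as an append-only JSON log entry; the field names and values are illustrative assumptions, not a format prescribed by the Act.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class TrainingRunRecord:
    """Illustrative record of one AI training run (field names are assumptions)."""
    model_name: str
    model_version: str
    dataset_sources: list
    preprocessing_steps: list
    run_timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        # Serialize for an append-only audit log that a regulator could review.
        return json.dumps(asdict(self), sort_keys=True)

record = TrainingRunRecord(
    model_name="credit-scoring",
    model_version="2.1.0",
    dataset_sources=["internal_loans_2023.csv", "vendor_bureau_feed"],
    preprocessing_steps=["deduplicate", "drop_pii_columns", "normalize"],
)
print(record.to_json())
```

Keeping these entries in a write-once store (rather than overwriting them on each retrain) is what turns documentation into an auditable history.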


  2. High-Quality Data for AI Model Integrity


     The Act emphasizes that AI models must be trained on datasets that are accurate, representative, and free from bias. This means businesses must establish data validation procedures, regularly audit datasets, and ensure compliance with GDPR principles regarding lawful data collection and processing. Simply put, maintaining quality data sources is critical for mitigating biases and ensuring fairness in AI decision-making. Practices like managing records based on a defensible and updated retention schedule help to ensure that only relevant, up-to-date, and high-quality data is retained for AI training, reducing the risk of outdated or biased data influencing model outcomes. Proper data retention practices also provide an auditable trail, reinforcing compliance with regulatory requirements and demonstrating accountability in AI decision-making.
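A retention schedule like the one described above can be enforced programmatically before data reaches a training pipeline. The sketch below filters candidate records against a hypothetical schedule; the categories and retention periods are illustrative assumptions that a real schedule would replace with legally grounded values.

```python
from datetime import date, timedelta

# Hypothetical retention schedule: maximum record age, in days, per category.
RETENTION_SCHEDULE = {
    "customer_transactions": 365 * 7,
    "marketing_leads": 365 * 2,
    "web_logs": 90,
}

def is_retainable(category: str, created: date, today: date) -> bool:
    """Return True if a record is still within its retention period."""
    max_age = RETENTION_SCHEDULE.get(category)
    if max_age is None:
        return False  # unknown category: exclude from training by default
    return (today - created) <= timedelta(days=max_age)

today = date(2025, 1, 1)
records = [
    ("customer_transactions", date(2020, 6, 1)),  # within 7 years: retained
    ("web_logs", date(2024, 1, 1)),               # older than 90 days: excluded
    ("web_logs", date(2024, 12, 15)),             # within 90 days: retained
]
training_pool = [r for r in records if is_retainable(r[0], r[1], today)]
print(training_pool)
```

Note the defensive default: records in an unclassified category are excluded rather than included, which keeps unmanaged data out of model training.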


  3. Mandatory Risk Assessments for High-Risk AI Applications


     The EU AI Act mandates ongoing risk assessments for AI applications deemed high-risk, such as those used in healthcare, finance, and biometric identification. US businesses must adapt to these requirements by conducting regular audits and impact assessments, ensuring their AI models align with European standards. Research findings suggest that inadequate risk assessments contribute to regulatory violations and reputational harm, making systematic risk evaluation a key aspect of compliance. By implementing information governance best practices like maintaining structured policies for data lifecycle management, organizations can proactively identify, address, and mitigate potential risks before they escalate into compliance violations.
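One lightweight way to operationalize ongoing risk assessment is to track each AI system's risk tier and flag overdue reviews. The sketch below does this with an illustrative domain-to-tier mapping; the real Act's high-risk categories (Annex III) are far more detailed, and the review intervals here are assumed internal policy choices, not values mandated by the Act.

```python
# Illustrative mapping of use-case domains to risk tiers (an assumption,
# not the Act's actual classification scheme).
HIGH_RISK_DOMAINS = {
    "healthcare", "credit_scoring", "biometric_identification",
    "employment_screening",
}

def risk_tier(domain: str) -> str:
    return "high" if domain in HIGH_RISK_DOMAINS else "minimal"

def assessment_due(last_assessed_days_ago: int, tier: str) -> bool:
    """Flag systems whose periodic risk assessment is overdue.
    Review intervals are illustrative policy choices."""
    interval = 90 if tier == "high" else 365
    return last_assessed_days_ago > interval

# (system name, domain, days since last assessment)
systems = [("triage-bot", "healthcare", 120), ("chat-faq", "support", 120)]
overdue = [name for name, domain, days in systems
           if assessment_due(days, risk_tier(domain))]
print(overdue)  # the high-risk system is past its 90-day window
```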


  4. Vendor and Third-Party Data Compliance


     Businesses that rely on external data sources for AI training must ensure that vendors comply with the EU’s stringent requirements. Contracts must include provisions for compliance with data provenance, auditability, and alignment with the AI Act’s transparency mandates. Another concern is that AI systems built on unverified third-party data may introduce security vulnerabilities and amplify biases, making compliance with vendor documentation critical. Implementing robust information governance controls such as rigorous data verification protocols, maintaining comprehensive audit trails, and conducting periodic assessments of third-party data sources helps ensure that AI systems rely only on secure, validated, and unbiased information.
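Data verification protocols for vendor deliveries can be as simple as checking each delivery against a digest recorded in a manifest supplied under contract. The sketch below shows the idea with SHA-256; the manifest itself is a hypothetical contractual artifact, and a production protocol would typically add signatures over the manifest as well.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_vendor_delivery(payload: bytes, manifest_digest: str) -> bool:
    """Check a vendor data delivery against the digest from its manifest.
    Detects corruption or tampering in transit, not bias in the content."""
    return sha256_of(payload) == manifest_digest

payload = b"col_a,col_b\n1,2\n"
manifest_digest = sha256_of(payload)  # in practice, shipped with the delivery

assert verify_vendor_delivery(payload, manifest_digest)
assert not verify_vendor_delivery(payload + b"tampered", manifest_digest)
print("provenance check passed")
```

As the comment notes, integrity checks establish provenance only; the bias and quality audits discussed above remain a separate, substantive review step.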


  5. AI Model Explainability and Transparency


     The Act requires that AI systems, particularly high-risk ones, provide explainable and interpretable decision-making processes. Businesses must develop mechanisms for making AI decisions auditable and understandable to regulators and users alike. Research highlights that black-box AI models contribute to accountability gaps, reinforcing the need for businesses to prioritize model explainability. Implementing information governance best practices, such as maintaining detailed documentation of AI model training data, decision-making processes, and algorithmic changes, helps to promote transparency and accountability – and, ultimately, boosts both the legal defensibility and reliability of these systems.
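A concrete building block for auditable decisions is a per-decision log entry that records the outcome together with its top contributing factors. The sketch below assumes a model that exposes per-feature contribution weights (as attribution methods commonly provide); the feature names, weights, and log format are illustrative assumptions.

```python
import json

def log_decision(applicant_id: str, outcome: str, feature_weights: dict) -> str:
    """Record an AI decision with its most influential factors so the outcome
    can later be explained to a regulator or an affected individual."""
    # Keep the three factors with the largest absolute contribution.
    top_factors = sorted(feature_weights.items(),
                         key=lambda kv: abs(kv[1]), reverse=True)[:3]
    entry = {
        "applicant_id": applicant_id,
        "outcome": outcome,
        "top_factors": top_factors,
    }
    return json.dumps(entry)

entry = log_decision(
    "A-1042", "declined",
    {"debt_to_income": -0.62, "tenure_months": 0.10,
     "late_payments": -0.45, "region": 0.02},
)
print(entry)
```

Paired with the versioned training records sketched earlier, such entries let an auditor trace any individual outcome back to a specific model version and its documented inputs.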


Implementation of Robust Governance Frameworks


 Companies must establish governance policies that ensure AI systems remain compliant over time. This requires organizations to define clear accountability structures, assign dedicated compliance officers to oversee adherence, and implement version control mechanisms for AI models to track changes over time. These types of proactive governance structures have the potential to significantly reduce regulatory risks and improve AI system trustworthiness. Regularly reviewing governance policies and updating them based on regulatory developments further strengthens compliance and helps to promote AI systems that remain aligned with legal and ethical standards.


Implications for US-Based AI Businesses


According to analysis by KPMG, the EU AI Act introduces significant extraterritorial obligations for US companies, requiring them to comply even if they do not have a direct operational presence in the EU. The Act applies to businesses that develop, market, or deploy AI systems that impact EU consumers, raising compliance stakes for global AI operations. Organizations that fail to meet the Act’s standards could face substantial penalties, with fines reaching up to 7% of their global annual turnover.


The extraterritorial nature of the EU AI Act means that even businesses outside the EU must reassess their AI governance frameworks. US companies must implement robust internal compliance mechanisms, including detailed risk assessments, bias audits, and data governance protocols to ensure alignment with EU regulations. Additionally, companies deploying high-risk AI applications, such as facial recognition or financial risk modeling, must establish clear documentation demonstrating their AI model’s fairness, transparency, and explainability.


KPMG’s analysis also highlights that the Act’s classification of AI risk levels places a significant burden on companies to monitor and manage their AI applications continuously. Businesses operating in multiple jurisdictions must also prepare for varying AI governance requirements across different regulatory landscapes. Aligning internal governance policies with EU AI Act standards will not only reduce regulatory exposure but also position US businesses as leaders in responsible AI development.


The EU AI Act’s extraterritorial scope means that US businesses developing, deploying, or supplying AI systems that impact EU citizens or businesses must ensure compliance, even if they have no physical presence in the EU. The Act applies to companies offering AI-driven services or products in the EU, processing data of EU residents, or indirectly influencing EU markets through algorithmic decision-making. This broad reach mirrors GDPR’s extraterritoriality, reinforcing the need for US companies to align their AI governance frameworks with European regulatory expectations.


Failure to comply with the EU AI Act can lead to substantial fines, market access restrictions, and reputational damage. Businesses that integrate AI models into supply chain optimization, financial risk analysis, healthcare diagnostics, or biometric recognition must implement clear compliance mechanisms to demonstrate adherence to the Act’s transparency, accountability, and risk management requirements. Furthermore, since the Act classifies AI systems based on risk levels, US companies must evaluate whether their AI-driven operations fall under high-risk categories and proactively conduct the necessary impact assessments.


To mitigate compliance risks, US businesses should incorporate information governance best practices that ensure AI model transparency, traceability, and bias mitigation. This includes maintaining structured documentation of AI development, ensuring robust data retention policies, and embedding accountability measures within AI system design. Additionally, collaboration with EU-based partners or third-party vendors should include contractual safeguards to ensure compliance with data provenance, algorithmic fairness, and governance reporting requirements.


The EU AI Act is also expected to shape global AI regulatory frameworks, influencing emerging AI laws in other jurisdictions, including the US. Businesses that align early with the Act’s requirements can establish themselves as leaders in responsible AI deployment, gaining a competitive edge in international markets.


Here are some practical steps that businesses (not just in the US) can and should take:


  • Develop extensive documentation of AI training data and model behavior.

  • Enhance transparency and explainability in AI-driven decisions.

  • Conduct ongoing risk assessments and audits to maintain compliance.

  • Strengthen vendor due diligence to ensure third-party data aligns with regulatory requirements.


Why Businesses Should Care: Adapting to the EU AI Act for Global AI Governance


The EU AI Act sets a new precedent for AI governance, influencing regulatory trends worldwide. US businesses must proactively adapt their AI governance frameworks to remain compliant and competitive in global markets. By prioritizing transparency, risk assessment, and ethical AI practices, businesses can not only meet EU compliance standards but also strengthen the integrity and trustworthiness of their AI systems.


Ultimately, businesses must recognize that AI compliance is not just about avoiding fines—it is about fostering trustworthy AI models that meet global regulatory expectations.

 

See: KPMG. "How the EU AI Act Affects US-Based Companies." KPMG US, 2024, https://kpmg.com/us/en/articles/2024/how-eu-ai-act-affects-us-based-companies.htm
