Compliance by Design: Using Information Governance to Avert AI Catastrophes

In March 2025, Iranian state-owned Bank Sepah suffered a devastating cyberattack orchestrated by the hacker group Codebreakers. The attackers exploited weaknesses in the bank's AI-driven security and data management systems, gaining unauthorized access to 12 terabytes of sensitive customer data belonging to approximately 42 million individuals. The compromised records included banking details, personal identifiers, and, most explosively, information about high-ranking military officials.


When the bank refused to pay a $42 million ransom, the hackers released portions of the stolen data publicly, triggering international outrage, political tensions, and urgent calls for systemic reform. While the full financial penalties from regulators have not yet been disclosed, the reputational and operational costs have been severe. The breach demonstrated the speed and scale at which AI vulnerabilities can magnify damage, underscoring that poorly governed AI is not simply a technological risk but a structural one that can cripple an organization’s compliance posture.


This incident reflects a broader pattern of increasing regulatory pressure on AI failures in the financial services sector. In the European Union, enforcement under the AI Act has rapidly gained momentum, with fines for AI-related data breaches now reaching into the hundreds of millions of euros for major firms. Combined with GDPR enforcement, which has already produced cumulative fines exceeding €4 billion since 2018, this sends an unambiguous message from regulators: AI systems must be transparent, explainable, and subject to rigorous human oversight.


Globally, regulatory agencies are emphasizing that financial institutions will be held to higher standards, not only for the accuracy and security of AI outputs but also for the governance processes behind them. Research indicates that a human element is still involved in approximately 82 percent of data breaches, a figure that becomes even more concerning when AI systems operate with minimal human supervision, magnifying the impact of a single lapse in judgment or oversight.


Information Governance (IG) best practices offer a direct and practical framework for creating compliance by design—embedding controls into the very fabric of AI system development, deployment, and monitoring. At its core, IG is about managing information throughout its lifecycle with the same discipline applied to financial auditing or operational risk.


This begins with ensuring that the data feeding AI models is classified, controlled, and retained according to both regulatory requirements and business needs. Inadequate classification, as seen in the Bank Sepah case, allows sensitive data to be unnecessarily retained, poorly secured, or inappropriately accessed, all of which multiply the risks of AI exploitation. Aligning retention schedules with data minimization principles—central to both GDPR and the AI Act—ensures that AI models are not trained on outdated, irrelevant, or unlawfully stored datasets.
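As a minimal sketch of what such a gate could look like in practice, the check below filters records by classification label and retention window before they ever reach a training pipeline. The field names and the retention schedule itself are hypothetical, chosen for illustration rather than drawn from any specific regulation or system:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention schedule, in days, per classification label.
RETENTION_DAYS = {
    "public": 3650,
    "internal": 1825,
    "confidential": 730,
}

def eligible_for_training(record: dict, now: datetime | None = None) -> bool:
    """Admit a record to the training pipeline only if it is classified,
    inside its retention window, and not flagged against model training."""
    now = now or datetime.now(timezone.utc)
    label = record.get("classification")
    if label not in RETENTION_DAYS:
        return False  # unclassified data never reaches a model
    if now - record["created_at"] > timedelta(days=RETENTION_DAYS[label]):
        return False  # expired under the retention schedule: minimization applies
    return not record.get("training_prohibited", False)

candidate_records = [
    {"classification": "internal",
     "created_at": datetime(2024, 1, 5, tzinfo=timezone.utc)},
    {"classification": "confidential",
     "created_at": datetime(2019, 6, 1, tzinfo=timezone.utc)},  # expired
    {"created_at": datetime(2024, 3, 1, tzinfo=timezone.utc)},  # unclassified
]
training_set = [r for r in candidate_records if eligible_for_training(r)]
print(len(training_set))  # 1: only the current, classified record survives
```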


Equally important is governing the AI model lifecycle itself. Just as IG manages documents from creation to disposal, AI models require version control, change management, and planned retirement of outdated or insecure systems. Without these processes, AI can drift from its intended purpose, making outputs unpredictable and compliance obligations harder to fulfill. For example, a model trained on unverified datasets can develop decision-making biases or vulnerabilities that go unnoticed until exploited, leaving institutions exposed to enforcement actions and litigation.
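One lightweight way to make that lifecycle concrete is a registry in which every model version carries its provenance and every status change is logged, so retirement becomes an explicit, auditable state change rather than a quiet deletion. The sketch below is illustrative only, with assumed names such as ModelVersion and a hypothetical dataset reference:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Status(Enum):
    STAGED = "staged"          # passed validation, awaiting approval
    PRODUCTION = "production"
    RETIRED = "retired"        # kept for audit, never served again

@dataclass
class ModelVersion:
    name: str
    version: str
    training_data_ref: str     # pointer to the governed dataset snapshot
    status: Status = Status.STAGED
    history: list[str] = field(default_factory=list)

    def transition(self, new_status: Status, actor: str) -> None:
        """Every status change is recorded with actor and timestamp."""
        stamp = datetime.now(timezone.utc).isoformat()
        self.history.append(
            f"{stamp} {actor}: {self.status.value} -> {new_status.value}"
        )
        self.status = new_status

# Usage: promoting and later retiring a version leaves a change-management trail.
m = ModelVersion("credit-risk", "2.3.1", training_data_ref="s3://datasets/cr-2025-01")
m.transition(Status.PRODUCTION, actor="model-risk-committee")
m.transition(Status.RETIRED, actor="model-risk-committee")
print(m.history)
```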


Human oversight remains a cornerstone of both effective IG and responsible AI deployment. The principle of “human in the loop,” enshrined in the EU AI Act and supported by the NIST AI Risk Management Framework, demands that individuals with appropriate expertise evaluate high-impact AI outputs before they are acted upon. This is not a mere formality—studies have shown that organizations with robust oversight mechanisms reduce the likelihood of compliance-related incidents by as much as 60 percent. In the Bank Sepah breach, the absence of adequate human monitoring allowed attackers to operate undetected within the system for weeks, exacerbating the scope of the compromise.
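A minimal illustration of such a gate, with hypothetical action names and an assumed confidence threshold, might route any high-impact or low-confidence output to a qualified reviewer before it can take effect:

```python
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.90  # hypothetical threshold; set per model risk tier
HIGH_IMPACT_ACTIONS = {"deny_credit", "freeze_account", "close_account"}

@dataclass
class Decision:
    action: str
    confidence: float
    rationale: str

def route(decision: Decision) -> str:
    """Auto-execute only low-impact, high-confidence outputs; everything
    else is held for a qualified human reviewer before it takes effect."""
    if decision.action in HIGH_IMPACT_ACTIONS or decision.confidence < CONFIDENCE_FLOOR:
        return "human_review"   # human in the loop: output held, not acted on
    return "auto_execute"

print(route(Decision("approve_credit", 0.97, "score above cutoff")))  # auto_execute
print(route(Decision("freeze_account", 0.99, "fraud signal")))        # human_review
```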


Proactive risk assessment and auditability are equally critical. Regular AI impact assessments—similar to data protection impact assessments under GDPR—allow organizations to identify systemic weaknesses before they are exploited. Audit trails that document the reasoning behind AI decisions are no longer optional; they are essential evidence in regulatory investigations and can mean the difference between a manageable enforcement process and crippling sanctions. The Bank Sepah case was complicated by the absence of reliable logs, which hindered both internal remediation and external defense, further eroding trust among customers and regulators.
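The sketch below, again with assumed field names, illustrates one form such an audit trail could take: an append-only log in which each entry chains the hash of the previous one, so that tampering or gaps become detectable during an investigation:

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only decision log; each entry chains the hash of the previous
    entry, so edits or missing records surface during an investigation."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, model_version: str, inputs: dict,
               output: str, rationale: str) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
            "rationale": rationale,
            "prev": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)

log = AuditLog()
log.record("credit-risk 2.3.1", {"applicant_id": "A-1001"},
           "deny", "debt ratio above policy limit")
log.record("credit-risk 2.3.1", {"applicant_id": "A-1002"},
           "approve", "all criteria met")
print(log.entries[1]["prev"] == log.entries[0]["hash"])  # True: entries chain
```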


Building a culture of compliance by design also requires comprehensive training and awareness across all levels of the organization. It is not enough for AI engineers to understand regulatory obligations; frontline staff, compliance officers, and executives all need to recognize how their decisions and behaviors intersect with AI governance. This is particularly important given that human error remains the most common root cause of breaches. Scenario-based exercises, where teams walk through AI-related incident responses, have been shown to significantly improve both detection speed and mitigation effectiveness, reducing breach impact by up to 30 percent.


The advantages of embedding IG principles into AI operations go beyond risk reduction. Institutions that prioritize compliance by design often see measurable gains in customer trust, which translates directly into stronger market positioning. They are also better equipped to respond to regulatory inquiries, reducing the time and resources needed for investigations. Moreover, by standardizing governance processes, these organizations increase their operational resilience, enabling them to adapt quickly to evolving regulatory frameworks and emerging technological risks.


The Bank Sepah breach should be a clear warning to financial institutions worldwide: AI without governance is a liability waiting to be exploited. The pathway to prevention lies in treating IG not as a compliance checkbox but as a design requirement, integral to every stage of the AI lifecycle.


By applying rigorous data discipline, lifecycle oversight, human accountability, proactive risk assessment, and organizational training, institutions can transform AI from a potential threat into a controlled, transparent, and trustworthy asset. In a sector where the cost of failure is measured not just in fines but in the erosion of public trust, compliance by design is not simply a regulatory preference; it is a business imperative.
