Stairway to Sanctions: How a $2.5M AI Communication Breakdown in Massachusetts Exposed the Cost of Weak Governance—And How IG Best Practices Can Prevent the Fall
- Max Rapaport
- Aug 7

In July 2025, the Massachusetts Attorney General reached a $2.5 million settlement with a student loan company that had deployed an AI underwriting system without adequately testing it for bias or ensuring transparency in its decision-making process. The company's models made adverse lending decisions based on group characteristics, such as immigration status and school default rates, and issued vague adverse action notices selected from drop-down menus, in violation of the Equal Credit Opportunity Act (ECOA). The underlying issue wasn't just flawed AI logic; it was the absence of documentation, oversight, and lifecycle controls. In short, it was a governance breakdown.
This case illustrates why artificial intelligence cannot function in isolation. Without controls for how models are built, used, monitored, and explained, AI becomes a liability. ISO/IEC 42001, released in 2023, addresses this directly. As the first international AI Management System standard, it offers a clear framework for embedding transparency, accountability, and continuous oversight into every phase of the AI lifecycle. But implementing ISO 42001 in a meaningful way requires more than a checklist. It demands mature Information Governance (IG) systems that operationalize transparency—day to day, document by document, decision by decision.
One of the most critical IG tools for supporting AI governance is version control. ISO 42001 calls for full lifecycle documentation—including the version history of models, training datasets, configuration changes, and policy updates. In practice, that means organizations must not only retain snapshots of their AI models over time but also track when and how changes were made, who approved them, and why. Without version control, it becomes impossible to demonstrate that a specific model version was appropriately tested, approved, or updated in response to a known risk. In the Massachusetts case, the company’s inability to identify which model version had made a harmful decision likely contributed to the severity of the regulatory response.
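What does that look like in practice? Here is a minimal sketch, assuming a simple in-house registry; the `ModelVersion` record and `version_in_effect` helper are illustrative names, not part of any particular MLOps product. The point is that answering "which model version made this decision?" should be a simple lookup, not a forensic exercise.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)  # immutable: a deployed version's record is never silently rewritten
class ModelVersion:
    """One registry entry per deployed model version."""
    version_id: str          # e.g. "underwriting-v3.2"
    training_data_hash: str  # fingerprint of the exact training dataset used
    approved_by: str         # who signed off on this version
    rationale: str           # why the change was made (e.g., response to a known risk)
    deployed_at: datetime    # when this version went live

def version_in_effect(registry: list[ModelVersion], decision_time: datetime) -> ModelVersion:
    """Return the model version that was live when a given decision was made."""
    live = [v for v in registry if v.deployed_at <= decision_time]
    if not live:
        raise LookupError("no model version was deployed at that time")
    return max(live, key=lambda v: v.deployed_at)
```

Freezing the record mirrors the audit expectation: once a version is deployed, its history is appended to, never rewritten.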
Effective and compliant IG ensures that model versioning is integrated into everyday workflows. Each model update should be tagged, documented, and reviewed under a formalized change control process—just like any other high-risk business system. This includes versioned documentation of training data, bias mitigation efforts, risk assessments, and stakeholder sign-offs. These aren't theoretical requirements; they are compliance essentials. When a decision is challenged—whether by a regulator, a customer, or a court—an organization must be able to point to a specific model version and produce the evidence of how and why it was used.
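A formalized change control process can be enforced with something as simple as a promotion gate that blocks release until every required artifact is attached to the version. The artifact names below are hypothetical placeholders for the documentation ISO 42001 expects:

```python
# Hypothetical compliance artifacts a model version must carry before release.
REQUIRED_ARTIFACTS = {
    "training_data_manifest",  # versioned record of training inputs
    "bias_test_report",        # fairness / disparate-impact results
    "risk_assessment",         # documented risk review for this change
    "stakeholder_signoff",     # formal approval record
}

def ready_for_production(attached: set[str]) -> tuple[bool, set[str]]:
    """Gate a release: promotion is blocked until all artifacts are present."""
    missing = REQUIRED_ARTIFACTS - attached
    return (not missing, missing)

ok, missing = ready_for_production({"bias_test_report", "risk_assessment"})
if not ok:
    print(f"Release blocked; missing: {sorted(missing)}")  # an evidence gap, not a formality
```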
Information Governance also addresses other ISO 42001 requirements, such as metadata management, access controls, and role clarity. IG professionals establish systems that tag data sources, manage permissions, document decision logic, and retain logs for defined periods. Retention schedules ensure that audit records, risk assessments, and test results are preserved long enough to withstand investigation, without holding onto unnecessary or sensitive data past its compliance window. Well-defined IG roles ensure accountability over model deployment, monitoring, fairness testing, and decommissioning.
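In code terms, a retention schedule is just a policy table consulted before any disposal decision. The record types and periods below are illustrative assumptions only; real periods come from the applicable regulations (ECOA's record retention requirements, for instance, are set by Regulation B) and from counsel:

```python
from datetime import date, timedelta

# Illustrative retention schedule: record type -> how long it must be kept.
RETENTION_SCHEDULE = {
    "adverse_action_notice": timedelta(days=25 * 30),  # assumed ~25 months
    "risk_assessment":       timedelta(days=7 * 365),  # assumed 7 years
    "fairness_test_result":  timedelta(days=7 * 365),
    "raw_applicant_data":    timedelta(days=2 * 365),  # dispose once the compliance window closes
}

def may_dispose(record_type: str, created: date, today: date) -> bool:
    """True once a record is past its retention period; unknown types are kept."""
    period = RETENTION_SCHEDULE.get(record_type)
    if period is None:
        return False  # default to retain until the record is classified
    return today > created + period
```

Defaulting to "retain" for unclassified records is a deliberate design choice: disposing of a record you cannot identify is exactly the kind of gap regulators probe.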
The business case is clear: organizations with strong IG foundations are better equipped to defend their AI decisions and avoid costly mistakes. According to the International Association of Privacy Professionals (IAPP), over 70% of companies cite documentation gaps as a major barrier to responsible AI deployment. Another 64% struggle with tracking the data inputs behind their models—something IG is designed to handle. We have long argued that accurate metadata, verified data inputs, and lifecycle versioning are not overhead—they’re strategic enablers. They support better insights, lower risk, and more resilient AI systems.
Ultimately, ISO 42001 provides the governance blueprint—but Information Governance supplies the structure, policies, and tools needed to build it. The Massachusetts case shows what can happen when AI systems are deployed without a record of what’s been built, why it was built that way, and how it’s changed over time. Transparency isn’t something that can be added after the fact—it must be built into the design. And for that, there’s no substitute for robust, well-structured Information Governance.