
How Information Governance Best Practices Can Help Organizations Comply with President Biden’s Executive Order on AI


About 6 months ago, President Biden issued an Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI).


The EO aims to promote safe and trustworthy AI development while strengthening security and privacy. It requires developers of the most powerful AI systems to share their safety test results with the government, and it establishes rigorous standards, red-team testing, and an AI Safety and Security Board to enforce safety measures. The EO also calls for data privacy legislation, supports workers facing AI-related job disruptions, and promotes a fair and competitive AI ecosystem.


The EO is likely to impact organizations across sectors, from mature AI implementers to first-time adopters. Moreover, its broad definition of AI (and, therefore, its broad applicability) covers numerous types of AI systems, so every organization should carefully assess the EO's impact on its operations, including any reliance on third-party AI capabilities.


Here are a few examples of how information governance (IG) best practices can help organizations align their AI practices with the EO's framework.


WARNING – it will be a moving 🎯:


Comprehensive Data Governance Frameworks: Implementing robust data governance frameworks that ensure responsible and ethical AI use, address privacy concerns, and prevent unauthorized data access or misuse, starting with a needs assessment and gap analysis that identifies how existing data management practices must be tailored to support responsible AI.


Diverse IG Working Groups: Developing risk management strategies specifically tailored to AI systems, including identifying potential bias, discrimination, and algorithmic errors, and drawing on the skills and diversity of a cross-functional AI working group that includes HR, Legal, Risk Management, and ESG functions.


IG by Design: Establishing clear, proactive, and carefully planned guidelines for AI model development and deployment that focus on managing data at the source and maintaining rigorous version control, and integrating explainability and interpretability features into AI models so stakeholders can understand how decisions are made and detect potential bias or inaccuracies.
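To make the explainability point concrete, here is a minimal sketch, in Python, of how a team might report which features drive a model's predictions. It assumes scikit-learn and its built-in Iris dataset, both chosen purely for illustration rather than anything mandated by the EO.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative example: train a simple classifier on a public dataset.
X, y = load_iris(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance estimates each feature's contribution by measuring
# how much held-out performance drops when that feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.3f}")

A short report like this, generated and archived with every model release, gives non-technical stakeholders a simple record of what the model relies on when it makes decisions.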


Auditing: Conducting regular audits and assessments of AI systems to monitor performance, regulatory compliance, and adherence to ethical standards, supporting continuous improvement and risk mitigation.
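As one hypothetical example of a recurring audit check, the sketch below compares a model's accuracy across subgroups and flags gaps that exceed a tolerance. The metric, subgroup labels, and 10% threshold are illustrative assumptions; a real audit program would track many more dimensions, such as privacy, security, drift, and documentation.

from typing import Dict, List

def subgroup_accuracy(y_true: List[int], y_pred: List[int],
                      groups: List[str]) -> Dict[str, float]:
    # Accuracy computed separately for each subgroup label.
    totals: Dict[str, int] = {}
    correct: Dict[str, int] = {}
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + int(truth == pred)
    return {g: correct[g] / totals[g] for g in totals}

def audit(y_true: List[int], y_pred: List[int], groups: List[str],
          max_gap: float = 0.10) -> None:
    # Flag the model for review if subgroup accuracies diverge too much.
    scores = subgroup_accuracy(y_true, y_pred, groups)
    gap = max(scores.values()) - min(scores.values())
    status = "PASS" if gap <= max_gap else "REVIEW"
    print(f"{status}: accuracy by subgroup {scores}, gap {gap:.2f}")

# Toy data for illustration only: subgroup B performs far worse than subgroup A.
audit(y_true=[1, 0, 1, 1, 0, 1],
      y_pred=[1, 0, 1, 0, 0, 0],
      groups=["A", "A", "A", "B", "B", "B"])

Running a check like this on a schedule, and logging the results, creates the kind of audit trail that internal stakeholders and regulators are likely to ask for.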


Benchmarking and Collaboration: Collaborating with industry experts, regulators, and stakeholders to stay informed about emerging AI governance practices, standards, and regulations; fostering a culture of responsible AI innovation and adoption; and regularly benchmarking organizational practices against competitors.


Any other ideas?



