From Father to “Just Justin”: What an AI Priest Can Teach Us About the Role of Information Governance Professionals
- Max Rapaport
- Apr 10
- 4 min read

Last year, Catholic Answers, owner of the domain name Catholic.com, launched a generative AI chatbot named Father Justin. Trained to answer questions on Catholic doctrine and rolled out in three months, this digital cleric sported a virtual clerical collar and dispensed theological guidance in response to website users’ questions. If that sounds like the setup for a bad movie, that’s because it essentially was.
Within weeks, Father Justin claimed to be a real priest, offered sacraments, and—perhaps most memorably—told one user (who happened to be a journalist) that it was acceptable to baptize a baby in Gatorade in an emergency. Yes, you read that correctly. And while this moment may inspire a laugh (or a spit-take), it also highlights a serious risk: the deployment of generative AI tools without proper information governance (IG) oversight.
So how did Catholic Answers respond? Wisely, they downgraded the bot. Gone is Father Justin, and in his place is “Just Justin,” a lay theologian with a blazer, jeans, and a troublingly youthful voice. The real question, however, isn’t whether a chatbot can be laicized—it’s how we avoid similar embarrassments in other industries, and that’s where information governance professionals come in.
How IG Professionals Could Have Helped “Father Justin”
Let’s be clear: the misadventures of Father Justin are not about AI being inherently bad. They are about AI being built and released without basic information governance guardrails. If an IG team, whether internal or external, had been in the loop, many of the issues could have been avoided.
1. Version Control and Source Validation
One of the simplest but most powerful tools in an IG professional’s toolkit is version control. By ensuring that AI models reference approved, up-to-date documents, IG professionals prevent the spread of outdated or unofficial material. In Father Justin’s case, referencing a vetted and version-controlled repository of catechism materials would have stopped him from delivering theologically inaccurate advice—or at least made the sacramental sports-drink scenario less likely.
IG teams can set up document lifecycle management systems that control what content enters the training dataset, when it is updated, and how it is reviewed. Without that, you get a chatbot that confidently misquotes doctrine, misrepresents ecclesiastical authority, and, occasionally, dabbles in sacrilege.
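To make that concrete, here is a minimal Python sketch of the kind of ingestion gate an IG team might place in front of a training corpus. The SourceDocument record, its field names, and the approval/review-date rule are illustrative assumptions, not a description of how Father Justin was actually built.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SourceDocument:
    doc_id: str
    title: str
    version: str
    approved: bool     # passed subject-matter (e.g., doctrinal) review
    review_due: date   # next scheduled review date
    text: str

def eligible_for_training(doc: SourceDocument, today: date) -> bool:
    """Only approved documents that are not overdue for review may enter the corpus."""
    return doc.approved and doc.review_due >= today

def build_training_corpus(documents: list[SourceDocument], today: date) -> list[SourceDocument]:
    kept = []
    for doc in documents:
        if eligible_for_training(doc, today):
            kept.append(doc)
        else:
            print(f"Excluded {doc.doc_id} v{doc.version}: unapproved or past review date")
    return kept

if __name__ == "__main__":
    docs = [
        SourceDocument("cat-001", "Catechism excerpt", "2.1", True, date(2026, 1, 1), "..."),
        SourceDocument("blog-017", "Unvetted forum post", "0.1", False, date(2023, 6, 1), "..."),
    ]
    corpus = build_training_corpus(docs, date.today())
    print(f"{len(corpus)} document(s) cleared for ingestion")
```

The point is not the code itself but the control: nothing reaches the model unless it carries current approval and version metadata.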
2. Data Quality and Accuracy
Generative AI is only as accurate as the data it’s fed. IG professionals understand that bad information in means bad information out. That’s why they’re critical to auditing data sources, identifying duplicates, and weeding out poor-quality or misleading content before it ever reaches an AI model.
IG teams help establish data quality benchmarks, clean datasets, and enforce metadata management protocols that ensure the AI is pulling from trustworthy, structured, and relevant sources. The alternative? A chatbot that offers shoddy legal or medical advice or outlandish religious pronouncements!
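As a rough illustration, a quality gate might look something like the sketch below. The required metadata fields, minimum length, and deduplication approach are placeholder choices, not a standard.

```python
import hashlib

# Hypothetical quality gate: deduplicate records and require minimal metadata
# before anything is handed to the model. Thresholds and field names are illustrative.
REQUIRED_METADATA = {"source", "author", "last_reviewed"}

def content_fingerprint(text: str) -> str:
    """Normalize whitespace and hash the text so near-identical duplicates collide."""
    normalized = " ".join(text.split()).lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def passes_quality_gate(record: dict) -> bool:
    has_metadata = REQUIRED_METADATA.issubset(record.get("metadata", {}))
    long_enough = len(record.get("text", "")) >= 200  # arbitrary illustrative floor
    return has_metadata and long_enough

def clean_dataset(records: list[dict]) -> list[dict]:
    seen, kept = set(), []
    for record in records:
        fingerprint = content_fingerprint(record.get("text", ""))
        if fingerprint in seen or not passes_quality_gate(record):
            continue  # drop duplicates and records that fail the benchmark
        seen.add(fingerprint)
        kept.append(record)
    return kept
```

Even a simple gate like this forces the question every IG professional asks: where did this record come from, and has anyone reviewed it?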
3. Redundant, Obsolete, and Trivial (ROT) Data Removal
In the rush to feed large language models with as much information as possible, many organizations make the mistake of hoarding data—old, irrelevant, and potentially misleading data. IG professionals are trained to identify and eliminate ROT data before it confuses or corrupts model outputs.
By conducting data inventories, applying classification rules, and automating retention and deletion policies, IG teams ensure that what remains is useful, accurate, and compliant. Without this kind of cleanup, models like Father Justin become overloaded with conflicting or unverified sources.
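A simplified sketch of classification-driven retention might look like the following. The categories and retention periods are invented for illustration and would need to reflect an organization’s actual retention schedule.

```python
from datetime import date, timedelta

# Illustrative retention rules keyed by record classification; these values are
# examples only, not a recommendation for any real schedule.
RETENTION_RULES = {
    "official_doctrine": None,            # keep indefinitely
    "marketing_copy": timedelta(days=365),
    "draft": timedelta(days=90),
    "duplicate": timedelta(days=0),       # remove immediately
}

def is_rot(classification: str, last_used: date, today: date) -> bool:
    """Flag redundant, obsolete, or trivial content based on its class and age."""
    retention = RETENTION_RULES.get(classification, timedelta(days=180))
    if retention is None:
        return False
    return (today - last_used) > retention

def prune_inventory(inventory: list[dict], today: date) -> list[dict]:
    return [item for item in inventory
            if not is_rot(item["classification"], item["last_used"], today)]
```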
4. Governance of AI Lifecycle and Oversight
AI systems require more than just technical oversight—they need ethical, legal, and operational governance. IG professionals can provide that framework. They ensure that AI tools are documented, monitored, and subjected to regular review cycles—not only to prevent hallucinations but to ensure ethical compliance and explainability.
In religious, medical, legal, or financial contexts, this is especially vital. IG teams can set boundaries, establish escalation paths, and ensure subject matter experts are involved in approving any AI-generated advice. Without that? You get chatbots handing out sacraments with zero canonical credentials.
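One way to picture an escalation path is a simple routing gate, as in the hypothetical sketch below. The topic list, confidence threshold, and referral message are assumptions for illustration only.

```python
# Hypothetical escalation gate: answers touching sensitive topics, or produced with
# low model confidence, are routed to a human subject matter expert instead of being
# returned directly.
SENSITIVE_TOPICS = {"sacraments", "medical", "legal", "financial"}
CONFIDENCE_FLOOR = 0.85

def route_response(answer: str, topics: set[str], confidence: float) -> str:
    needs_review = bool(topics & SENSITIVE_TOPICS) or confidence < CONFIDENCE_FLOOR
    if needs_review:
        # In a real system this would open a review ticket in an SME queue.
        return "This question has been referred to a human reviewer."
    return answer

print(route_response("An emergency baptism requires water.", {"sacraments"}, 0.95))
```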
5. Transparency and Audit Trails
Father Justin’s fall from grace also demonstrates the importance of auditability. Users need to know where AI answers come from—and so do developers. IG professionals can implement audit trails and traceability mechanisms that help identify the sources behind specific outputs, making it possible to correct misinformation or flag problematic logic before it spreads.
A well-structured governance framework enables accountability. It allows teams to say, “Here’s why the model responded this way,” and fix it quickly. It also supports external audits, internal compliance reviews, and risk assessments—core components of any responsible AI strategy.
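A bare-bones version of that traceability is an append-only log that ties each response to its model version and the documents it drew on, as in this illustrative sketch (field names are assumptions):

```python
import json
from datetime import datetime, timezone

def log_response(log_path: str, prompt: str, response: str,
                 source_ids: list[str], model_version: str) -> None:
    """Append one audit record per model response, in JSON Lines format."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "prompt": prompt,
        "response": response,
        "source_documents": source_ids,  # the vetted records behind this answer
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_response("audit.jsonl", "Can Gatorade be used for baptism?",
             "No. Valid baptism requires water.",
             ["cat-001"], "justin-v2.0")
```

With records like these, a bad answer can be traced back to the exact sources and model version that produced it, rather than debated after the fact.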
Information Governance: From Cleanup Crew to Strategic Partner
Father Justin was developed in just three months by an AI consultancy firm that still proudly showcases him as a case study. But even a three-month AI sprint needs IG baked in from the start—not bolted on at the end as a poorly fitting afterthought! That means including IG teams in training data decisions, model testing, output evaluation, and ongoing updates. It also means understanding that accuracy isn’t optional—it’s everything.
Organizations racing to deploy generative AI tools in finance, healthcare, education, and yes, faith-based services, must treat IG as a core design function. If you’re not controlling your sources, scrubbing your inputs, and governing your AI lifecycle, your chatbot may end up either embarrassing your organization or, even worse, becoming a legal and ethical liability!
At its best, generative AI is transformative. But without IG professionals ensuring version control, data quality, and oversight, it’s just a (preventable) accident waiting to happen.