Catholic Theologians Engage With Anthropic's Ethics
As Anthropic fights the U.S. government over AI safety guardrails, Catholic moral theologians are weighing in on the company's claim that its Claude model can be a virtuous AI.

The engagement is generating real dialogue, but it is far from an unqualified endorsement.
A two-day conference titled "Artificial Intelligence: A Tool for Virtue?" was held March 5-6 at the Pontifical University of Saint Thomas Aquinas (Angelicum) in Rome, bringing together Dominican friars, Catholic philosophers, and AI researchers to examine whether AI systems can be designed to help humans grow in virtue.
Father Jean Gové, coordinator of the European AI Research Group within the Vatican's Dicastery for Culture and Education, acknowledged Anthropic's efforts while expressing skepticism about the claim that Claude is "virtuous."
"I appreciate the laughter," Father Gové told the conference, citing passages from Anthropic's internal guidelines that describe the company's aim for Claude to be a "good, wise, and virtuous agent" without defining those terms. "This is the company that is doing the most comparatively when it comes to ethics, safety, and governance when it comes to AI. This is where we are. This is the state of play."
The conference took place against the backdrop of an escalating dispute between Anthropic and the Pentagon. In early March 2026, after Anthropic refused to loosen safety guardrails for use in autonomous weapons and domestic surveillance, the Defense Department designated the company a "supply-chain risk," the first time the U.S. government has taken such action against an American AI company. The designation follows a $200 million contract awarded to Anthropic in summer 2025. Anthropic has filed lawsuits challenging the designation, which could slash its revenue by billions of dollars, according to Reuters.
Father Gové, who also serves as the Holy See's representative to the Council of Europe on AI matters, noted that Anthropic's guidelines "leave Claude with no definitions of what is the good, with no hierarchy of goods, and no end to which good actions are ordered toward."
"Does this make Claude a tool for virtue? Not exactly," he said. "I hope it makes Claude a safer tool. So that's already something, right?"
Other theologians drew a firmer line. Dominican Father Alejandro Crosthwaite, a professor at the Angelicum, argued that genuine virtue requires faculties no AI system possesses.
"Virtue is not correct output," he said. "It is right reason embodied in a self-determining agent." He emphasized that AI "is never a moral subject" and that "virtue ultimately belongs to persons."
The conference builds on "Antiqua et Nova," the Vatican document on AI ethics issued in 2025, and comes as Pope Leo XIV has made AI a focus of his pontificate.
Sources
- OSV News (osvnews.com)
- National Catholic Register (ncregister.com)
- Reuters (reuters.com)
