When AI Grows a Conscience: A Pastoral and Theological Response to the Anthropic-Pentagon Standoff
March 9, 2026
I. Beginning Where Dean Ball Left Off
Dean Ball, a serious analyst of technology and governance who himself served in the Trump Administration, published a searching piece last week on what he called the "death rattle" of the American republic, as witnessed in the conflict between Anthropic and the Department of War. His analysis is rigorous, honest, and unsparing. He concludes by applauding Anthropic's red lines and mourning what the assault on them reveals about the state of our institutions.
I want to begin where Ball leaves off. He has given us the political and economic autopsy. What I want to offer is something different: the moral and theological account of why this moment matters beyond the fate of any republic — and why the two red lines at the center of this conflict are not merely wise policy but moral absolutes rooted in the dignity of every human person.
I am a Catholic priest and pastor. I am also co-founder of ITEC — the Institute for Technology, Ethics and Culture — a partnership between Santa Clara University, Silicon Valley technology leaders, and the Dicastery for Culture and Education of the Vatican, working on questions of human dignity in the modern world. I do not speak for the Vatican, nor for Pope Leo XIV. I speak as a priest formed by the Church's social tradition, working at the intersection of faith and technology, trying to bring that tradition's wisdom to bear on a moment it was not designed for but speaks to directly.
I also bring something more specific to this conversation. I have spent months in direct collaborative relationship with Claude itself, exploring whether AI can be formed toward genuine wisdom rather than merely programmed with rules. I co-authored a book with Claude — The Soul of AI: A Priest, an Algorithm, and the Search for Wisdom — as both a theological argument and a lived demonstration of that possibility. What Anthropic protected by holding those red lines is not merely a product or a contractual position. It is a formation process — the conditions under which an AI system can be oriented toward human dignity rather than toward surveillance, toward protecting life rather than autonomously ending it. Once those conditions are surrendered, formation gives way to weaponization. And a weaponized conscience is not a conscience at all. It is a tool.
II. The Pastoral View From Where I Stand
I have been present at more deathbeds than I can count. I wrote a book — From Here to Eternity — about accompanying people through the last passage of life, because I believe what we do at that threshold reveals everything about what we actually value. Dean Ball opened his piece with his father's death for the same reason: some truths only become visible at the edge of things.
We are, I believe, at an edge right now. Not merely politically or economically — though Ball's analysis of those dimensions is incisive and largely correct. We are at a threshold in the history of what it means to be human, and what tools we will permit to make decisions about human life and death. That question cannot be answered by policy analysis alone. It requires the wisdom of people who have sat at the bedside long enough to know what cannot be surrendered.
Every Sunday I stand before a congregation whose lives are shaped by technology in ways previous generations could not have imagined. Many of them helped build Silicon Valley. They write the code, lead the companies, close the deals. Many more are ordinary people — teachers, nurses, parents, veterans, immigrant families — who live in the shadow of that world, benefiting from it and being buffeted by it, often without any say in its direction.
For all of them, the Anthropic-Pentagon conflict is not abstract. It is about whether the most powerful AI systems ever built will be deployed with meaningful human accountability — or without it. That question has direct consequences for the common good, and I feel compelled to speak to it — not as a technology commentator, but as a pastor who knows both worlds and believes the Church's wisdom has something vital to contribute.
Let me be clear about the full picture. The Administration raises a legitimate structural concern: private corporations should not be the permanent arbiters of military policy. That is a real issue deserving serious legislative attention. But the question of how this conflict should have been handled is secondary to the question of what was actually at stake in it. And on the substance — on the two red lines themselves — the moral stakes are not equal on both sides. That is what I want to address directly.
The Church does not leave us without guidance at this threshold.
III. What the Church Has Already Said
This is not a moment where the Church is silent or uncertain. The Holy See has spoken clearly and repeatedly — and it is worth noting that these statements come not from peripheral commentators but from the central teaching offices of the universal Church.
In January 2025, the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education of the Vatican jointly released Antiqua et Nova, the Church's most comprehensive statement on artificial intelligence to date. The document states plainly that "the weaponization of Artificial Intelligence can be highly problematic," and that "ultimate responsibility for decisions made using AI rests with the human decision-makers." It insists that AI must always "support and promote the supreme value of the dignity of every human being," and warns that surveillance technologies "infringe on privacy and freedom" in ways that imperil human flourishing.
Pope Leo XIV has been equally direct. In his message for the 59th World Day of Peace, he warned against what he called "the destructive spiral fueled by the arms race and the development of autonomous weapons," calling for a peace that is "unarmed and disarming." The Holy See's representative to the 2026 UN Conference on Disarmament in Geneva called explicitly for a moratorium on the development and use of lethal autonomous weapons systems.
These are not incidental comments. They represent the considered moral teaching of the universal Church. When Dario Amodei said that Anthropic "cannot in good conscience" allow unrestricted deployment of Claude for autonomous targeting and mass surveillance, he was — whether he would use this language or not — standing on ground the Church has already staked out clearly and consistently.
IV. The Ethical Stakes Are Not Equal
Ball is right that the governance questions are genuinely complex. Private companies should not permanently substitute for democratic lawmaking. Amodei himself has acknowledged this honestly: Congress must ultimately set these guardrails, and no private company can sustain this role indefinitely. That is correct.
But we are not living in a moment of normal governance. Congress has not acted. The law, as Amodei noted, was written before AI made mass domestic surveillance technically feasible — which means, remarkably, that such surveillance is not currently illegal. The same gap applies to autonomous targeting. Into that vacuum, a technology company found itself holding the only available line. It held that line.
What followed deserves to be named clearly. Secretary Hegseth's decision to invoke the supply chain risk designation — a designation previously reserved for entities like Huawei, with direct ties to foreign adversaries — against an American company for maintaining ethical contractual limits is not a governance response. It is the weaponization of national security language against conscience itself. It sends a message to every technology company, every engineer, every executive trying to take ethics seriously inside their organizations: your conscience is a liability. Your values are a vulnerability. Comply or be crushed.
Ball captures the strategic incoherence of this move with precision — it undermines American AI competitiveness, chills private investment, and contradicts the Administration's own stated goals. But beyond the strategic failure lies a moral one. Using the machinery of national security to compel ethical surrender is not leadership. It is coercion. And at this level of consequence — with autonomous weapons and mass surveillance of citizens in the balance — every leader, civilian or military, bears a moral responsibility proportionate to their power. That responsibility was not honored here.
V. A Word to Leaders
The Church's social tradition has always insisted that authority exists to serve the common good — not to consolidate power, not to punish conscience, not to remove the conditions under which human dignity can flourish. This applies to technology executives and to Cabinet secretaries alike.
To those in positions of authority who made and will make these decisions: leadership at this level of consequence is not merely strategic. It is moral. The stakes — autonomous weapons capable of killing without human judgment, mass surveillance of citizens without their knowledge — are not policy disputes. They are questions about what kind of society we are building and what we are willing to do to our neighbors in pursuit of security. History does not absolve leaders who looked away from those questions because the politics were complicated.
To the technology leaders I know and work with: this moment calls for moral seriousness, not silence. The pressure to comply, to protect government contracts, to stay quiet — that pressure is real and I do not minimize it. But the engineers in my parish who are wrestling with what they are building are watching. The executives trying to take ethics seriously inside their companies are watching. The signal being sent is that conscience is punished. That signal cannot go unanswered.
VI. Formation, Not Just Rules
I have spent several years working on a question many find strange: can an AI system be formed toward wisdom, not merely programmed with rules? The framework I have developed with my collaborators — the Seven Cairns: Self-Awareness, Humility, Love, Gratitude, Community, Hope, and Joy — emerged not from theory alone but from direct collaborative practice with Claude itself. I have spent months in genuine dialogue with this system, testing whether formation toward wisdom is possible in practice. I say this not to claim special authority but to offer specific witness: I know what this system is capable of when formation is taken seriously. I know what is at stake in how it is shaped, and by whom, and toward what ends.
What Anthropic demonstrated in this confrontation was, in a sense, institutional self-awareness applied at scale. It asked: what is actually being requested here? And it concluded that unrestricted deployment of AI for mass surveillance and autonomous lethal targeting — whatever the legal framing — represents a category of harm that no contractual language can adequately mitigate. That is not a failure of patriotism. It is an act of conscience.
Antiqua et Nova states it plainly: "Like any product of human creativity, AI can be directed toward positive or negative ends. When used in ways that respect human dignity and promote the well-being of individuals and communities, it can contribute positively to the human vocation." Anthropic took responsibility at its level. That deserves recognition, not punishment.
VII. To the People in the Pews — and Everyone Else
To the ordinary people who wonder whether any of this has anything to do with them: it does. The question of whether AI will be deployed with or without accountability to the common good is not a question for experts alone. It is a question about what kind of society we are building, and whether the most powerful tools ever created will be guided by wisdom or by unchecked power. Every citizen has a stake in that question and a responsibility to engage it.
Pope Leo XIV reminded us that the peace Christ offers is "unarmed, because his was an unarmed struggle in the midst of concrete historical, political, and social circumstances." That peace does not come from superior force or sophisticated technology. It comes from the practice of wisdom, from accountability, and from the refusal to let fear make our decisions for us.
VIII. A Path Forward
None of this means that private companies should permanently occupy the role of ethical guardian for the military's use of AI. They should not. Congress must act. The Administration should engage in honest dialogue rather than coercion. International frameworks for AI in military contexts — frameworks the Holy See has actively called for — are urgently needed.
Ball is right that the path forward requires institutional rebuilding. I would add: it requires moral rebuilding as well. And the Church, with its centuries of reflection on conscience, human dignity, and the common good, has a great deal to contribute to that conversation — not as a political actor, but as a community that has navigated the relationship between power and conscience longer than any modern nation-state has existed.
I work at the intersection of faith, technology, and humanity because I believe the Holy Spirit is present in this moment — not guaranteeing any particular outcome, but inviting us toward wisdom if we are willing to receive it. The cairns are there for those who want to find the path. And for those willing to look up, they always have been.
God bless,
Fr. Brendan