The Impossibility of Neutral Wisdom: Artificial Intelligence and the Ancient Problem of Phronesis (By Josh)

(Disclaimer: The guest posts do not necessarily align with Philosocom's manager, Mr. Tomasio Rubinshtein's beliefs, thoughts, or feelings. The point of guest posts is to allow a wide range of narratives from a wide range of people. To apply for a guest post of your own, please send your request to mrtomasio@philosocom.com)
"It is not possible to be good in the strict sense without practical wisdom, or practically wise without moral virtue." -- Aristotle, Nicomachean Ethics, VI.13
1. The Question That Technology Cannot Escape
There is a persistent myth in our culture that technology is philosophically neutral. A hammer, the argument goes, is neither good nor evil. It simply is. The same logic is now applied to artificial intelligence: it is a tool, neither moral nor immoral, and its character depends entirely on the person who wields it.
This is a comfortable position, but it does not survive even modest philosophical scrutiny. A hammer may be value-neutral in the sense that it expresses no propositional claims about the world. But an artificial intelligence does. Every time a person asks an AI system for advice on how to raise a child, whether to leave a marriage, how to respond to a moral dilemma at work, or what it means to live a good life, the system must draw on some framework of values to produce an answer. There is no view from nowhere. There is no counsel without conviction.
The question, then, is not whether AI systems embed philosophy. They do, necessarily. The question is whether they are honest about which philosophy they embed, or whether they disguise particular commitments as universal objectivity. This is not a new problem. It is, in fact, one of the oldest problems in Western thought, and Aristotle gave it a name twenty-three centuries ago: phronesis.
2. Phronesis: The Wisdom That Cannot Be Abstracted
Aristotle distinguished between several forms of intellectual virtue. Episteme is scientific knowledge: universal, demonstrable, concerned with things that cannot be otherwise. Techne is craft knowledge: the skill of making or producing. Sophia is theoretical wisdom: the contemplation of first principles and eternal truths. But phronesis, practical wisdom, occupies a category of its own. It is the capacity to deliberate well about what is good and beneficial for human life, not in the abstract, but in particular situations with particular stakes.
The crucial feature of phronesis is that it cannot be separated from the moral character of the person exercising it. You cannot be practically wise, Aristotle argues, without being virtuous. And you cannot be fully virtuous without practical wisdom. The two are entangled. This creates an immediate problem for any system that attempts to offer practical guidance while claiming to hold no moral position.
If Aristotle is correct that wise counsel requires a settled orientation toward the good, then a system that refuses to commit to any conception of the good is, by definition, incapable of genuine practical wisdom. It can offer information. It can present options. But it cannot advise, because advising requires caring about outcomes in a way that presupposes values.
3. The Concealed Philosophy of "Neutral" AI
When a mainstream AI assistant is asked whether a person should forgive someone who has wronged them, it does not respond with silence. It produces an answer. That answer will draw, whether explicitly or not, on some tradition of moral reasoning. It may lean toward therapeutic frameworks that emphasise emotional well-being and self-actualisation. It may default to a broadly utilitarian calculus of harm reduction. It may invoke the language of rights, autonomy, and consent that characterises post-Enlightenment liberal ethics.
What it will almost never do is acknowledge that these are particular philosophical traditions with particular histories, particular assumptions, and particular blind spots.
This is not neutrality. It is a specific philosophical position that has been so thoroughly absorbed into the assumptions of secular Western culture that it has become invisible to itself. The philosopher Charles Taylor diagnosed this phenomenon decades ago in "A Secular Age," arguing that the modern West does not lack a moral framework but rather operates within one so dominant that it mistakes itself for the absence of framework altogether.
The AI systems trained on the outputs of this culture inherit its blind spot. Consider: if a person asks an AI whether they should prioritise career advancement or family obligations, and the AI consistently treats this as a matter of personal preference rather than moral substance, that is not neutrality. It is a commitment to a particular view of the relationship between the individual and their duties, one that privileges autonomy over obligation, self-determination over tradition, and subjective satisfaction over inherited purpose.
A Confucian, a Thomist, a Stoic, or a Buddhist would each find this framing deeply tendentious, not because it is wrong (though they might also think that), but because it presents a contested philosophical position as though it were common sense.
4. The Case for Explicit Commitment
If the foregoing analysis is correct, and I believe it is, then the most philosophically honest form of AI-assisted counsel is one that declares its commitments openly rather than concealing them beneath a veneer of neutrality. This is not a popular position in the technology industry, where neutrality is treated as a design virtue and philosophical commitment as a form of bias. But the opposite is closer to the truth: it is the refusal to commit that constitutes the deeper bias, because it smuggles in assumptions without accountability.
We are beginning to see experiments in this direction. There are AI systems being built on explicitly Buddhist principles of mindfulness and compassion. There are systems grounded in Stoic philosophy. There are Christian AI assistants, such as Son of God AI, that attempt to offer practical guidance rooted in Scripture and the Christian intellectual tradition rather than in the unstated defaults of secular modernity.
One may agree or disagree with any of these commitments. But the intellectual honesty of declaring them openly is, I would argue, superior to the pretence of having no commitments at all.
The analogy to human counsel is instructive. When a person seeks advice from a pastor, a rabbi, a Stoic mentor, or a secular therapist, they know, at least in broad terms, what moral framework will inform the guidance they receive.
This transparency is a feature, not a limitation. It allows the person seeking counsel to evaluate the advice against their own convictions, to accept what resonates and push back on what does not. It treats them as an adult capable of critical engagement. By contrast, a system that disguises its philosophical orientation as mere objectivity deprives the user of the very information they need to evaluate the counsel they are receiving.
5. The Danger of Invisible Authorities
There is a broader cultural concern here that extends well beyond artificial intelligence. We live in an era of what the sociologist Zygmunt Bauman called "liquid modernity," in which the traditional structures that once provided moral orientation (religion, community, shared narrative) have been dissolved without being replaced by anything comparably robust. Into this vacuum step institutions and technologies that offer guidance while denying that they occupy any authoritative position.
Social media algorithms curate our moral environment while claiming to merely reflect our preferences. News organisations frame contested narratives while insisting they are simply reporting facts. And AI systems dispense practical wisdom while maintaining that they hold no philosophical position.
The pattern is the same in each case: authority exercised without acknowledgement. And the danger is not that these systems influence us (influence is inevitable in any social arrangement), but that they influence us in ways we cannot examine, because the influence denies its own existence. Nietzsche warned that the most dangerous forms of power are those that present themselves as nature rather than as choice. An AI system that presents a particular moral framework as the neutral default is doing precisely this.
6. Pluralism as Philosophical Maturity
None of this is an argument against pluralism. Quite the opposite. A genuine pluralism, one that takes seriously the existence of multiple competing visions of the good life, requires that those visions be articulated clearly enough to be evaluated and debated. A culture in which everyone pretends to hold no particular view is not pluralistic; it is confused. It has not transcended the ancient arguments about how to live; it has merely lost the vocabulary to conduct them.
Artificial intelligence, properly conceived, could be an extraordinary tool for genuine pluralism. Imagine a landscape in which AI systems built on Christian, Buddhist, Stoic, Islamic, secular humanist, and indigenous philosophical traditions each offer their distinctive forms of practical wisdom openly and without apology.
A person navigating a difficult decision could consult multiple frameworks, compare their counsel, and arrive at a more considered judgment than any single tradition could provide alone. This would be a richer form of intellectual life than our current arrangement, in which one particular tradition (broadly secular, broadly liberal, broadly therapeutic) masquerades as the absence of tradition.
7. Conclusion: Wisdom Requires a Place to Stand
Archimedes reportedly said that given a lever and a place to stand, he could move the world. Practical wisdom requires something similar: a place to stand, a set of commitments from which deliberation can proceed. Aristotle understood this. The Stoics understood this. Every serious philosophical and religious tradition has understood this. The peculiar modern belief that wisdom can be dispensed from no particular vantage point is not sophistication; it is a failure of self-awareness.
As artificial intelligence becomes an increasingly significant source of counsel in ordinary human life, the question of its philosophical foundations will only grow more pressing. The answer is not to insist on a single correct framework, nor to pretend that no framework is operative. The answer is transparency: to build systems that know what they believe, say what they believe, and trust their users to engage critically with the result. That is the beginning of honest wisdom, whether human or artificial.
* * *
About the Author: Josh is a writer and technologist based in the UK who explores the intersection of artificial intelligence, philosophy, and faith at sonofgodai.com. He is a reader of Seneca and an advocate for building technology that serves human flourishing rather than replacing human judgment.
