Does A.I. Promote Bad Faith? (By Tyler Young)

(Disclaimer: Guest posts do not necessarily reflect the beliefs, thoughts, or feelings of Philosocom's manager, Mr. Tomasio Rubinshtein. The point of guest posts is to allow a wide range of narratives from a wide range of people. To apply for a guest post of your own, please send your request to mrtomasio@philosocom.com)
Artificial Intelligence has quickly become an unavoidable aspect of life on Earth. It is almost impossible to name a field that has not been affected, and people have realized the extent to which A.I. can be made helpful to our daily lives. It can be very productive, but whether some of its uses are beneficial or detrimental remains controversial.
This essay will explore how Artificial Intelligence affects our freedom, specifically the ways it might promote the existential idea of bad faith. It will examine what bad faith is, what the dangers of A.I. are, how these two subjects intersect, and finally how we can avoid bad faith in our use of A.I.
Jean-Paul Sartre regards bad faith as a pattern of behavior that is detrimental to oneself. Therefore, if we find that A.I. commonly promotes bad faith, we should be very wary about its use.
Jean-Paul Sartre is an Existentialist in the most literal sense: his ideas, more than anyone else's, have come to define what existentialism is. His philosophy emphasizes individuality, freedom, choice, and authenticity.
While creating a nearly complete picture of Existentialism, he fails to describe an ethics. One thing we can assume about how Sartre believes a person should live is that they should be authentic, as opposed to acting in what he calls bad faith. Bad faith is a mental behavior that takes away from our freedom of choice.
Sartre says in his lecture, Existentialism is a Humanism, “You are free, therefore choose—That is to say, invent.” It is clear here that choice is central to what it means to be human. In fact, Sartre believes we are “condemned to be free.” Bad faith is a sort of lie to oneself that ignores one’s freedom of choice. The lie being told is that a person is not transcendent, that is, that they are pure facticity.
Transcendence is the ability to choose and act, while facticity is the objective context in which we choose. Bad faith is a person acting as if they are not free to choose. If being free to choose means that we are completely responsible for our choices, then denying that you are free to choose takes away this responsibility. Acting in bad faith renders you inauthentic.
Not all uses of Artificial Intelligence fall into this category. To be clear, this argument deals with A.I. used to complete tasks, supplement decision making, or outsource choices. Examples of these types of actions include:
- asking an A.I. chatbot what to say in a conversation,
- seeking A.I. to influence a business, relational, or personal decision,
- or prompting A.I. to do something menial that you would normally do yourself.
It is clear that A.I. is frequently used to make choices for us. At first this does not seem harmful. It might clear up time in our day for other things. It might have access to more data about a situation and therefore make a more informed decision. The effects that are less considered, however, are the ways our freedom is minimized. Asking A.I. to do tasks like the ones above outsources our choices.
There are similarities between bad faith and the most common uses of Artificial Intelligence in daily life. Sartre’s existential bad faith includes the ways we ignore our ability to choose. When someone asks A.I. what they should do, they are outsourcing their own ability to choose. What is essentially happening is an active ignoring of one's freedom, and therefore a displacement of responsibility.
An argument against this might say that A.I. can be used to make informed decisions. This is true. But when one asks A.I. what they should do, and then does it, who made the decision? And if that decision turns out to be the wrong one, will the person blame themself, or will they think that A.I. made a mistake? Acting in bad faith means assuming that you bear no responsibility, that you are simply acting as you must. This lie is perpetuated when people use A.I. to make decisions for them.
Not all uses of Artificial Intelligence tempt a person to act in bad faith. A use that does not would have to help a person without making decisions for them. A.I. can still do a job for someone, but it cannot assume responsibility for anything that happens because of that job being done.
A few examples might include asking A.I. to organize data. Organizing data is a menial task that a person might not wish to do, and A.I. can do it in moments without stealing freedom from the user. Another example might be asking A.I. to find research. If it were asked to interpret the research and make a decision based on it, then the A.I. would assume responsibility for the decision and take freedom away from the person. Users of A.I. must be careful, but it can still be a helpful tool.
It is no myth that Artificial Intelligence is all around us. We cannot escape it. So the task we must undertake is to find out how we should use it. Humans must use A.I. while fully acting with freedom and responsibility so that they do not fall into bad faith. Examples of uses that do not tempt us into bad faith have been given above.
Another way we can use A.I. without hindering our freedom and responsibility is to assume full responsibility for our choices, whether we asked A.I. to make them or not. A person who asks someone, or something, else to make decisions for them has not handed off the action itself. The act, though informed by an outside source, was still performed by the person.
This must be realized even when the choice was not made by the person themselves. Admittedly, this is a dangerous line: having someone or something else make choices for a person, while that person still assumes responsibility, can get messy. The safest course would be to avoid using A.I. for the kinds of tasks that commonly tempt a person into acting in bad faith.
Existentialist ideas place importance on individual choice as the basis of authenticity. Our ability to choose, our freedom, is our most prized possession. If one agrees with these ideas, one should seek to avoid bad faith. A person accepts full responsibility for their actions only by recognizing that their actions are their own.
It is apparent that Artificial Intelligence can encourage one to act in bad faith. It tempts one to see their actions as someone else's, and to assume their choices are being made by another. To avoid bad faith, an A.I. user should aim to utilize Artificial Intelligence in ways that help, but that do not assume responsibility by taking away the user's freedom of choice.
Works Cited
Sartre, Jean-Paul. Jean-Paul Sartre: Basic Writings. Routledge, 2001.




