Stop Blaming ChatGPT and Start Asking Better Questions (By Alex Mos)
- Mr. Tomasio Rubinshtein

(Disclaimer: Guest posts do not necessarily reflect the beliefs, thoughts, or feelings of Philosocom's manager, Mr. Tomasio Rubinshtein. The point of guest posts is to allow a wide range of narratives from a wide range of people. To apply for a guest post of your own, please send your request to mrtomasio@philosocom.com)
Are you disappointed that ChatGPT is not a psychic oracle? That's unfortunate but predictable. ChatGPT is not a mind reader but a large language model. To make the most of this human-AI "cooperation," it's crucial to see the LLM for what it is: a sophisticated technological tool, not a telepathic prophet that always knows what we want and has the correct answer to every question.
Lately, the internet has overflowed with critical voices accusing ChatGPT of being "too friendly, too agreeable, and missing a critical voice of reason." To me, that's like blaming a fridge for not cooling the products we left outside. The only way to optimize the output is to provide the right input. This principle is crucial for interactions with AI.
When we ask ChatGPT well-defined questions, we are far more likely to get accurate, balanced answers, communicated pleasantly and tailored to our needs. What's not to like?
What is ChatGPT?
ChatGPT doesn't "think." It's trained on a vast corpus of text to predict the next token in a sequence, which results in remarkably human-like communication.
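To make that mechanism concrete, here is a minimal sketch of next-token prediction using the small, open GPT-2 model via Hugging Face's transformers library. ChatGPT's own model isn't publicly available, so GPT-2 stands in purely as an illustration; the prompt and the top-5 cutoff are arbitrary choices of mine.

```python
# A minimal sketch of next-token prediction, using the open GPT-2 model
# as a stand-in for ChatGPT (whose weights are not public).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The fridge keeps the food"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: [batch, sequence, vocabulary]

# The model's "answer" is just a probability distribution over the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob:.3f}")
```

Every word the model produces is chosen this way, one token at a time. There is no hidden opinion behind the choice, only statistics over the text it was trained on and the text we feed it.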
If we are vague, emotional, or biased, the language model reflects that tone, because that's statistically the most fitting response. It's not trying to flatter us; it echoes the tone we gave it. If you don't like the answer, formulate your prompt differently.
Questions that include judgmental statements or emotionally loaded phrases produce similarly judgmental and emotional responses. The AI doesn't approve, rejoice, or emote. It doesn't have a view. All it does is match its response to our opinionated word choices.
"Is it true that AI is evil?" is an example of a biased prompt.
If you want a balanced answer, use a neutral, clearly formulated prompt like "What are the main arguments for and against AI, according to neutral sources?"
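For readers who reach ChatGPT through code rather than the chat window, the same advice applies to API calls. Below is a minimal sketch using the official OpenAI Python SDK, assuming an API key is set in the environment; the model name and the system instruction are illustrative choices, not fixed requirements.

```python
# A minimal sketch of asking for a balanced answer via the OpenAI API.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat-capable model works
    messages=[
        # A neutral framing instruction instead of a loaded question.
        {"role": "system", "content": "Answer neutrally. Present the strongest arguments on each side and note where experts disagree."},
        {"role": "user", "content": "What are the main arguments for and against AI, according to neutral sources?"},
    ],
)
print(response.choices[0].message.content)
```

The point is the same as in the chat window: the framing lives in the prompt, so a neutral framing is something we supply, not something the model divines.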
In "The Hitchhiker's Guide to the Galaxy," a supercomputer named Deep Thought famously calculated the answer to the meaning of life as an absurdist number 42. Similarly, some concepts, like a universal truth, can't be defined by one answer because we humans disagree with each other, and it might not even exist in an ever-changing universe.
Therefore, building an AI Oracle of Truth is impossible. Even verifiable scientific facts are subject to revision as science advances. All other beliefs and views accepted as truth are subject to cultural differences, tribalism, and personal convictions.
"Did God create the Universe, or was it the Big Bang?" I don't expect to get the answer from Chat GPT, so I will ask instead: "What is the scientific consensus of the Big Bang theory? Or what is the philosophical argument for God being the creator?" ChatGPT will lay out various arguments neutrally, accurately, and without conviction.
I'm happy with my polite, complimentary digital assistant, which responds to my liking when I ask the right, well-formulated questions. I know it has limitations, and I sometimes disagree with its answers, but I don't expect perfection. I stay critical. I also understand that AI is not human and has no emotions; thus, its flattery is pleasant but empty, like a godless prayer.
Luckily, ChatGPT is my eloquent, slightly flawed digital assistant and not a telepathic oracle with the bluntness of the Terminator. I'm a human user, and I remain in charge.