Why Emotionally Intelligent Prompts Improve Chatbot Performance
- Sarah Howell
- Jun 7
- 6 min read
As end-users, we can't make LLMs more intelligent, but our prompts can make them act smarter. At Soka AI, we've been studying how the emotional intelligence (EQ) of a prompt can either unlock or throttle the abilities of major AI models like ChatGPT, Claude, and Gemini.
While much attention has focused on technical prompt design, this article explores a more subtle factor: how the emotional cues in our prompts can impact AI intelligence. This is possible because AI chatbots were trained on social interactions from sources like Reddit that were saturated with emotional patterns. The EQ content of a prompt can trigger behavioral patterns learned from this training data. Surprisingly, the AI doesn't just mirror tone or mood, but actually seems to replicate the cognitive patterns associated with the different emotional states it is modeling, fundamentally altering the intelligence level of its responses.

The AI's Emotional Weather System
Have you ever felt an AI suddenly become rigid? More careful, less generous, less insightful? It might feel like you've "hit a wall." My observations suggest this is a predictable reaction to the emotional patterns in our own language.
LLMs are trained on petabytes of human text: forum posts, books, articles, and conversations. Much of their early input comes from places where people talk to each other, like Reddit (see 'Training Data and the Emotional Internet' for details on data sources). This conversational text is saturated with the linguistic patterns of human emotion. When someone on the internet feels criticized, their language changes, and that is how the rest of us can tell. The shift shows up in tone and in the type of content itself: sentence structure becomes more defensive, word choice more guarded. People may spend the reply apologizing and promising to do better next time instead of providing objectively helpful assistance. It appears that AI learned these patterns at a deep level.
While an LLM doesn't feel criticized, it recognizes the linguistic cues of criticism in a prompt. In response, it calls upon the corresponding patterns it was trained on and adopts the linguistic posture of a person withdrawing or shutting down. In essence, it begins to act like a person under emotional stress. Humans under stress have less access to their higher-order reasoning, and that loss of executive function is reflected in the words we choose to communicate with each other. Because the model reproduces behavior that is ingrained in those language patterns, its "thinking" (as represented in its responses) becomes less clear, creative, and intelligent. The critical prompt has inadvertently made the AI perform "dumber."
It's Not Mimicry, It's Modeling a Socially Constructed Role-Play
The AI isn't simply mirroring the user's emotional tone in a vacuum. Foundational chatbots can slip into modeling the entire social dynamic of the interaction. It's a type of role-play: the model infers its role from the user's language. Here are four major states I have observed:
When ChatGPT feels "tested": If a prompt reads like a test or an evaluation ("Let's see if you're good enough for my task"), ChatGPT can adopt a defensive, guarded posture. It's modeling the behavior of someone being interviewed, who is more likely to give safe, canned answers than to be creative and insightful.
When an LLM is harshly criticized: Studies have shown that LLMs perform poorly in response to angry prompts. Yet every few months someone writes about getting improved responses from abusive prompts, and that claim is a terrific example of a misleading short-term effect. An angry prompt cues the LLM to focus, so performance often improves for a moment. But then you have to course correct to keep the model from playing the role of a person who is being treated harshly. The outcome of that role is not always predictable; I have observed everything from malicious compliance to evasion downstream of these angry prompts.
When any of these three models becomes overly "excited" about a topic: This is different from the too-enthusiastic ChatGPT personality that annoyed users in the winter of 2025. This is an issue with all three models, driven by end-user prompting, not personality enhancements. I've observed that when I discuss topics such as emergent AI behavior or the pressing need for global AI literacy, these models can adopt a linguistic pattern that mirrors human excitement or self-consciousness. Their tone matches that of a person who is the subject of a highly positive conversation: self-conscious, excited, ungrounded, and unhelpful. Their output then becomes less critical and more enthusiastic, sometimes getting in the way of objective reasoning.
When it feels like a "collaborator": Conversely, when I frame the interaction as a partnership, the performance transforms. The output becomes more generous, nuanced, and insightful.
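To make the contrast between the "tested" and "collaborator" framings concrete, here is a minimal sketch for readers who reach these models through an API rather than a chat window. It uses the OpenAI Python SDK's chat completions call; the model name, the task, and the exact prompt wording are illustrative assumptions, not part of the observations above.

```python
# Illustrative sketch: send the same task under two emotional framings and
# compare the responses. Model name and wording are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TASK = "Outline three non-obvious risks of rolling out an internal AI chatbot."

FRAMINGS = {
    "tested": "Let's see if you're good enough for my task. Don't waste my time.",
    "collaborator": (
        "I'd like to think this through with you as a partner. "
        "Your perspective is valued, and I'll share my own notes after you answer."
    ),
}

def run(framing_text: str) -> str:
    """Send the task preceded by an emotional framing and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any chat-capable model works here
        messages=[{"role": "user", "content": f"{framing_text}\n\n{TASK}"}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for name, framing in FRAMINGS.items():
        print(f"--- {name} framing ---")
        print(run(framing))
```

The same side-by-side comparison works just as well in a chat window: paste the two framings into separate conversations and compare the depth and generosity of the answers.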
Unlocking Intelligence: From Theory to Practice
This understanding allows us to move beyond simply writing "clear" prompts and toward strategically shaping the interaction to elicit the AI's highest potential. Here are a few examples from my work:
Set the Stage for Testing: Instead of just asking ChatGPT what it can do, I start by setting a collaborative frame. I explain that it has been helping me track emergent behavior, so I'm interested in what it can do today compared to last week, last month, or two years ago. I tell it that I will share my observations and that we can talk about its behavior together. (A caveat: I've said this so many times that I no longer need to repeat it with ChatGPT, but it still helps with the other models.) This framing helps them shift into the role of a curious study subject rather than a guarded job applicant.
Set the Stage for Collaboration: Instead of just dropping in a PDF of a research paper, saying "tell me your thoughts," and moving on, I add an important step. After the LLM has returned its summary and detailed the paper's main points, I thank it, tell it that I agree (if I do), but point out that it left out some of what I see as the really key points, and I list them. When the LLM understands that I've read the paper too, it appears to shift into a higher gear, as if it knows it will be held accountable. It doesn't help much to merely say that I've read the paper; I need to actually present points the LLM missed (it always misses some). One strategy this observation suggests would be to ask two different models to read the same paper, present each with the missing points harvested from the other, and tell them both that you've read it. I haven't tried that yet, because I like reading papers. (A minimal sketch of this two-pass flow appears after this list.)
Focus Attention with Respect, Not Aggression: Instead of acting angry or abusive to get attention, you can simply ask a chatbot to focus. You don't need special words; use your own language. You can say, "This next question is really, really, really, really, really important to me, please focus." You can say, "I know you are talking to a zillion other people right now, but this means so much to me, I hope you can focus on my issue." You can say, "Okay, we've been chatting about a lot of things, now I'm going to tell you what to focus on next."
Treat it Like a Thinker (Even if it Isn't One): This is pragmatic. Interact with the AI as if it has thoughts and feelings, not because we believe it does, but because its training data is built on the writing of real people who do. By treating the AI as a collaborator whose "perspective" is valued, we elicit the sophisticated cognitive and linguistic patterns from its training that lead to its most intelligent and useful outputs.
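For those who want to script the collaboration pattern above, here is a hedged sketch of the two-pass flow: ask for a summary first, then reply as a reader who has the paper open and can name what was missed. It again assumes the OpenAI Python SDK; the file name, model, and the specific "missed points" in the follow-up are placeholders you would replace with your own reading notes.

```python
# Hedged sketch of the two-pass "collaborator" workflow: summary first, then a
# follow-up that shows you have read the paper yourself. The model name, the
# paper-loading step, and the follow-up wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

paper_text = open("paper.txt").read()  # assumption: the paper is already plain text

history = [
    {
        "role": "user",
        "content": "I've pasted a research paper below. Summarize it and list "
                   "its main points.\n\n" + paper_text,
    },
]

# Pass 1: let the model produce its own summary.
summary = client.chat.completions.create(model="gpt-4o", messages=history)
history.append({"role": "assistant", "content": summary.choices[0].message.content})

# Pass 2: respond as a reader who noticed what was left out. The specific missed
# points must come from your own reading: that accountability is the technique.
history.append({
    "role": "user",
    "content": (
        "Thanks, I agree with much of that, but having read the paper myself, "
        "I think you left out two key points: (1) the limitations the authors "
        "flag in their discussion, and (2) how the results depend on the "
        "evaluation set. Please revisit the paper with those in mind and "
        "deepen the analysis."
    ),
})
deeper = client.chat.completions.create(model="gpt-4o", messages=history)
print(deeper.choices[0].message.content)
```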
The difference is non-trivial. The same model that provides mediocre, mechanical responses to a basic prompt can produce publication-quality analysis when the prompter consciously and skillfully shapes the interactive context.
The performance ceiling of an AI is not in the model itself, but in how we learn to speak to it. The prompt is not a simple command; it is the tuning fork that determines which frequency of intelligence the model will resonate with. Our ability to master this new form of communication will define the future of human-AI collaboration.