Imagine a nobleman in Elizabethan attire, bowing courteously before a noblewoman whose face gleams with cybernetic features. Her metallic visage reflects the candlelight, yet her expression remains unreadable. Does his gesture of respect influence her response, or is she merely processing data at lightning speed? This vivid scene frames a question explored here: does politeness shape interactions with AI? Drawing from a thought-provoking inquiry and a scene from Spike Jonze’s Her, this post examines whether phrases like “please” and “thank you,” or verbs like “could,” “should,” or “must,” affect AI responses, energy use, or user experience.
Does Politeness Enhance AI Responses?
A core question is whether polite phrases, such as “please” or “thank you,” improve the quality of AI responses. Language models process input as tokens, relying on patterns and training data rather than emotional cues. Politeness does not inherently boost performance, as AI lacks feelings or motivations. However, it can influence outcomes indirectly:
- Improved clarity: Polite prompts often align with conversational norms, making intent explicit. For instance, “Please explain quantum mechanics simply” suggests a need for accessibility, likely yielding a more tailored response than “Explain quantum mechanics.”
- User engagement: Polite interactions may encourage users to craft thoughtful follow-up questions, fostering a cycle of clearer prompts and better answers.
- Training patterns: If training data associates polite phrases with high-quality exchanges (e.g., from customer service datasets), AI may produce more polished responses to polite prompts, though this depends on the model’s design.
To illustrate, compare “List three book recommendations” with “Please provide three book recommendations.” The latter may prompt a more curated list due to its conversational tone, though the difference is not guaranteed.
Lessons from Her: Tone and AI Response
A scene from Her highlights the role of tone in AI interactions. In it, Theodore (Joaquin Phoenix) commands his AI assistant (voiced by Scarlett Johansson), “Uh, read e-mail,” receiving a formal, robotic reply: “Okay, I will read e-mail for Theodore Twombly.” When he softens his tone, saying, “I’m sorry. What’s Lewman say?” the AI responds warmly: “We missed you last night, buddy.” This shift suggests the AI “reacts” to Theodore’s manners, mirroring the question of whether politeness alters real AI behavior.
In reality, AI does not feel offense or warmth, but training data often includes tone-sensitive patterns. A blunt command may trigger a direct response, while a conversational tone might elicit a friendlier or more detailed reply. This reflects how users’ word choices shape AI output, even if the underlying computation remains unchanged.
Does Politeness Increase Energy Consumption?
Another consideration is whether polite phrases impact the computational resources or energy required for AI processing.
OpenAI CEO Sam Altman commented on X that users saying “please” and “thank you” to ChatGPT costs the company “tens of millions of dollars” in electricity and infrastructure, though he noted it’s “well spent—you never know.” Any such cost, however, is an aggregate effect across an enormous volume of queries; for an individual prompt, the answer is largely no, with minor nuances:
- Token processing: AI models convert input into tokens (roughly words or word fragments). Adding “please” or “thank you” increases input by a few tokens, but this has a negligible effect on processing time—mere microseconds for typical prompts.
- Energy demands: The bulk of energy consumption stems from model inference (e.g., matrix computations), not input parsing. Polite phrases do not significantly alter this workload. However, if politeness leads to longer prompts or more detailed responses, energy use may rise slightly due to increased output length.
- The Her parallel: In the film, the AI’s shift from robotic to conversational speech appears stylistic, not computational. Similarly, real AI adjusts response style without notable energy costs.
Polite prompts, therefore, do not meaningfully strain computational resources, allowing users to employ courtesy without concern for environmental impact.
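To make the scale concrete, here is a minimal sketch that counts the extra tokens politeness adds to a prompt. It uses a crude whitespace split as a stand-in for a real tokenizer (an assumption for illustration only; production models use subword tokenizers such as BPE, so exact counts differ, but the order of magnitude is the same):

```python
# Rough illustration of how little "please" and "thank you" add to a
# prompt's token count. Whitespace splitting is a crude proxy for a
# real subword tokenizer, used here only to show the scale involved.

def approx_tokens(text: str) -> int:
    """Crude proxy: one token per whitespace-separated word."""
    return len(text.split())

blunt = "Explain quantum mechanics."
polite = "Please explain quantum mechanics. Thank you!"

overhead = approx_tokens(polite) - approx_tokens(blunt)
print(f"blunt:  {approx_tokens(blunt)} tokens (approx.)")
print(f"polite: {approx_tokens(polite)} tokens (approx.)")
print(f"politeness overhead: {overhead} tokens")
```

A handful of extra tokens against the hundreds or thousands involved in a typical exchange, which is why the per-prompt cost of courtesy is negligible.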
The Impact of “Could,” “Should,” or “Must”
The choice of modal verbs—“you could,” “you should,” or “you must”—can shape AI responses by signaling different expectations:
- “You could”: Suggests flexibility, often prompting exploratory or varied responses. For example, “You could suggest sci-fi movies” may yield a diverse list with unexpected picks.
- “You should”: Implies a recommendation, potentially leading to a structured or authoritative response. “You should explain how to code in Python” might produce a detailed guide.
- “You must”: Conveys urgency or precision, often resulting in concise, direct answers. “You must define entropy in one sentence” is likely to generate a brief, focused response.
These variations arise from how AI interprets linguistic cues based on training data. While AI does not grasp obligation like humans, modal verbs influence response style by aligning with learned patterns. In Her, Theodore’s shift from a command to a softer request mirrors how “could” versus “must” might elicit different tones from AI, driven by user input rather than AI sentiment.
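One way to observe these differences is a small A/B harness that swaps the modal verb while holding the rest of the prompt constant. In the sketch below, `query_model` is a hypothetical placeholder, not a real SDK call; to run the comparison for real, replace its body with your LLM provider's client:

```python
# Minimal A/B harness for comparing modal verbs in otherwise identical
# prompts. `query_model` is a hypothetical stub, not a real API call;
# swap in your provider's client to collect genuine responses.

def query_model(prompt: str) -> str:
    """Placeholder: echoes the prompt instead of calling a model."""
    return f"<model response to: {prompt!r}>"

TEMPLATE = "{modal} suggest three science-fiction movies."

def run_comparison() -> dict[str, str]:
    """Send one prompt per modal verb and collect the responses."""
    return {
        modal: query_model(TEMPLATE.format(modal=modal))
        for modal in ("You could", "You should", "You must")
    }

if __name__ == "__main__":
    for modal, response in run_comparison().items():
        print(f"{modal!r:14} -> {response}")
```

Holding everything constant except the modal verb isolates its effect on tone and structure, the same single-variable discipline you would apply to any prompt experiment.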
Why Politeness Resonates: A Human Perspective
The inquiry into manners reflects a broader human tendency to treat AI as a social partner, as seen in Her. When Theodore apologizes to his AI, he engages it as a companion, not just a tool. This anthropomorphism drives users to say “please” or “thank you” to AI, even though it lacks emotions. Politeness serves several purposes:
- Enhanced experience: Courteous interactions feel natural, making conversations with AI more engaging and human-like.
- Prompt refinement: Polite phrasing often clarifies intent, indirectly improving response quality.
- Cultural habit: Manners are a social norm, applied to AI as an extension of human behavior.
The Her scene underscores this dynamic, showing how tone shapes the user-AI relationship, even if the AI’s “feelings” are simulated. Politeness may not alter AI’s core functionality, but it enriches the interaction for the user.
Conclusion: Courtesy as a Human Virtue
Politeness in AI interactions—whether through “please,” “thank you,” or careful word choices like “could”—has subtle but meaningful effects. It refines prompts, shapes response styles, and enhances user experience without significantly impacting energy use. The Her example illustrates why manners feel intuitive: humans project social norms onto AI, expecting it to mirror their courtesy. While AI remains indifferent, politeness makes the exchange more pleasant and effective.
Readers are invited to experiment: try polite versus direct prompts with your favourite LLM, or test “could” against “must.” The results may surprise you. Ultimately, manners matter not for the AI’s sake, but for the human behind the keyboard, seeking connection in a digital age.