Research shows that human-like language exaggerates AI’s perceived capabilities and shifts responsibility away from its developers.

A study has found that everyday mental-state language can make people view artificial intelligence as more human than it really is. When verbs such as “thinks,” “knows,” “understands,” and “remembers” are applied to AI, they can lead people to mistakenly attribute human-like beliefs and intentions to machines.
Researchers from Iowa State University published their findings in the journal Technical Communication Quarterly.
According to the team, such language creates misunderstandings in two key ways. First, it exaggerates AI’s capabilities beyond reality. Phrases like “AI decided” or “ChatGPT knows” make systems seem autonomous and intelligent, raising trust and expectations beyond what is warranted. In reality, systems built on large language models (LLMs) are tools that analyze statistical patterns in data to generate responses; they do not think or make independent judgments.
Second, this kind of wording obscures accountability. Describing AI as if it has intentions can mask the roles of the developers and companies that design, train, deploy, and manage these systems. As the researchers noted, “Anthropomorphic language can distort how responsibility for AI is understood by shaping readers’ perceptions.”
How frequently do media outlets use such expressions? To find out, the research team analyzed the News on the Web (NOW) corpus, which contains more than 20 billion words from English-language news articles across 20 countries. The findings suggest that journalists generally use cautious language. With “AI” as the grammatical subject, the most common mental verb was “needs” (661 instances), while “knows” appeared only 32 times with “ChatGPT” as the subject. The researchers linked this restraint to editorial standards at major outlets, including guidance from the Associated Press, which advises against attributing human emotions or traits to AI.
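The article does not describe the researchers’ extraction pipeline, but the kind of query it reports, finding sentences where “AI” or “ChatGPT” is the grammatical subject of a mental verb, can be sketched with an off-the-shelf dependency parser. The snippet below is a minimal illustration using spaCy, not the study’s actual method; the verb inventory and subject terms are assumptions chosen for the example.

```python
import spacy  # pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

# Assumed inventory of "mental" verbs; the study's actual list is not given here.
MENTAL_VERBS = {"think", "know", "understand", "remember", "need", "decide"}
SUBJECTS = {"AI", "ChatGPT"}

def count_mental_verbs(texts):
    """Count how often an AI term appears as the nominal subject of a mental verb."""
    counts = {}
    for doc in nlp.pipe(texts):
        for token in doc:
            # 'nsubj' marks a nominal subject; token.head is the verb it attaches to.
            if token.dep_ == "nsubj" and token.text in SUBJECTS:
                verb = token.head
                if verb.pos_ == "VERB" and verb.lemma_ in MENTAL_VERBS:
                    counts[verb.lemma_] = counts.get(verb.lemma_, 0) + 1
    return counts

print(count_mental_verbs([
    "AI needs vast amounts of data.",
    "ChatGPT knows the answer.",
    "AI analyzes patterns in text.",  # 'analyze' is not in the verb list, so it is not counted
]))
# -> {'need': 1, 'know': 1}
```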
The study also found that meaning depends heavily on context. For example, saying “AI needs vast amounts of data” is similar to describing the requirements of a car or a recipe. But phrases like “AI needs to understand reality” can lead people to project human qualities, such as reasoning, ethics, or consciousness, onto machines. In this way, the same verb can shape perceptions of AI very differently depending on how it is used. As the researchers noted, “The language media chooses quietly shapes how readers understand AI, what they expect from it, and how they perceive the humans behind it.”
