AI & The Quantity Maxim

Rania Bailey
3 min read · Jul 5, 2024


In a linguistic exchange between humans, there is a listener and a speaker, and the exchange is generally intended to convey information or inspire an action. In daily life, this could mean updating a colleague on the state of a shared project, or asking them for a particular supporting piece of the project so that they provide it. Informing the listener and inspiring action.

Photo by Kenny Eliason on Unsplash

In the 1970s, the philosopher H. Paul Grice developed a set of maxims that further describe linguistic exchange. These four maxims are quality, quantity, relevance, and manner. Human communication almost invariably follows these maxims in accordance with the “cooperative principle”, or the shared goal of the listener and speaker to understand each other.

LLMs, being programmed machines with no capacity for internal goal-setting, do not necessarily share this goal. Unbound by the cooperative goal of shared understanding, they are more prone to producing text that violates Grice’s maxims. (This is part of why AI-written “content” sounds so hollow.)

Investigating the maxim of “quantity” can illuminate why LLMs seem to be so verbose. Grice’s maxim of quantity states that the speaker makes their speech as informative as is required for current purposes, and no more than that. Navigating this idea of “just enough” information in practice is a highly contextual activity. Humans are highly adept at parsing environmental and interpersonal context to infer the speaker’s intent and meaning, but LLMs must operate with only the context they’re given. This can blur the lines of what “just enough” information really means in a given scenario.

When our conversation partners (implicitly) apply Grice’s maxim of “quantity”, they do some filtering for us. We can trust that the information that they’ve chosen to share is just enough for us to grasp their meaning (hopefully accurately). LLMs do not (yet) offer this kind of contextual filtering, making them prone to verbose outputs that may or may not contain enough information to respond adequately to the prompt.

This can be somewhat mitigated through careful prompting. Asking an AI for a “brief” or “short” response can trim some of the unnecessary information, and asking it for something specific can guide it toward providing the correct information. This is a subtle but dramatic shift in the way we are accustomed to conversing. With an LLM, the speaker (the prompter) is responsible for providing all of the relevant context and for narrowing the possibilities for what a reasonable response might be. This violates Grice’s maxim of quantity and thus demands that AI prompts be meaningfully different from ordinary speech, even though they look and sound nearly identical.
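As a concrete illustration, the quantity constraint can be stated explicitly in the prompt itself. Below is a minimal sketch using the OpenAI Python client; the model name, instruction wording, and question are illustrative assumptions, not a prescription.

```python
# Sketch: making the "just enough information" expectation explicit,
# since the model cannot infer it from shared context the way a
# human listener would. Assumes the OpenAI Python client (openai>=1.0)
# and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            # The prompter supplies the quantity constraint that a
            # human speaker would normally apply on their own.
            "role": "system",
            "content": (
                "Answer in two sentences or fewer. Include only the "
                "information needed to answer the question directly."
            ),
        },
        {"role": "user", "content": "What is Grice's maxim of quantity?"},
    ],
)
print(response.choices[0].message.content)
```

Whether such instructions are honored varies by model, but making the desired quantity explicit shifts the filtering work from the model back to the prompter, which is exactly the reversal described above.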

Identifying the ways that LLMs’ linguistic output differs from human conversation can help us use LLMs more effectively. What are some key differences you’ve noticed between human-human and human-AI exchanges?
