This essay originally was published on April 6, 2023, with the email subject line "CT No.162: Stochastic parrots vs. actual parrots."
by Arikia Millikan
As someone who leads a double life as both a content strategist and parrot trainer, I was excited to discover a New York Magazine story called “You Are Not a Parrot: And a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this.” However, I was dismayed to find that, despite boasting a lead image featuring Bender holding a disgruntled-looking parrot, the article doesn’t have anything to do with parrots. But one author’s wasted opportunity is another’s gain.
The article correctly asserts that in the field of natural language processing (NLP), large language models (LLMs) should absolutely be criticized, not only for the unethical ways in which they’re developed, but also for the potential harm that irresponsible use of these models could cause the biological world when they’re treated as a form of intelligence rather than machines. This was the main argument of the now-famous paper Bender coauthored titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” (pdf).
Why compare parrots to machines? Parrots, like most* humans, are intelligent beings. Parrot researchers like Dr. Irene Pepperberg have worked tirelessly to convince a largely unreceptive body of academics of parrot intelligence since the 1970s. I thought this fact was approaching the status of “common knowledge” until I read the NY Mag article. By latching onto the worn and scientifically disproven trope that humans are intelligent and all the other animals (including parrots) are not, we not only demean parrots, but we move further from establishing what makes human speech different from, and more valuable than, ChatGPT.
Don’t get me wrong; it’s a fantastic article in both prose and point — cautioning the reader against anthropomorphizing what Bender described as “machines that can mindlessly generate text.”
While this description is presumably meant to evoke a parrot, in reality, parrots do much more than mimic when they use language.
The origin of the “Stochastic Parrot”
When Bender et al.’s paper circulated at Google in late 2020 and was then published in March 2021, the term “stochastic parrot” entered the tech lexicon. The paper’s fame wasn’t necessarily due to its content or its title, but more to the resulting drama wherein half the authors got fired from the Google Ethical AI department that had commissioned the work in the first place. (Very ethical!)
Applied to AI, the term “stochastic” refers to unpredictable output due to randomness in the system. The patterns of a stochastic model can be analyzed statistically but not predicted precisely. Bender et al.’s metaphoric use of “stochastic parrots” refers to the blind repetition of a specific research trend (in this case, the trend of creating large and complex language models in a competitive landscape without considering the consequences).
OpenAI’s GPT products are also stochastic models, designed to generate responses by sampling from a probability distribution over possible next tokens, learned from a large corpus of text data. ChatGPT’s responses vary depending on the input the model receives and the current state of its internal neural network. Additionally, ChatGPT incorporates a degree of randomness in its output, allowing it to generate diverse and varied responses to the same input.
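That combination of properties — statistically analyzable but not precisely predictable, varied responses to identical input — can be made concrete with a toy sketch. The Python below is not OpenAI's implementation; it is a minimal illustration of temperature-based sampling from a softmax distribution, with an invented three-word "vocabulary" and made-up scores:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample one token from a softmax distribution over raw scores.

    With temperature around 1.0 the choice is stochastic: the same
    input can yield different outputs on different calls. As the
    temperature approaches 0, the sampler becomes effectively
    deterministic (it almost always picks the highest-scoring token).
    """
    rng = rng or random.Random()
    tokens = list(logits.keys())
    scaled = [logits[t] / temperature for t in tokens]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(tokens, weights=probs, k=1)[0]

# Toy "model": fixed, invented scores for the word that follows
# some prompt. A real LLM computes these scores with a neural network.
logits = {"hello": 2.0, "pretty": 1.5, "walnut": 0.5}

# Repeated sampling at temperature 1.0 gives varied continuations;
# the overall frequencies are statistically predictable even though
# no single draw is.
samples = [sample_next_token(logits, temperature=1.0) for _ in range(1000)]
print({word: samples.count(word) for word in logits})
```

The key design point is the `temperature` divisor: lowering it sharpens the distribution toward the single most likely token, while raising it flattens the distribution and makes the output more varied — which is why identical prompts to ChatGPT can produce different answers.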
But unpredictability doesn’t make something stochastic. Humans are unpredictable, but not stochastic. The way we use language is not a probabilistic model. The linguistic output of truly intelligent beings isn’t limited by input options. While stochastic models may seem more intelligent than previous deterministic models that dictate inputting X will return Y, it is a denigration and an insult to use the term “stochastic” when describing an intelligent being.
Unfortunately, “stochastic parrot” was further cemented when OpenAI CEO Sam Altman tweeted “I am a stochastic parrot, and so r u” four days after ChatGPT was released to the world.
*No, Sam Altman, you are not. Parrots are not stochastic, and neither are you.
Language comprehension and consciousness
To explore how programmatic decisions in LLM training data impact NLP models, I asked ChatGPT if it was accurate to say that parrots blindly repeat without understanding the meaning of what they’re saying.
“Parrots are known for their ability to mimic and repeat sounds and words, but it's not entirely accurate to say that they do so without understanding the meaning of what they are saying. While it's true that parrots don't have the same level of language comprehension as humans, they are intelligent animals with the ability to associate certain sounds and words with specific actions, objects, or concepts. While parrots are often associated with simple sound mimicry, it's important to recognize that they are capable of much more than that. Their ability to learn and use words in a meaningful way is a testament to their intelligence and adaptability.”
Then I asked whether parrots have higher language comprehension and usage abilities than ChatGPT.
“While parrots are certainly impressive in their ability to learn and use words, it would be inaccurate to say that they have higher language comprehension and usage abilities than ChatGPT or other advanced language models.”
Comprehension is simply defined as the ability to understand something. Do LLMs comprehend? Do they understand? Do they know what they’re saying? ChatGPT was trained to output a statement claiming that it does. But when I asked, “Is a mind required to comprehend information?” it replied:
“The question of whether a mind is required to comprehend information is a complex and debated topic in fields like philosophy, cognitive psychology, and neuroscience. However, in general, it is commonly believed that some level of cognitive processing is necessary for comprehension.”
So ChatGPT says it comprehends, but cognitive processing is needed for comprehension? These types of contradictions are common with ChatGPT output, which, contrary to the wishes of some programmers, does not really comprehend or understand language. So how do we describe what it is doing?
Hacker News user godelski used the term “emergent phenomena” to describe its state:
“Stochastic parrots are definitely emergent phenomena and, again, no one is saying that they aren't useful (they very much are). But emergence does not mean intelligent. These are different things. The universe is emergent but I don't think we'd say that water running, wind blowing, or stars are intelligent. These are different things.”
What we can learn from parrots about language
I’ll be the first to admit that I am incredibly biased — toward parrots. I’ve always liked them and felt a connection to them ever since I saw them in my backyard as a toddler.
It’s disappointing that humans, as a species, spend so much energy and money hunting for intelligent life in outer space when we have a remarkable form of non-human life right here: a creature so intelligent it can use our own languages to communicate with us. All that effort put into interspecies communication, and the best we can manage to treat parrots is to capture and subjugate them.
Meanwhile, we destroy their habitats to clear the land that hosts the server farms, which in turn power the “intelligent” LLMs that ethics researchers attempt to denigrate when they call them “parrots.” It’s a good thing parrots can only speak human and not read it, lest they fully comprehend the tragic irony of their circumstances.
And yes, parrots do comprehend, like humans. They understand options and make choices based on that comprehension. Sometimes they change their minds mid-action. While the stochastic output of an LLM can seem like an entity deciding or exercising creative thought patterns, it’s just an algorithm running on a computer.
Regarding the stuff that really matters for language comprehension and communication, parrots are vastly more adept than LLMs. Here are just a few reasons why:
1. The way parrots learn and use language is inextricably tied to their emotions, just like ours.
We don’t have to look beyond “stochastic parrot” to understand this concept — because it wouldn’t be nearly as widely known had it not been for the drama that ensued when half the authors got fired from Google. Our collective “WTF” response deepened the imprint of “stochastic parrots” in our brains.
Rule 1 of parrot training is don’t react verbally to bad behavior. If a parrot bites you with a beak that can crack a walnut shell in 1 second flat, you’re gonna feel some emotions. You might yell something at the parrot, an expletive perhaps. This is how parrots learn to swear in human tongue. They don’t know the etymology of the word “fuck,” but by screaming it out in pain, you’ve taught them it’s a meaningful, emotionally weighted word. Laughing when they say “fuck” again? You’ve reinforced the behavior. That parrot now knows that this one word can elicit not just one type of human emotion but several. Jackpot.
Large language models may reproduce text outputs based on their frequency of use in the training data, but they don’t have emotions. They won’t display one output over another because they want you to feel something. AI models like ChatGPT don’t want. They’re not alive.
Parrots and humans want. We feel. And when a parrot wants something, it will be sure to let you know.
2. Parrots use language impulsively, just like us.
Sometimes you’ve just gotta say something because it feels good, and parrots are no different. And what feels better than laughing? Parrots understand humor and are notorious pranksters. If you’re still skeptical about their intelligence, just wait until you’ve been bested by one.
A bored parrot in a home may reproduce the sound of a doorbell just to get the humans to run to the door and become confused when no one is there. Once the humans realize where the sound is coming from and make a big fuss about it, that parrot is now equipped with a powerful linguistic tool to impact its environment.
Parrots are also known to use language to seek revenge, and a scorned parrot is not to be trifled with. Some will spout “gimme a kiss” while they are surging with hormones and aggression during mating season, head feathers spiking and eyes pinning like the T-rex from Jurassic Park. They know asking for a kiss can get a human face within biting range. They don’t want a kiss. They want blood.
Would AI impulsively play a prank on humans? We’d better pray not.
What would an AI prank look like? “HAHA I just released all the nuclear codes!”
Humor, wit, revenge — these are some of the core elements of all the good storylines since humans were writing on cave walls. Parrots understand these concepts and will exercise comedic timing when they demonstrate them. AI is only funny in its failure to comprehend.
3. Parrots also use language instinctually just like we do.
Food, danger, mating, determining who’s a friend and who’s a foe: these factors sparked the evolution of language, and they continue to drive our use of it. Same for parrots.
Meeting our human needs with words is a part of “embodiment,” or having a body. Our bodies, after all, enable us to produce language, as well as to comprehend the world and make decisions that impact our survival.
AI doesn’t have a biological body, so it lacks instinct. It will only ever receive motivation from its input. But parrots and humans alike communicate because of our bodies’ instincts to stay alive and connect with one another. Our input comes from inside of us, as well as from the external world.
4. Parrots’ use of language is shaped by social implications, just like ours.
In the paper, the authors argue that LLM research should control for potential biases in AI model training data, because biased data can have negative consequences when the models are deployed in real-world scenarios. For example, if an AI model reproduces offensive or racist language, the machine is not seeking to communicate hatred or fear or anything at all; it is just outputting patterns based on its training data. Nevertheless, the outcome could be disastrous in high-stakes domains such as healthcare, finance, and law enforcement.
In contrast, parrots use language in anticipation of social consequences. When considering parrot vs. AI “comprehension,” machine learning and animal language scientist Stanley Bishop said, “The parrots have a really wonderful case to be made on their behalf because they aren’t just intelligent communicators — they’re profoundly socially conscious. Theirs, sometimes, I think exceeds our capacity to coordinate socially. That’s something that ChatGPT can’t do: maintain and coordinate its own concepts of the world. For parrots, all of their language learning is interspersed with social learning. A parrot that learns something negative is pushed out of the flock.”
5. Parrots have free will. And so do we.
If you ask a parrot to referentially identify an object, as Dr. Irene Pepperberg famously did with her African Grey subject Alex at the MIT Media Lab, there are many reasons you may not receive a correct answer. “Not knowing the right answer” is only one of them. For example, if you show a parrot a rock, asking “what is this?” and it responds “walnut,” it may have gotten its words mixed up. Or it may not be in the mood for labeling rocks and would prefer to eat a walnut.
ChatGPT doesn’t have moods or whims. Given a baseline set of conditions for functionality, it will always output a stochastic response upon input. ChatGPT is a mechanical servant to human demands. It doesn’t decide if it’s going to work or not each day.
When people learn I’m a parrot trainer, I often get the question back: How do you train a parrot? The answer is that you really can’t; parrots do whatever they want to do. You can only offer them safety in making choices and try to earn their trust and respect. I can explain to humans who have chosen the path of parrot companionship how to interact with their parrots in a way that honors their agency and free will — but that’s the most impact I can have. Sometimes it helps; other times, a parrot owner, in all their arrogance, will have inflicted so much psychological damage in attempting to subjugate a parrot to their will that the parrot will stop responding to all requests from that individual and instead choose nonstop violence. Just like humans.
Would AI choose violence? It’s not enough to pray that it won’t, which is what Bender and co. argue: continuing to build LLMs without consideration for the world outside the AI rat race could have catastrophic unintended consequences. They advocate for ethical safeguards to ensure typical violent human tendencies (such as hatred, bigotry, authoritarianism, etc.) don’t continue to seep into the training data or make violent outputs more likely.
To be stochastic is, ultimately, to be without free will. This is what made Sam Altman’s declaration as a stochastic being both curious and embarrassing. A stochastic output from an algorithm receiving linguistic direction is not a choice.
Parrots, on the other hand, make choices. If you hold out your hand and tell a parrot to “step up,” it very well may. But it may not, or it may take a chunk of flesh out of your hand and ruin your day. It may do something different. These behaviors aren’t programmed; they are decided through the process of using a brain to think, something many humans are dangerously close to forgetting how to do.
In her NY Mag piece on Bender, Elizabeth Weil cautions that “language — how it’s generated, what it means — is about to get very contentious. We’re already disoriented by the chatbots we’ve got. The technology that’s coming will be even more ubiquitous, powerful, and destabilizing.”
When it comes to the way we use language, rather than racing toward The Singularity to merge with machines, we should step back and reflect on what makes us different from them — and hold on for dear life. Once we understand the whole of those differences, we will see that we have a lot more in common with parrots than with any AI on the market.
Arikia Millikan is a writer, editor, strategist, community manager and content technologist based in Berlin, Germany. She is the founder of CTRL+X, a consultancy which provides editorial strategy and services to companies specializing in niche technological topics like cybersecurity, biotechnology, and mixed-reality media. She previously founded LadyBits, Scientopia, and was an editor at WIRED.