This post originally appeared in the February 13, 2020 issue of The Content Technologist with the email subject line "Can AI write as well as a human?" and a review of AI-powered style guide software Qordoba.
I’ve written about human vs. computer writers before, and I still stand by that original essay! But since August I’ve learned so much more about computer-assisted content creation.
In the increasingly opaque tech business landscape, we use buzzwords to obscure processes, either because we don’t want to share them (we’re frightened of stinky stealers) or because we don’t entirely know what’s going on and we’re on deadline. “AI” is one of those buzzwords that you’ll see in marketing copy. IBM really got you comfortable with it, with Watson on Jeopardy! sketching out a future where our computer overlords came to take our jobs.
But if you haven’t heard it before, cement this correction into your persnickety creator’s brain: what we call “artificial intelligence” or “AI” is a generalized term for machine learning and natural language processing. It is neither artificial (those words and phrases were learned somewhere) nor is it all that intelligent. It’s certainly not worth an investment that in any way replaces a real person.
Here’s an example from AI Writer, a tool that provides machine-led content generation:
The only reason your company should use content marketing in any way is that you can use it to achieve these goals. Therefore, you must always define your business goals and then use them to develop marketing strategies for content. Too often, companies consider content marketing as an independent activity that is done on their own. Instead, first determine the goal based on the value you want to achieve for the company (sales, revenues, potential customers, etc.).
The above paragraph almost sounds like a sales presentation from a vendor… but a vendor you’d never hire because they have no idea what they’re talking about. It reads like a D-grade high school student attempting to write a 1,000-word essay. The only reason it’s not failing is that it’s technically mostly correct — the grammar is present, but the ideas are not. No ideas = no content.
In the words of Macbeth, “It is a tale, told by an idiot, full of sound and fury, signifying nothing.” But without the sound and the fury.
I have been wrong before, and I will be wrong again.
Look, I believed for wayyyy too long that a computer wrote the joke Olive Garden script. I told people about it, including some direct reports (sorry) and I am pretty sure I linked to it in this newsletter. I’d seen it on the socials, largely divorced from any useful context, and I shared it. I didn’t read any articles about it because why would I read an article about this stupid diversion when I had better things to do? So I propagated the idea that a computer could write something that funny. I was wrong, dead wrong about AI’s capabilities, like so many of us are wrong about so many things.
Y’all, a person wrote the Olive Garden script. A computer did not do it. In fact, most of what we consider AI is heavily assisted — if not fully created — by humans.
If you’re interested in the limits and capabilities of AI, I highly recommend Janelle Shane’s You Look Like a Thing and I Love You, which will clarify that a robot is not coming for your creative job anytime soon. Most writing about machine learning algorithms and other “AI”-related topics is written by programmers, which can be difficult to read if you haven’t studied computer science extensively. (I have a very hard time reading pretty much anything on GitHub or anything for/by programmers because I’m not familiar with the base vocabulary, assumptions and shortcuts.) But Shane is a great writer, and YLLATAILU is an easy-to-understand guide to how machine learning works.
Shane also includes an entire chapter about how most bots or “deepfakes” or all of the scary AI things are likely just low-paid humans, slightly assisted by machines. Read it, and quell your fears… or get frightened because we’re living in the kind of economy where we hire people to act like computers.
When machine learning algorithms produce content, that content is absolute garbage. Computers (and low-paid humans) can generate words aplenty, but they don’t create meaning.
So if you’re looking for an automated solution for better website content, please keep in mind: quick, cheap content is never good, and it likely won’t make you more visible in search results. The advanced machine learning that powers search engine algorithms in 2020 is far better at reading good content and identifying the specific nouns and verbs unique to an industry.
When can you use AI-generated content?
That said, computers can do amazing things in parsing out existing language or finding stories in single groups of numbers. Machine learning algorithms are fantastic for solving narrow, one-note, rules-based problems. Machine-assisted writing is particularly good at identifying stories in large data sets: “more than,” “less than,” “best since.” But machine-assisted writing can’t tell you what those numbers mean or what you should do based on those simple data stories.
But computers are like the rookie screenwriter who only follows the basic rising action-climax-resolution outline they read in a book on plot. At best they can create an A-plot based on existing character patterns, but they can’t even begin to understand motivation or human behavior. (And you can forget about B-plots or cinematic universes.) Even though computers are great at finding patterns, the stories told by AI aren’t stories at all.
Reading writing from a machine learning algorithm is like learning to speak Spanish from Duolingo. It can get you maybe 5-10% of the way there, establishing a common vocabulary and helping you haggle for a new caftan on your Mexican beach vacation. But without speaking and practicing with a human, it’s unlikely you’ll be able to communicate with anyone at length, in any dialect, or show any mastery of context. (FYI, I’m “learning” Spanish with Duolingo right now in preparation for my Mexican beach vacation and yes, I’d like to purchase a caftan. But I’m not telling anyone I speak any sort of Spanish at all.)
Computer generated content is great for:
- Telling short, basic, one-plot stories based on numbers. The Washington Post and Reuters use computer-generated content to cover local elections and high school sports.
- Inserting phrases — already constructed by humans — into an appropriate context, followed by human review (here’s an old Wired story on WaPo’s process)
- Writing brief, redundant ad or email subject copy. (Computers can take this job because I certainly don’t want it.)
- Initial insights based on data from monthly reports. (Again, take this job, although most of what machine-assisted copy does is reiterate in language what you can see by reading the graph.)
- Calling out historical data points or references for comparison in a database
- Filler copy until you can hire a writer
- Creating unintentional delight in turns of phrase that sound fun
- Using punctuation correctly
- Copy you don’t actually expect people to read
Computer-assisted content is not good at:
- Constructing intentionally delightful sentences, paragraphs or stories
- Telling stories that reflect the complexity of natural phenomena, aka anything that humans do.
- Recommending complex (i.e., more than two-step) actions based on data.
- Precision of language
- Concise writing
- Character, motivation or nuance
- Providing broader context and understanding cultural references or idioms
So should you invest in AI-written content? Can you guess what I’m going to say? If so, then you’re probably a human.