This essay was originally published on April 27, 2023, with the email subject line "CT No.165: Methodologies for evaluating long-term content impact."
Several years ago, I experienced one of the distinct privileges of working for a sexy Silicon Valley startup: I learned what it feels like to sit in a $20,000 office chair. Situated in one of many huddle rooms, this lounge chair was clearly meant to inspire amazing one-on-one jam sessions. I thought this chair might be a good fit in my living room, so I found the manufacturer and googled the price, and lo and behold: $20k. I was horrified. While it was comfortable enough, it didn’t feel much different from the other chairs I’d sat in before, the ones that didn’t cost tens of thousands of dollars.
My employer’s offices were filled with these $20k chairs, and I immediately wondered, “Who approved this specific chair as a necessary use of the venture capital poured into this company? What even is $20,000 times however many chairs there are?”
Assessing the value of my office furnishings was a far cry from where I had been working a few months before, when I was a PhD student in evaluation and education technologies. There, I’d been seated in a dark, cave-like office with old desks and creaky chairs, hidden away from the rest of the department.
And while I’d left the musty furniture behind for a seat at the other end of the spectrum, my chosen field of PhD research had more relevance in my new role than I initially thought. Sitting atop those premium office furnishings, I realized that my work in the typically academic field of evaluation could benefit this shiny, bright workplace, especially given the chair’s price tag.
If you’ve worked in government, nonprofits or academia, you’ve likely heard of or participated in evaluation studies. Evaluation methodologies determine the merit or success of a particular program, department, or initiative based on predetermined criteria for success. While not legally binding, the process of evaluation ensures that organizational resources produce their intended outcomes.
I decided to introduce evaluation to various teams at my company. Knowing that people dislike being held accountable when it threatens a comfortable status quo, I was prepared for a bumpy road. But the evaluation idea flew with many departments: my colleagues liked being held accountable and wanted to know they were doing good work that contributed to the company’s success and supported its mission.
Although a startup is technically accountable to the venture capital firms doling out millions of dollars, a private company’s responsibility is to valuation and investors first … even when its activities directly impact the public. (In fact, U.S. corporations are legally accountable to their shareholders before anyone or anything else, including their employees and customers.)
But when dealing with grants of, say, tens of thousands of dollars, nonprofit and other non-governmental organizations (NGOs) are held far more accountable. Because they are tax-exempt, they're scrutinized to ensure they follow the conditions for that exemption. Unlike chancy and expensive startup ventures, nonprofit programs are tightly monitored with rigorous evaluation programs. A hard-won $20,000 grant isn’t going to purchase a single office chair.*
Like nonprofits, private companies take money (and lots more of it at a time) and promise to create some impact. They promise to make a change. They get tax breaks, too. Evaluation determines if the activities of an entity match up to the outcomes and impacts promised. More broadly, it uses the rigorous scientific methods favored by empirically minded businesses, but with the end goal of determining goodness or merit.
Increased accountability might be frightening to some, but evaluation should also have a place in private companies. Whatever regulation exists isn’t cutting it, especially considering recent rounds of mass layoffs and executive-level scandals that leave well-meaning employees without recourse.
Especially for content teams, who often struggle to prove their value to a company’s bottom line, evaluation can be a life raft: another tool for creative teams who want to demonstrate their impact on the company’s performance. After all, if brands are going to invest in their mission, and employees are going to be vigorously screened for values-fit, the formal process of evaluation can keep us all accountable.
*Editor's note: I recently discovered that the highest honor in American letters, the Pulitzer Prize, is an award of only $15,000. Think about $20k startup chairs, think about the folks who received all that venture capital, and then consider the actual work of winning a Pulitzer Prize. —DC
Introducing evaluation: An approach to understanding long-term impact
Evaluation encompasses much more than heuristics. It handles abstraction and complexity more often and is also designed for the long term (aka, not quarterly or even annual reporting). Specifically, while UX heuristics can cover the specific user interface of a software product, evaluation would determine whether the development and dissemination of that software had the intended outcome and impact on its customer base. Evaluation also highlights externalities, or unintended consequences of a project, whether they impact an ecosystem or an audience.
Although the field has existed in some form since the 1930s, advocates of organized evaluation have focused their energy on finding a place in academia (to train other evaluators) as well as on resolving issues of professionalism and standards for practice. Only one well-known evaluator, Michael Scriven, has consistently asserted that evaluation should reach other sectors like private industry. No time like the present to heed the call.
So what does evaluation have to offer content teams? A good starting place is developing evaluation criteria, since they’re most similar to those heuristics you’ve probably heard of; from there, you can move into another skill area called logic modeling. Let’s look at each in turn.
Accept all cookies: How to develop evaluation criteria
Developing evaluation criteria is a good first step because it can be done relatively quickly. If you already have heuristics or governance for your team’s outputs, like style guides, brand guidelines, and so on, start here. When was the last time you looked at them? Did you inherit them, and do you agree with them now? What assumptions went into their creation? How consistently are you using them? And how do they affect the business, in your estimation?