CT No.192: Using quality behavioral data for stronger editorial content
Because last week's issue didn't actually send.
This newsletter went out a little late to give new subscribers from today's What's Next event, presented by the Minnesota Interactive Marketing Association (MIMA), a taste of what we're all about. Welcome, new subscribers! A couple of weeks ago I spoke with Mark...
Creative Capital's website is a masterclass in how to maintain a database. Here, we dissect the site's exemplary information architecture strategy and explore the role it has played in revolutionizing the grant application process for artists and organizations across the contemporary arts world.
Great websites still exist. Here's a breakdown of a nonprofit grant-making arts organization's digital home and source of truth.
Why we should consider the form and not the "content"
Publisher Deborah Carver steps into the ring to weigh in on recent media criticism of the word "content," suggesting that our attention belongs on the budgets allocated to content production and on the possibilities of generating "form" with AI.
The words we publish and hold up for peer review remain the best representation of our brains at work in the digital world. A published paper is the best way to look closely at the foundational assumptions of LLMs. And those begin with pop culture.
Transformers take static vector embeddings, which assign a single fixed vector to every token, and make them contextual, recomputing each word's representation against every other word in the sentence nearly simultaneously. But who cares, let's listen to a pop song!
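For the code-curious among us word people, here's a minimal sketch of that idea in Python, with a toy five-word vocabulary and random weights standing in for everything a real model learns. Every name in it (embedding_table, W_q, and so on) is illustrative, not any particular library's API.

```python
import numpy as np

np.random.seed(0)

# A static embedding table: every token maps to one fixed vector,
# no matter what sentence it appears in.
vocab = ["the", "band", "rocks", "on", "stage"]
d_model = 4
embedding_table = np.random.randn(len(vocab), d_model)

sentence = ["the", "band", "rocks"]
ids = [vocab.index(tok) for tok in sentence]
static_vectors = embedding_table[ids]  # shape: (3, d_model)

# Single-head self-attention: each token's vector is rebuilt as a
# weighted mix of every token's vector, all computed in parallel.
W_q = np.random.randn(d_model, d_model)  # "query" projection
W_k = np.random.randn(d_model, d_model)  # "key" projection
W_v = np.random.randn(d_model, d_model)  # "value" projection

Q, K, V = static_vectors @ W_q, static_vectors @ W_k, static_vectors @ W_v

scores = Q @ K.T / np.sqrt(d_model)  # token-to-token affinities
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax rows
contextual_vectors = weights @ V  # "rocks" now carries a little "band" in it

print(weights.round(2))  # each row sums to 1: how much each token attends to the others
```

With trained weights instead of random ones, those attention rows are what let the same token mean different things in different sentences.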
How to understand tokens and vector embeddings, for word people.
Even in the face of "black box" algorithms, the history of artificial intelligence, and of natural language processing more specifically, has left plenty of clues. While we can't understand the full equation, we can see how its building blocks create common patterns in how current algorithms process language.
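To make one of those building blocks concrete, here's a hedged sketch of a greedy longest-match subword tokenizer, in the spirit of schemes like BPE and WordPiece. The vocabulary is made up for illustration; real tokenizers learn theirs from enormous corpora.

```python
# A toy greedy longest-match tokenizer. The subword vocabulary is
# hypothetical; production tokenizers (BPE, WordPiece) learn theirs from data.
SUBWORDS = {"un", "break", "able", "ing", "read"}

def tokenize(word: str) -> list[str]:
    """Split a word into the longest known subwords, left to right."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest slice first
            if word[i:j] in SUBWORDS:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # no match: fall back to the single character
            i += 1
    return tokens

print(tokenize("unbreakable"))  # ['un', 'break', 'able']
print(tokenize("unreadable"))   # ['un', 'read', 'able']
```

Each of those tokens is what gets a row in an embedding table like the one sketched above: the "black box" starts with steps this legible.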