CT No.214: A fresh coat of paint
Check out our new homepage!
Hello, and welcome, new subscribers! It's time to talk style, a phase of The Content Technologist approach that addresses the many dimensions of finesse we can apply to content. To minimize word count, this issue focuses on the textual elements of brand and content style, but one day...
No. Don't get stressed about Google. Ever.
Content Pros Update is a free, monthly recap newsletter of all goings-on from The Content Technologist. It is distributed on Substack, Medium, LinkedIn Newsletters, and to monthly subscribers of The Content Technologist on roughly the third Tuesday of every month. It's been a while since we sent our...
In this issue:

* A brief reflection on style and its role in content
* Links of the week: Content management and shoddy tech journalism
* Did you read? You should do that weird content project you've been thinking about

Great content, structured properly, fosters business growth. Make your content last...
Launching June 30: Your content IS your marketing.
The words we publish and hold up for peer review remain the best representation of our brains at work in the digital world. A published paper is the best way to look closely at the foundational assumptions of LLMs. And those begin with pop culture.
Transformers take static vector embeddings, which assign a single fixed vector to every token, and expand them into contextual representations, processing each word against every other word in the sentence nearly simultaneously. But who cares, let's listen to a pop song!
How to understand tokens and vector embeddings, for word people.
Even in the face of "black box" algorithms, the history of artificial intelligence, and of natural language processing more specifically, has left plenty of clues. While we can't understand the full equation, we can see how its building blocks create common patterns in the way current algorithms process language.
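For readers who'd rather poke at the idea than just read about it: here's a minimal numpy sketch of what the excerpt above describes, with toy dimensions and random weights standing in for anything learned. The names d_model, W_q, W_k, and W_v follow common transformer notation; none of them come from the article itself. A static embedding table hands the word "the" the same vector every time it appears; one self-attention pass, plus position information, turns those fixed vectors into context-dependent ones.

```python
import numpy as np

np.random.seed(0)

# A static embedding table assigns ONE fixed vector to each vocabulary
# entry, no matter where or how the word is used.
vocab = ["the", "band", "played", "hit"]
d_model = 4
embedding_table = np.random.randn(len(vocab), d_model)

sentence = ["the", "band", "played", "the", "hit"]
ids = [vocab.index(tok) for tok in sentence]
static = embedding_table[ids]             # both "the"s get the identical row
print(np.allclose(static[0], static[3]))  # True: same word, same static vector

# Transformers add position information, then run self-attention:
# every token's vector is recomputed as a weighted mix of all the
# others, in one matrix step ("nearly simultaneously").
positions = np.random.randn(len(sentence), d_model) * 0.1  # stand-in positional encodings
x = static + positions

W_q = np.random.randn(d_model, d_model)   # random stand-ins for learned weights
W_k = np.random.randn(d_model, d_model)
W_v = np.random.randn(d_model, d_model)

Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / np.sqrt(d_model)       # how strongly each token attends to each other token
scores -= scores.max(axis=-1, keepdims=True)           # numerical stability
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
contextual = weights @ V                  # context-dependent vectors

# The two "the" tokens started with identical static vectors but end up
# with different contextual ones: position plus attention did that.
print(np.allclose(contextual[0], contextual[3]))  # False
```

That divergence between the two "the" vectors is the whole trick the excerpt is pointing at: the embedding table knows only words, but attention lets each occurrence absorb its sentence.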