CT No.85: Power outage
The words we publish and hold up for peer review remain the best representation of our brains at work in the digital world. A published paper is the best way to look closely at the foundational assumptions of LLMs. And those assumptions begin with pop culture.
Transformers take static vector embeddings, which assign a single fixed vector to every token, and expand them with context, weighing each word against every other word in the sentence nearly simultaneously. But who cares, let's listen to a pop song!
How to understand tokens and vector embeddings, for word people.
Even in the face of "black box" algorithms, the history of artificial intelligence—natural language processing, more specifically—has left plenty of clues. While we can't understand the full equation, we can see how building blocks create common patterns in how current algorithms process language.
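If you'd rather see one of those building blocks than read about it, here is a minimal Python sketch. It is illustrative only: the six-word vocabulary is made up, random numbers stand in for learned embedding values, and the single self-attention step is simplified (no learned query, key, or value projections). It shows the difference between a static embedding lookup, where a token always gets the same vector, and a contextual step, where every word's vector absorbs information from every other word in the sentence.

```python
import numpy as np

# Illustrative only: a tiny vocabulary and a made-up embedding table.
# Real models learn these vectors from data; here they are random.
vocab = ["the", "band", "plays", "a", "pop", "song"]
token_ids = {word: i for i, word in enumerate(vocab)}

rng = np.random.default_rng(0)
dim = 8  # real models use hundreds or thousands of dimensions
embedding_table = rng.normal(size=(len(vocab), dim))

# Static embeddings: each token maps to the same vector every time,
# no matter which words surround it.
sentence = ["the", "band", "plays", "a", "pop", "song"]
static_vectors = np.stack([embedding_table[token_ids[w]] for w in sentence])

def self_attention(x):
    # Each word scores its similarity to every other word...
    scores = x @ x.T / np.sqrt(x.shape[1])
    # ...the scores become weights (softmax over each row)...
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    # ...and each word's new vector is a weighted blend of all the vectors.
    return weights @ x

# Contextual vectors: same tokens, but the values now reflect the sentence.
contextual_vectors = self_attention(static_vectors)

print(static_vectors.shape, contextual_vectors.shape)  # both (6, 8)
```

Run it and the shapes stay the same, but the numbers change: that shift from a fixed lookup to a context-aware blend is the pattern to keep in mind when the jargon starts flying.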