AI & Algorithms

71 posts

Perception, intention, and poetic license: What is the attention mechanism that powers LLMs?

The words we publish and hold up for peer review remain the best representation of our brains at work in the digital world. A published paper is the best way to look closely at the foundational assumptions of LLMs. And those begin with pop culture.

How does AI make sense of language?

A late 2025 series breaking down the technologies that underpin generative AI: large language models, natural language understanding, and information retrieval. Featuring absolutely no math, minimal technical jargon, and lots of pop culture.

Disambiguation, sliding doors, hallucinations, and madeleines: How transformers process, clarify, and produce language

Transformers take static vector embeddings, which assign a single value to every token, and expand their context, processing each word against the context of every other word in the sentence almost simultaneously. But who cares, let's listen to a pop song!

Tokens and vector embeddings: The first steps in calculating semantics for LLMs

The first step in natural language processing is creating word-numbers, represented as points in space. If this confuses you, you're not alone. Keep reading.

When words and math collide: Old-school, tried-and-true language processing algorithms

Even in the face of "black box" algorithms, the history of artificial intelligence—natural language processing, more specifically—has left plenty of clues.

The front door to discovery: How natural language processing is the key to visibility in LLMs

[Image: a house with a white picket fence and a leafy fall lawn]

To put it another way: optimizing with GEO reverse-engineering tactics is like entering a house through a small attic window. GEO ignores that the research frameworks literally embedded in the model's outputs are the keys to the front door.
