Blogs & Long-form Notes

All my writing in one place, from multi-part series to occasional stand-alone posts on ML, embedded systems, programming, and math. Use the search box below to jump to what you need.


Posts

The posts here can be read in two ways: individually as stand-alone articles, or as parts of a structured series. Each series follows a clear progression of ideas, while individual posts focus on specific topics without requiring prior context.

The Strange History of Teaching Computers to Read Anything at All

Part 1 · Architecting Intelligence: Building LLMs From First Principles · Encodings Foundations LLMs

Telegraphs, Morse code, ASCII, the encoding wars, Unicode, and UTF-8: a tour of how we went from “machines can only do arithmetic” to “machines can at least store text”, and why even that doesn’t give us meaning. This is where the need for tokenisation starts to become unavoidable.
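A quick taste of the gap this post circles around, sketched in Python: UTF-8 gives us a lossless, reversible mapping from characters to bytes, and nothing more. The bytes store the text; they don't understand it.

```python
# UTF-8 round-trip: characters in, bytes out, characters back.
text = "café"
encoded = text.encode("utf-8")    # "é" becomes the two bytes 0xC3 0xA9
print(list(encoded))              # [99, 97, 102, 195, 169]

decoded = encoded.decode("utf-8")
assert decoded == text            # round-trips perfectly, but the numbers
                                  # carry no meaning, only identity
```

Storage solved, meaning not: 195 and 169 say nothing about what "é" is, which is exactly why tokenisation and embeddings enter the story next.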

When Numbers Start to Mean Something

Part 2 · Architecting Intelligence: Building LLMs From First Principles · Geometry Similarity LLMs

We warm up with MNIST digits, Euclidean distance, and cosine similarity to see how numbers can carry structure and meaning in a vector space. Then we ask the slightly uncomfortable question: if this works so nicely for images, can we coax text into behaving this way too?
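The two measures the post warms up with can be sketched in a few lines of plain Python. The 4-element "images" below are made-up toy vectors, not real MNIST data; the point is only how the two measures disagree.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def euclidean_distance(a, b):
    """Straight-line distance between two points in the vector space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy 4-pixel "images": v2 is just v1 at double brightness.
v1 = [0.1, 0.9, 0.8, 0.0]
v2 = [0.2, 1.8, 1.6, 0.0]

print(cosine_similarity(v1, v2))   # ≈ 1.0: same direction, so "similar"
print(euclidean_distance(v1, v2))  # clearly nonzero: distance says they differ
```

The disagreement is the useful part: cosine similarity ignores magnitude and keeps only direction, which is why it becomes the go-to measure once text is coaxed into a vector space.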
