Rebuilding "What Is Code?"
I'm rebuilding Paul Ford's landmark 2015 Bloomberg essay
"What Is Code?"
— the 38,000-word interactive piece that mass-explained software to a business audience and remains one of the best things ever published on the web.
Bloomberg open-sourced the original codebase (jQuery, D3 v3, Backbone.js, Grunt),
and I'm doing a faithful modernization using Astro islands, GSAP ScrollTrigger, D3 v7, and Web Components
— preserving the original's 18 interactive modules (circuit simulators, keyboard visualizers, DOM explorers, animated guide characters)
while replacing every piece of 2015-era tooling with its contemporary equivalent.
The interesting addition: an "explorable source layer" that makes the rebuild recursive.
Readers can pop open any interactive, see the annotated source behind it, and tweak parameters in real time
— learning what code is by manipulating the code that renders the essay they're reading.
It turns a faithful homage into a new pedagogical argument about code itself.
It's a portfolio piece, a community tribute, and a reusable framework for interactive longform writing.
Two Courses, One Prototype
I'm running two courses in parallel, both aimed at the same target: a working
Scholion prototype — a web app that ingests scientific papers,
extracts claim-dependency graphs (Toulmin structure: claims, warrants, backings, and their relationships),
and lets you navigate them interactively.
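A claim-dependency graph of this shape can be sketched with plain dataclasses. This is a minimal illustration, assuming a simple adjacency structure; the node kinds follow the Toulmin terms named above, but the class and field names are mine, not Scholion's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class NodeKind(Enum):
    CLAIM = "claim"
    WARRANT = "warrant"
    BACKING = "backing"


@dataclass
class Node:
    id: str
    kind: NodeKind
    text: str


@dataclass
class ClaimGraph:
    nodes: dict = field(default_factory=dict)  # id -> Node
    edges: dict = field(default_factory=dict)  # supporter id -> set of supported ids

    def add(self, node: Node) -> None:
        self.nodes[node.id] = node
        self.edges.setdefault(node.id, set())

    def link(self, supporter: str, supported: str) -> None:
        # e.g. a warrant supports a claim, a backing supports a warrant
        self.edges[supporter].add(supported)

    def supporters_of(self, node_id: str) -> list:
        return [s for s, targets in self.edges.items() if node_id in targets]


# Toy paper fragment: one claim, its warrant, and the warrant's backing.
g = ClaimGraph()
g.add(Node("c1", NodeKind.CLAIM, "Model X outperforms baseline Y"))
g.add(Node("w1", NodeKind.WARRANT, "Higher F1 on the shared benchmark implies better performance"))
g.add(Node("b1", NodeKind.BACKING, "Table 3 reports F1 scores"))
g.link("w1", "c1")
g.link("b1", "w1")
```

Navigation then falls out of the edge direction: walking `supporters_of` from a claim surfaces its warrants, and from a warrant its backings.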
How to Solve It with Code
provides the application stack: FastHTML/HTMX for the UI, a PDF ingestion pipeline, LLM-prompted extraction,
and Pólya-based problem decomposition as a working method.
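The LLM-prompted extraction step might look like the sketch below, with a stubbed `call_llm` standing in for a real model API so it runs offline; the prompt wording and JSON schema are assumptions for illustration, not the pipeline's actual contract.

```python
import json


def build_prompt(passage: str) -> str:
    # Illustrative prompt; the real pipeline's wording is an assumption.
    return (
        "Extract the Toulmin structure from the passage below. "
        'Return JSON with keys "claims", "warrants", "backings", and '
        '"links" (pairs of [supporter_id, supported_id]).\n\n'
        "Passage:\n" + passage
    )


def call_llm(prompt: str) -> str:
    # Stub standing in for an actual LLM API call; returns canned JSON
    # so the sketch runs without network access.
    return json.dumps({
        "claims": [{"id": "c1", "text": "Attention improves translation quality"}],
        "warrants": [{"id": "w1", "text": "BLEU gains on WMT imply quality gains"}],
        "backings": [],
        "links": [["w1", "c1"]],
    })


def extract_structure(passage: str) -> dict:
    raw = call_llm(build_prompt(passage))
    # In practice: validate the JSON and retry on malformed LLM output.
    return json.loads(raw)


graph = extract_structure("Attention-based models outperform RNNs on WMT14.")
```

The parsed dict maps directly onto the claim-dependency graph the app navigates, which is why a JSON-shaped prompt contract is the natural interface here.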
Practical Deep Learning for Coders
provides the model-layer understanding for when prompting hits its ceiling — eventually fine-tuning a smaller model
on labeled data generated by the LLM extraction pipeline (the LLM-as-labeler pattern).
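The LLM-as-labeler pattern can be sketched as: the expensive model labels raw sentences once, and the resulting pairs become train/validation data for fine-tuning the smaller model. Here `llm_label` is a keyword stub standing in for the extraction LLM, and the label set and split ratio are illustrative.

```python
import random


def llm_label(sentence: str) -> str:
    # Stub for the expensive LLM call; the real labeler would be the
    # extraction pipeline's model. "claim"/"other" is an assumed label set.
    return "claim" if "we show" in sentence.lower() else "other"


unlabeled = [
    "We show that pruning preserves accuracy.",
    "The dataset contains 10,000 abstracts.",
    "We show a 2x speedup on GPU inference.",
    "Related work is discussed in Section 2.",
]

# Large model produces labels once...
dataset = [{"text": s, "label": llm_label(s)} for s in unlabeled]

# ...and the labeled pairs become splits for fine-tuning a smaller model.
random.seed(0)
random.shuffle(dataset)
split = int(0.75 * len(dataset))
train, valid = dataset[:split], dataset[split:]
```

The economics are the point: you pay for the big model once per example at labeling time, then serve the cheap fine-tuned model at inference time.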
For fast.ai I'm running a speed-run: prioritize the NLP lesson, collaborative filtering (for embedding intuition),
tabular/random forest (for claim classification baselines), and Part 2's transformer and attention content.
Computer vision lectures get skimmed at 2x for pedagogical patterns; the deployment and ethics lectures are skipped
(deployment is covered by SolveIt, and I've engaged with ethics elsewhere).
The daily schedule runs eight hours in four blocks. Mornings: three hours on courses —
30 minutes of paper reading (which doubles as Scholion test data),
90 minutes on How to Solve It, 60 minutes on fast.ai.
The 90:60 split reflects that SolveIt produces the working artifact while fast.ai builds understanding that pays off later.
Midday: a one-hour buffer for reading, integration work, or overflow.
Afternoons: two hours on other projects (Notice, thbrdy.dev), two hours on job search.
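The schedule's arithmetic can be sanity-checked in a few lines; the block names are mine, the numbers come from the plan above.

```python
# Daily blocks in hours, as described above.
blocks = {"courses": 3, "buffer": 1, "other projects": 2, "job search": 2}
assert sum(blocks.values()) == 8  # eight hours in four blocks

# Morning course split in minutes: papers, SolveIt, fast.ai.
morning = {"papers": 30, "solveit": 90, "fastai": 60}
assert sum(morning.values()) == 3 * 60  # three hours on courses
```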
The Scholion prototype built through the courses is also the strongest portfolio artifact —
these aren't competing priorities; they converge.