The headline from the new episode by Mark Thompson and Steve Little is: Browser Wars Heat Up, What AI Can Learn from AOL, Anthropic’s Speedy New AI Model, Simple Path to Better Prompts.
For me, the gem was hidden in a short section on major improvements in transcribing old handwriting. There’s a link to a blog post, “Has Google Quietly Solved Two of AI’s Oldest Problems?”, by Canadian historian Mark Humphries.
He found that a Google AI Studio transcription model, tested on five documents (~1,000 words, 10% of the total sample), achieved a 1.7% character error rate (CER) and a 6.5% word error rate (WER)—roughly 1 in 50 characters incorrect, including punctuation and capitalization. Most errors involved punctuation and capitalization rather than actual words, and many were ambiguous cases. Excluding those ambiguous punctuation and capitalization errors, the rates dropped to 0.56% CER and 1.22% WER—approximately 1 in 200 characters wrong when counting only substantive word errors.
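CER and WER are simply edit distances normalized by the length of the reference text. As a rough illustration of how such figures are computed (this is not Humphries’s actual evaluation code, just a minimal sketch):

```python
def levenshtein(ref, hyp):
    # Classic dynamic-programming edit distance: minimum number of
    # single-element insertions, deletions, and substitutions.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(reference, hypothesis):
    # Character error rate: edit distance over reference length.
    return levenshtein(reference, hypothesis) / len(reference)

def wer(reference, hypothesis):
    # Word error rate: same idea, computed over word tokens.
    ref_words, hyp_words = reference.split(), hypothesis.split()
    return levenshtein(ref_words, hyp_words) / len(ref_words)

reference = "the quick brown fox"
hypothesis = "the quikc brown fox"   # one misspelled word
print(f"CER: {cer(reference, hypothesis):.3f}")  # 2 wrong chars / 19
print(f"WER: {wer(reference, hypothesis):.3f}")  # 1 wrong word / 4
```

A 1.7% CER on ~1,000 words of 18th-century handwriting means only a handful of character-level slips per page, which is why excluding punctuation and capitalization drops the rate so sharply.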
Further analysis revealed that what began as a test on the readability of old documents may now be uncovering, by accident, the beginnings of machines that can actually reason in abstract, symbolic ways about the world they perceive.
The model that produced those results was available for only a short while as a test. If it is generally as successful as described, it will surely become openly available and will stimulate even better performance in other services that rely on handwriting recognition technology, like FamilySearch Full Text Search.

