schedule adjustment notice

Hello, worldbuilders! Just a quick note to let you know: it occurred to me this morning that my spending a bunch of time compiling weeknotes which don't get emailed to you, only to then spend a shorter amount of time two days later writing a short note alerting
week 27 / 2025: on models and literacies

"All models are wrong, but some models are useful"—so the saying goes. But how are they wrong, and how are they useful? And what does that mean for thinking about futures?