little more than a reading list (AI)
Apr. 7th, 2026 09:21 am
Here, two papers and two articles, all about AI, all of which I think are better than most:
Researchers at the Wharton School at the University of Pennsylvania are proposing an extended model of cognition as a way of measuring and studying “cognitive surrender,” the regular handoff of cognition to LLMs. It’s long, but if you’ve got the patience, it’s here. I didn’t see much in the way of surprises, but it does provide an interesting framework for analysis.
One not-emphasised takeaway is that, once again, the model of relying on human intervention to catch wrong LLM responses is shit. It’s not emphasised because that’s not the point of their paper – they’re demonstrating their model as an explanatory/conceptual framework – but it’s still there.
Scientific American writes about a study showing that AI outputs tend to sway users’ beliefs, even when users are told about biases built into the model. As many – including me – have said many times before, this is absolutely part of the point of AI, particularly but not just for people like Elon Musk. But it’s good to see numbers on it.
Combine study two with study one and you see why the tech broligarchs are so eager to turn thinking into something they sell you. They don’t want to make your life easier; they want to make you pay to think like them. Or, as Karl Bode put it a few months ago, “The problem with AI isn’t going to be Skynet. It’s going to be amoral extraction class assholes applying half-cooked automation at scale onto deeply broken sectors in exploitative ways in a country too corrupt to have functioning regulators.”
Finally, take a look at the narrowly-focused (to coding) but still worthwhile essay, “I used AI. It worked. I hated it.” It strikes me that much of what he hated about it is exactly what people who actually want to be managers like, which explains so very, very much, doesn’t it?
Posted via Solarbird{y|z|yz}, Collected.