
The Cognitive Trap
Part One argued that two AI optimizers on opposite sides of a metric will detach it from reality faster than any governance structure can catch up. Readers pushed back: we have humans in the loop. This piece explains why that is the problem.

Automation bias is forty years of evidence, not theory. Radiologists defer to confidently wrong AI about half the time. Office workers apply less critical thinking the more they trust the system. MIT's EEG scans of ChatGPT users showed the lowest neural engagement of any group studied. Your campaign approver is not exercising judgment at 4:47 PM on a Friday; they are clicking a checkbox.

Worse, the research identifies a moderate-knowledge trap: an inverted-U curve where the most dangerous zone belongs to people who know enough about AI to use it confidently but not enough to distrust it appropriately. That describes approximately every senior marketing executive making AI deployment decisions today. The seniority that used to be a check has become the accelerant.

Aviation teaches the same lesson, and the Talmudic principle of the shomer makes it sharper: responsibility cannot be delegated to a procedure. The guardian cannot say later that the dog was watching the chickens and the dashboard said the chickens were fine.

The signature is real, but the cognitive act of approval has been outsourced. The human is there to absorb blame while being structurally prevented from doing the work that would have stopped the failure. This is not oversight. It is a sacrifice mechanism.

Part Three will lay out what an override layer that actually works looks like.