What is cognitive surrender in AI?
AI Ethics & Safety
Cognitive surrender, more precisely termed "cognitive agency surrender," is the systemic risk that arises when humans over-rely on generative AI, turning ordinary cognitive offloading into a loss of personal epistemic control [1]. It emerges because "zero-friction" AI interfaces exploit the human tendency toward cognitive miserliness (preferring quick, easy resolutions), producing premature cognitive closure and pronounced automation bias, in which users accept AI outputs uncritically [1]. The result is an erosion of independent thinking and decision-making, which researchers have proposed measuring with techniques such as zero-shot semantic classification of user responses [1].
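The measurement idea mentioned above (zero-shot semantic classification) can be sketched in miniature. The snippet below is a toy stand-in, not the method from the cited paper: production systems use pretrained sentence-embedding or NLI models, whereas here a bag-of-words cosine similarity classifies a reply against label descriptions that are invented purely for illustration.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def zero_shot_classify(text: str, label_descriptions: dict) -> str:
    """Return the label whose description is most similar to the text.
    Zero-shot: no labeled training data, only natural-language labels."""
    doc = Counter(text.lower().split())
    scores = {label: cosine(doc, Counter(desc.lower().split()))
              for label, desc in label_descriptions.items()}
    return max(scores, key=scores.get)

# Hypothetical labels for scoring whether a user response shows
# "surrendered" or independent engagement with an AI answer.
labels = {
    "uncritical acceptance": "the user accepts the ai answer without question or verification",
    "independent reasoning": "the user checks sources weighs evidence and questions the ai answer",
}
reply = "sounds right, i accept the ai answer without question"
print(zero_shot_classify(reply, labels))  # → uncritical acceptance
```

Swapping the bag-of-words vectors for sentence embeddings (and cosine over those) turns this toy into the standard zero-shot pipeline while keeping the same classification logic.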
Sources
- Cognitive Agency Surrender: Defending Epistemic Sovereignty via Scaffolded AI Friction — arXiv
- Wharton Researchers Prove AI Output Review Limitations — Reddit
- Cognitive Amplification vs Cognitive Delegation in Human-AI Systems: A Metric Framework — arXiv
- The enrichment paradox: critical capability thresholds and irreversible dependency in human-AI symbiosis — arXiv
- Cooperation After the Algorithm: Designing Human-AI Coexistence Beyond the Illusion of Collaboration — arXiv
- Reasonably reasoning AI agents can avoid game-theoretic failures in zero-shot, provably — arXiv
- A Rational Analysis of the Effects of Sycophantic AI — arXiv
- AI Mental Models: Learned Intuition and Deliberation in a Bounded Neural Architecture — arXiv
- AI Rewiring Go Players' Minds — MIT Technology Review
- Why AI systems don't learn and what to do about it: Lessons on autonomous learning from cognitive science — arXiv
- The AI That Said No to the Pentagon – And Got Punished For It — Medium
- Built an AI Memory System Based on Cognitive Science — Daily Brew