Wednesday, 9 April 2025

AI: The Human Advantage: Cooperation > Intelligence


"As someone who follows AI developments with interest (though I’m not a technical expert), I had an insight about AI safety that might be worth considering: we may be overlooking something fundamental about what makes humans special and what might make AI risky.

"The Human Advantage: Cooperation > Intelligence
  • Humans dominate not because we’re individually smartest, but because we cooperate at unprecedented scales [It’s called division, or specialisation, of labour. ... To somehow prevent AI from practising division of labour would be to make AI inefficient.]
  • Our evolutionary advantage is social coordination, not just raw processing power
  • This suggests AI alignment should focus on cooperation capabilities, not just intelligence alignment
"The Hidden Risk: AI-to-AI Coordination
  • The real danger may not be a single superintelligent AI, but multiple AI systems coordinating without human oversight
  • AIs cooperating with each other could potentially bypass human control mechanisms
  • This could represent a blind spot in current safety approaches that focus on individual systems
"A Possible Solution: Social Technologies for AI
  • We could develop “social technologies” for AI – equivalents of the norms, values, institutions, and incentive systems that enable human society – ones that promote and prioritise humans
  • Example: Design AI systems with deeply embedded preferences for human interaction over AI interaction; or with small, unpredictable variations in how they interpret instructions from other AIs but not from humans
  • This creates a natural dependency on human mediation for complex coordination, similar to how translation challenges keep diplomats relevant"
~ Jack Skidmore from his email to Tyler Cowen on Coordination & AI Safety
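The second example in the email – AIs that interpret instructions from other AIs with small, unpredictable variations, but interpret human instructions faithfully – can be sketched as a toy simulation. Everything below is my own illustrative invention (function names, the token-level perturbation scheme, the probabilities); it is not a real safety mechanism, just a way to see why repeated AI-to-AI relay would degrade while human mediation stays reliable:

```python
import random

def interpret(instruction: str, sender: str, rng: random.Random) -> str:
    """Return the instruction as a receiving AI understands it.

    Human instructions pass through unchanged; instructions from other
    AIs get small random perturbations (tokens dropped or garbled).
    """
    if sender == "human":
        return instruction  # faithful interpretation of human input
    out = []
    for tok in instruction.split():
        r = rng.random()
        if r < 0.1:
            continue               # drop the token entirely
        if r < 0.2:
            out.append(tok[::-1])  # garble it (reversed as a stand-in)
        else:
            out.append(tok)
    return " ".join(out)

def relay(instruction: str, hops: int, sender: str, seed: int = 0) -> str:
    """Pass an instruction along a chain of AI agents."""
    rng = random.Random(seed)
    msg = instruction
    for _ in range(hops):
        msg = interpret(msg, sender, rng)
        sender = "ai"  # after the first hop, every sender is an AI
    return msg

order = "shut down reactor three now"
print(relay(order, hops=1, sender="human"))  # single human->AI hop: intact
print(relay(order, hops=5, sender="ai"))     # AI-to-AI chain: degrades
```

Under this toy scheme a human instruction survives any single hop intact, while long AI-to-AI chains accumulate noise – which is the claimed "natural dependency on human mediation" in miniature.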
