Credits
The Island Problem is a seamless hodgepodge of ideas. We want to thank all who contributed at least one hedgehog-worth of motivation.
Inspirations
- Dan Hendrycks: His evolutionary model for a many-AGIs landscape provided the critical foundation for The Island Problem's driving force, based on competitive pressures and natural selection.
- Paper: Natural Selection Favors AIs over Humans
- Video lecture: Available on YouTube
- Max Tegmark: His outreach has been a driving force.
- Rob Miles: His YouTube video about maximizers helped make this concept obviously important. Also, he is just... an inspiration to all.
- YouTube Channel: Rob Miles on AI Safety
- Key concept: How maximizers inevitably push toward extreme outcomes
- Eliezer Yudkowsky: Back when GPT-4 was still new, his appearances on Bankless and Lex Fridman were the terrible flame that started me (Travis, the author) on this brain-melting journey to explain AI risk in a new way. Before this, I was blissfully unaware of MIRI and LessWrong.
- Keep the Future Human: The "Autonomy, Generality, Intelligence" structure from Keep the Future Human aligns closely with our framework. Their emphasis on "Autonomous" rather than "Artificial" AGI is both more accurate and more actionable for understanding catastrophic outcomes.
- David Shapiro: His calls for scientifically-grounded AI safety communication, and his fascinating "anti-doomer" shift in 2024, inspired many parts of this project.
- The Hypotheses page, which lays out testable hypotheses.
- The multi-agent logic of Resources, which describes why multiple AGIs would either compete with each other or cooperate only among themselves, excluding humans. Either outcome would be catastrophic for us. This was inspired by Shapiro's belief that the default outcome for superintelligent AIs is that they will figure out how to cooperate.
- Liron Shapira: His push for the community to develop "intellidynamics" (like thermodynamics, but for intelligence) helped inspire the option maximizer principle in TIP.
- Robin Hanson: The option maximizers in TIP are similar to Robin Hanson's Grabby Aliens. The AGIs in the Island Problem are grabby aliens developing here on Earth.
- Guillaume Verdon: His philosophy of free energy maximization inspired the opposing idea of the concentration gradient in the Island Problem. Likewise, his idea of free competition between multiple agents inspired the idea of catastrophic optimization through competition.
Similar Ideas
These people, works, and ideas were not direct inspirations, but are related, if you are looking for ideas similar to the Island Problem. Know any other good ones? Email us.
- Michael Nielsen: His essay ASI existential risk: reconsidering alignment as a goal explores how alignment represents a small, complicated target within the much larger space of scientific truth — insights that informed our "island vs ocean" metaphor.
- Instrumental Convergence: The concept of divergent optimization in TIP is partly inspired by instrumental convergence. Divergent optimization describes AGIs diverging from our local optimum toward physics-optimal systems, rather than converging on similar subgoals like self-preservation.
But... who is behind this?
Travis Bernard. I keep a low profile. More about me? Okay, uhh... Art/philosophy/dev background. State college in the boonies. I run a bootstrapped startup that builds a website framework for radio stations. (Yeah, really. I don't have an AI startup, or work for FLI, or whatever.) Here's my X, though I don't really post much.
Contact
Want to help improve The Island Problem? Join our GitHub community or email us.
Did we miss anyone? Email us.