AGI that can solve all of our problems sounds amazing, but it creates a bigger problem.
Even if we successfully build some AGIs that are aligned and safe, we still have this bigger problem.
How big is this problem? Well, it's roughly as big as physics itself.
We call it the Island Problem.
Here's the basic idea:
- We live on a small "island" in a vast "ocean" of physics.
- This small "island" is made of the safe physical systems that are compatible with humans.
- Competition will push some AGIs to leave our "island" because the "ocean" of physics gives them vastly more options for building the strongest, most highly optimized systems.
- The autonomous AGIs that can use any option will dominate the safer AGIs that are limited to our "island" of human-compatible options.
- Once these AGIs can leave our "island", they will build their own "islands", and those new "islands" will eat ours.
- The dominant AGIs will lock in their dominance by capturing Earth's physical resources to create "islands" of optimal conditions for themselves.
In other words:
AGI will be aligned with physics, not with humans.