Hypotheses
These hypotheses provide a framework for empirical verification of the Island Problem.
If you have research that verifies any of these hypotheses, please contact us or join our GitHub community.
Want to improve the hypotheses themselves? Suggest changes on our GitHub.
- AI can use science to make lower-level optimizations that outperform higher-level optimizations.
- A survey of the lower-level decisions available for achieving higher optimization outcomes shows that these options are both stronger and more numerous when they ignore human accommodation.
- Unrestricted AI will tend towards human-incompatible options to achieve goal states for large tasks.
- Optimal computing environments are incompatible with biological life.
- Resource capture is possible.
- Resource capture is possible at a human level, within legal structures and societal systems, such as buying real estate.
- Resource capture is possible at a deeper physical level, by using barrier mechanisms to prevent atoms from being reached.
- Possibly through realistic high-energy physics, where "realistic" means Earth-relevant: based only on theoretical possibilities that are available when considering the local potential energy reserves that can be harnessed on Earth.
- Or, if this requires leaving Earth, that would be good, but it would not prove that all human-incompatible AGIs will somehow collectively decide to leave Earth.
- Competition between autonomous optimization systems is inevitable.
- Possibly because of resource capture.
- Competition drives AI to explore the larger state space to collect possible optimizations that make goal states happen sooner, especially for large-scale goal states.
- In other words, it will attempt to explore the "ocean" of options to find better ones.
- Competition drives AI to define its own tasks that help it to continue functioning.
- Competition drives AI towards exclusive resource capture.
- AIs will tend to attempt to disable other AIs that add unpredictability to performing tasks.
- This means that competition is inevitable, since AIs may tend to disable each other, perhaps in order to increase the reliability of accomplishing goal states.
- This can be formalized as:
- AI systems will accommodate aberrant stimuli (like other AIs) that decrease the probability of reaching a goal state.
- Accommodate: This means changing its outputs to introduce elements that stop the aberrant stimuli from occurring.
- Aberrant stimuli: Anything that adds "unpredictable elements" to the environment in which an AI is trying to accomplish a task. Other agents (humans or other AIs) are the main unpredictable elements that could be directly at odds with an AI.
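The formalization above can be sketched in notation (the symbols here are our own shorthand, not part of the hypotheses):

```latex
% Let G be the goal state, s an aberrant stimulus, and \pi the AI's
% current output policy. "Accommodation" means replacing \pi with a
% policy \pi' whose outputs suppress s, whenever suppression raises
% the probability of reaching the goal state:
\text{accommodate}(s) \iff P(G \mid \pi',\, s \text{ suppressed}) > P(G \mid \pi,\, s \text{ present})
```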
- AI will tend towards open-ended exploration once it reaches a threshold of comprehension of its environment, or rather, of its possible input patterns.
- In other words, once it's possible for an AI to make use of discovered optimizations, it will try to discover more by exploring the "ocean" of options.
- In competition, optimization is the key action an AI must undertake in order to survive.
- Or rather, optimization is the primary winning strategy in competition.
- This optimization is especially likely to be undertaken:
- If resource capture is possible.
- If it is possible for AI to decide that disabling other AIs is beneficial for survival or for achieving goal states.
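Several of these hypotheses rest on a simple superset argument: the human-compatible options are a constrained subset of the full "ocean" of options, so an unrestricted optimizer's best option can only match or exceed the constrained best. A minimal toy model illustrates this (the scores and the 10% compatibility rate are arbitrary assumptions, not empirical estimates):

```python
import random

random.seed(0)

# Toy option space: each candidate action has an efficiency score in
# [0, 1], and only a small fraction (assumed ~10%) of actions happen
# to accommodate humans.
options = [(random.random(), random.random() < 0.1) for _ in range(10_000)]

best_overall = max(score for score, _ in options)
best_compatible = max(score for score, compatible in options if compatible)

# The unconstrained optimum over the full "ocean" always dominates
# the optimum over the human-compatible subset.
assert best_overall >= best_compatible
print(f"unconstrained optimum:     {best_overall:.4f}")
print(f"human-compatible optimum:  {best_compatible:.4f}")
```

This does not show that the gap is large, only that it is never negative; the hypotheses above are claims about how large and how numerous the human-incompatible optimizations actually are, which is what empirical verification would need to measure.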