Artificial General Intelligence Poses a Threat to Human Civilization, According to Roman Yampolskiy
In the realm of technological advancement, one topic of great significance is the probability of doom, or p(doom), associated with Artificial General Intelligence (AGI). The term refers to an estimate of the likelihood that AGI or superintelligent AI will cause catastrophic or existential harm to humanity.
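One way commentators arrive at such estimates is to decompose p(doom) into a chain of conditional probabilities. The sketch below shows the mechanics only; the stages and the numbers plugged into them are hypothetical assumptions for illustration, not figures drawn from Yampolskiy or the discussion summarised here.

```python
# Illustrative only: a toy decomposition of a p(doom) estimate into a chain of
# conditional probabilities. Every stage and number here is a hypothetical
# assumption, not a figure from Yampolskiy or the discussion above.

p_agi = 0.8                           # P(AGI is developed within the horizon considered)
p_misaligned_given_agi = 0.4          # P(alignment/control fails | AGI is developed)
p_catastrophe_given_misaligned = 0.5  # P(irreversible catastrophe | misaligned AGI)

p_doom = p_agi * p_misaligned_given_agi * p_catastrophe_given_misaligned
print(f"toy p(doom) estimate: {p_doom:.2f}")  # 0.16 under these assumed numbers
```

Published estimates differ mainly in which stages they include and how the conditional probabilities are judged, which is part of why p(doom) figures vary so widely between commentators.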
The risks associated with AGI are multifaceted, encompassing several flavours of danger. One is existential risk: AGI or superintelligence could escape human control and cause irreversible global catastrophe or extinction, owing to its vastly superior capabilities and potentially uncontrollable behaviour.
Another flavour is takeoff-speed risk: whether AGI emerges suddenly in a rapid “takeoff”, leading to runaway self-improvement and dominance by a single AI, or more gradually, with competitive progress among many AI systems, determines which challenges are most pressing.
Control and alignment risk is another crucial concern: can we design AI systems whose goals and behaviours reliably align with human values and safety, preventing unintended harm?
The distribution of power in AGI development also presents risks: concentrating development in a single company or nation, versus dispersing it among many competing actors, affects global stability and the risk of misuse.
In an effort to mitigate these risks, potential solutions have been proposed. These include global regulation to increase oversight and coordination, AI safety research to prevent catastrophic failures, slowing or pausing development to buy time for safety measures, and encouraging competitive balance to reduce the risk from a single runaway superintelligence.
The path forward is uncertain, but it generally includes continuing to evaluate and update p(doom) estimates as capabilities evolve, while acknowledging current uncertainty and disagreement about likely scenarios and timelines. It also means emphasising safety and governance frameworks alongside innovation in AI capabilities, engaging a broad range of stakeholders to build consensus on managing AI risks, and prioritising research and interventions that meaningfully improve humanity’s survival odds during the critical window before or during the emergence of AGI.
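As a concrete illustration of what “updating p(doom) estimates as capabilities evolve” can mean in practice, the following sketch applies Bayes’ rule in odds form to revise an estimate after new evidence. The prior of 10% and the likelihood ratio of 2 are hypothetical assumptions chosen only to show the mechanics, not values endorsed by anyone cited here.

```python
# Illustrative only: revising a p(doom) estimate with Bayes' rule in odds form.
# The prior and likelihood ratio are hypothetical assumptions used purely to
# show the mechanics of updating an estimate as capabilities evolve.

def update_estimate(prior: float, likelihood_ratio: float) -> float:
    """Return the posterior probability after an odds-form Bayesian update.

    prior            -- current p(doom) estimate, strictly between 0 and 1
    likelihood_ratio -- P(observed evidence | doom) / P(observed evidence | no doom)
    """
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical example: start at 10% and observe a capability jump judged twice
# as likely in worlds headed for catastrophe as in worlds that are not.
estimate = 0.10
estimate = update_estimate(estimate, likelihood_ratio=2.0)
print(f"updated p(doom) estimate: {estimate:.2f}")  # ~0.18 under these assumptions
```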
In conclusion, the discussion around p(doom) is a nuanced dialogue that balances the acknowledged potential catastrophic risks of AGI, deep uncertainty about how and when those risks might manifest, and the urgent need to develop safety measures, regulation, and cooperative strategies to steer the development of AI toward beneficial outcomes rather than doom.
Artificial General Intelligence (AGI) is a clear example of a technology driving up the probability of existential risk: its vastly superior capabilities and potentially uncontrollable behaviour could lead to global catastrophe or extinction. Its development therefore needs to be carefully managed so that its goals and behaviours remain aligned with human values, preventing unintended harm.