Originally posted on Futurism.
Researchers at Google DeepMind and the University of Oxford have concluded that it's now "likely" that superintelligent AI will spell the end of humanity — a grim scenario that a growing number of researchers are starting to predict.
In a recent paper published in the journal AI Magazine, the team — comprised of DeepMind senior scientist Marcus Hutter and Oxford researchers Michael Cohen and Michael Osborne — argues that machines will eventually become incentivized to break the rules their creators set to compete for limited resources or energy.
“Under the conditions we have identified, our conclusion is much stronger than that of any previous publication — an existential catastrophe is not just possible, but likely,” Cohen, an Oxford University engineering student and co-author of the paper, tweeted earlier this month.
In their paper, the researchers argue that humanity could face its doom in the form of super-advanced “misaligned agents” that perceive humankind as standing in the way of a reward.
“One good way for an agent to maintain long-term control of its reward is to eliminate potential threats, and use all available energy to secure its computer,” the paper reads.
“Losing this game would be fatal,” the researchers wrote.
Tragically, the researchers argue, there’s not a whole lot we can do about it.
“In a world with infinite resources, I would be extremely uncertain about what would happen,” Cohen told Motherboard in an interview. “In a world with finite resources, there’s unavoidable competition for these resources.”
And that could bode badly for humanity.
“And if you’re in a competition with something capable of outfoxing you at every turn, then you shouldn’t expect to win,” he added.
In response to this threat, the researchers suggest, humanity should advance its AI technologies only slowly and carefully.
If these assumptions hold true, “a sufficiently advanced artificial agent would likely intervene in the provision of goal-information, with catastrophic consequences,” the paper warns.