When Elon Musk co-founded OpenAI, its goal was to determine how AI technologies could best serve humanity. According to the company’s new charter, its mission going forward will be developing “highly autonomous systems that outperform humans at most economically valuable work.” In short, it wants to build machines smarter than people.
It’s called artificial general intelligence (AGI) and, depending on who you ask, it’s either the Holy Grail or Pandora’s Box when it comes to machine learning.
Some experts, like Google’s Ray Kurzweil, believe we’re mere decades away from the singularity (the moment machines become more intelligent than humans). Others say it’ll never happen. Most people involved in the AGI conversation are either academics, still sorting out the semantics, or more worried about funding than about existential threats to our species.
Thankfully, OpenAI is a non-profit. With over $1 billion in funding, and support from some of the smartest minds (and biggest companies) in the field of AI, it’s in a unique position to focus on the technology without worrying about pleasing shareholders or losing grants. To that end, the company is dedicated to developing AGI carefully and avoiding an AI arms race that could cause researchers to lose focus.
According to the charter:
We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.
In the document, the company even goes so far as to state that it would step back and discontinue its research if another organization appeared to be on the verge of achieving AGI.
The conversation around sentient AI is a difficult one. A lot of people view the idea of AGI as far-fetched science fiction, and even believers feel it’s too early to worry much about technology that may be decades away.
But at some point, we’re going to need strong guidelines to help developers avoid accidental pitfalls (like destroying the human race). And once the robots rise up, it’ll be too late to come up with common-sense policies.
OpenAI co-founder Ilya Sutskever told TNW:
When OpenAI started, there were well-established norms for building a strong technical lab. But there wasn’t really precedent for how to build an organization aiming to make the long term impact of these technologies go well. Over the past two years, we’ve built capabilities, safety, and policy teams from scratch, and each has contributed insight on the norms we hope to see in ourselves and others as AI technologies become powerful. We view these technologies as affecting everyone, and so we worked with other institutions in making sure these principles felt right not just to us but to others in the community.
The company hasn’t set a timeline for AGI, but it has begun what it calls “the next phase of OpenAI,” which will include increased investment in personnel and equipment with the intent of making “consequential breakthroughs in artificial intelligence.”