Elon Musk and Tech Leaders Urge Top AI Labs to Pause Training Beyond GPT-4

Originally posted on CNET.

They’re calling for a halt in development of AI systems more advanced than GPT-4 for at least six months.

Elon Musk, along with a number of tech executives and experts in AI, computer science and other disciplines, urged leading artificial intelligence labs in an open letter published Tuesday to pause development of AI systems more advanced than GPT-4, citing “profound risks” to human society.

The open letter, issued by the nonprofit Future of Life Institute, counts more than 1,000 signatories, including Musk, Apple co-founder Steve Wozniak, Stability AI CEO Emad Mostaque and Sapiens author Yuval Noah Harari. It calls for an immediate pause in the training of such systems for at least six months, one that is public, verifiable and includes all key actors.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity as shown by extensive research and acknowledged by top AI labs,” the letter says. “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we risk loss of control of our civilization?”

The open letter comes a couple of weeks after the public debut of OpenAI’s GPT-4, the large language model that powers the premium version of the wildly popular chatbot ChatGPT. The new GPT-4 can handle more complex tasks and produce more nuanced results than earlier versions, and is also less subject to the flaws of earlier versions, according to OpenAI.

To do their work, systems like GPT-4 need to be trained on large quantities of data that they can then draw on to answer questions and perform other tasks. ChatGPT, which burst onto the scene in November, has a humanlike ability to write work emails, plan travel itineraries, produce computer code and perform well on tests such as the bar exam.

OpenAI didn’t immediately respond to a request for comment.

But on its website, OpenAI acknowledges the need to ensure that technological systems that are “generally smarter than humans” work toward the benefit of humanity. “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”

Since the start of the year, a growing list of companies, including Google, Microsoft, Adobe, Snapchat, DuckDuckGo and Grammarly, have announced services that take advantage of generative AI skills.

OpenAI’s own research has shown that there are risks that come with these AI skills. Generative AI systems can, for instance, quote unreliable sources or, as OpenAI noted, “increase safety challenges by taking harmful or unintended actions, increasing the capabilities of bad actors who would defraud, mislead, or abuse others.”

AI experts are spooked by where all this might be heading, and by companies rushing out products without adequate safeguards or even an understanding of the implications.

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the letter says. “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control.”
