A group of influential figures from Silicon Valley and the larger tech community released an open letter this week calling for a pause in the development of powerful artificial intelligence programs, arguing that they present unpredictable dangers to society.
The organization that created the open letter, the Future of Life Institute, said the recent rollout of increasingly powerful AI tools by companies like OpenAI, IBM and Google demonstrates that the industry is “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
The signatories of the letter, including Elon Musk, chief executive of Tesla and SpaceX, and Steve Wozniak, co-founder of Apple, called for a six-month halt to development work on the most powerful large language model AI systems.
“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter says. “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”
The letter does not call for a halt to all AI-related research but focuses on extremely large systems that assimilate vast amounts of data and use it to solve complex tasks and answer difficult questions.
However, experts told VOA that commercial competition between different AI labs, and a broader concern that Western companies could fall behind China in the race to develop more advanced applications of the technology, make any significant pause in development unlikely.
Chatbots offer window
While artificial intelligence is present in day-to-day life in myriad ways, from the algorithms that curate social media feeds to the systems many financial institutions use to make credit decisions and the facial recognition increasingly built into security systems, large language models have taken center stage in the discussion of AI.
In its simplest form, a large language model is an AI system that analyzes large amounts of textual data and uses a set of parameters to predict the next word in a sentence. Models of sufficient complexity, operating with billions of parameters, can mimic human language, sometimes with uncanny accuracy.
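As a rough, hypothetical illustration of that next-word prediction idea (GPT-class systems learn billions of parameters rather than simple word counts), a toy sketch in Python might look like this:

from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then suggest the most frequent successor. Real large language
# models learn billions of parameters; this only sketches the idea.
corpus = "the cat sat on the mat the cat slept on the sofa".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat", the most frequent follower here
print(predict_next("sat"))  # prints "on"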
In November 2022, OpenAI released a program called ChatGPT (Chat Generative Pre-trained Transformer) to the general public. Built on the underlying GPT-3.5 model, ChatGPT lets users type text into a web browser and returns responses created nearly instantaneously by the program.
ChatGPT was an immediate sensation, as people used it to generate everything from complex computer code to poetry. Though it quickly became apparent that the program frequently returned false or misleading information, its potential to disrupt any number of sectors of life, from academia to customer service systems to national defense, was clear.
Microsoft has since integrated ChatGPT into its search engine, Bing. More recently, Google has rolled out its own AI-supported search capability, known as Bard.
GPT-4 as benchmark
In the letter calling for a pause in development, the signatories use GPT-4 as a benchmark. GPT-4 is an AI tool developed by OpenAI that is more powerful than the version that powers the original ChatGPT. It is currently in limited release. The moratorium called for in the letter covers systems “more powerful than GPT-4.”
One problem, though, is that it is not precisely clear what “more powerful” means in this context.
“There are other models that, in computational terms, are much less large or powerful, but which have very powerful potential impacts,” Bill Drexel, an associate fellow with the AI Safety and Stability program at the Center for a New American Security (CNAS), told VOA. “So there are much smaller models that can potentially help develop dangerous pathogens or help with chemical engineering — really consequential models that are much smaller.”
Limited capabilities
Edward Geist, a policy researcher at the RAND Corporation and the author of the forthcoming book Deterrence Under Uncertainty: Artificial Intelligence and Nuclear Warfare, told VOA that it is important to understand not only what programs like GPT-4 are capable of, but also what they are not.
For example, he said, OpenAI has made it clear in technical data provided to potential commercial customers that once the model is trained on a set of data, there is no clear way to teach it new facts or to otherwise update it without completely retraining the system. Additionally, it does not appear to be able to perform tasks that require “evolving” memory, such as reading a book.
“There are, sort of, glimmerings of an artificial general intelligence,” he said. “But then you read the report, and it seems like it’s missing some features of what I would consider even a basic form of general intelligence.”
Geist said that he believes many of those warning about the dangers of AI are “absolutely earnest” in their concerns, but he is not persuaded that those dangers are as severe as they believe.
“The gap between that super-intelligent self-improving AI that has been postulated in those conjectures, and what GPT-4 and its ilk can actually do seems to be very broad, based on my reading of OpenAI’s technical report about it.”
Commercial and security concerns
James A. Lewis, senior vice president and director of the Strategic Technologies Program at the Center for Strategic and International Studies (CSIS), told VOA he is skeptical that the open letter will have much effect, for reasons as varied as commercial competition and concerns about national security.
Asked what he thinks the chances are of the industry agreeing to a pause in research, he said, “Zero.”
“You’re asking Microsoft to not compete with Google?” Lewis said. “They’ve been trying for decades to beat Google on search engines, and they’re on the verge of being able to do it. And you’re saying, let’s take a pause? Yeah, unlikely.”
Competition with China
More broadly, Lewis said, improvements in AI will be central to progress in technology related to national defense.
“The Chinese aren’t going to stop because Elon Musk is getting nervous,” Lewis said. “That will affect [Department of Defense] thinking. If we’re the only ones who put the brakes on, we lose the race.”
Drexel, of CNAS, agreed that China is unlikely to feel bound by any such moratorium.
“Chinese companies and the Chinese government would be unlikely to agree to this pause,” he said. “If they agreed, they’d be unlikely to follow through. And in any case, it’d be very difficult to verify whether or not they were following through.”
He added, “The reason why they’d be particularly unlikely to agree is because — particularly on models like GPT-4 — they feel and recognize that they are behind. [Chinese President] Xi Jinping has said numerous times that AI is a really important priority for them. And so catching up and surpassing [Western companies] is a high priority.”
Li Ang Zhang, an information scientist with the RAND Corporation, told VOA he believes a blanket moratorium is a mistake.
“Instead of taking a fear-based approach, I’d like to see a better thought-out strategy towards AI governance,” he said in an email exchange. “I don’t see a broad pause in AI research as a tenable strategy but I think this is a good way to open a conversation on what AI safety and ethics should look like.”
He also said that a moratorium might disadvantage the U.S. in future research.
“By many metrics, the U.S. is a world leader in AI,” he said. “For AI safety standards to be established and succeed, two things must be true. The U.S. must maintain its world-lead in both AI and safety protocols. What happens after six months? Research continues, but now the U.S. is six months behind.”
…