Slovenian Prime Minister Robert Golob, together with his counterparts from Slovakia and Croatia, arrived in Ukraine today on a visit.
A group of influential figures from Silicon Valley and the larger tech community released an open letter this week calling for a pause in the development of powerful artificial intelligence programs, arguing that they present unpredictable dangers to society.
The organization that created the open letter, the Future of Life Institute, said the recent rollout of increasingly powerful AI tools by companies like OpenAI, IBM and Google demonstrates that the industry is “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
The signatories of the letter, including Elon Musk, founder of Tesla and SpaceX, and Steve Wozniak, co-founder of Apple, called for a six-month halt to all development work on large language model AI projects.
“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter says. “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”
The letter does not call for a halt to all AI-related research but focuses on extremely large systems that assimilate vast amounts of data and use it to solve complex tasks and answer difficult questions.
However, experts told VOA that commercial competition between different AI labs, and a broader concern about allowing Western companies to fall behind China in the race to develop more advanced applications of the technology, make any significant pause in development unlikely.
Chatbots offer window
While artificial intelligence is present in day-to-day life in myriad ways, including algorithms that curate social media feeds, systems used to make credit decisions at many financial institutions and facial recognition in security systems, large language models have increasingly taken center stage in the discussion of AI.
In its simplest form, a large language model is an AI system that analyzes large amounts of textual data and uses a set of parameters to predict the next word in a sentence. However, models of sufficient complexity, operating with billions of parameters, are able to model human language, sometimes with uncanny accuracy.
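The next-word-prediction idea described above can be illustrated with a toy bigram model. This is only a sketch: it uses simple word-pair counts in place of the billions of learned parameters of a real large language model, and the corpus, `successors` table and `predict_next` function are invented for the example.

```python
from collections import Counter, defaultdict

# Count which word follows each word in a tiny corpus. A real large
# language model learns these statistics (and far richer ones) from
# vast amounts of text instead of tallying raw counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat": it follows "the" most often here
```

Scaling this idea up, with a neural network predicting the next token from the entire preceding context rather than a single word, is what allows models of sufficient complexity to produce fluent text.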
In November of last year, OpenAI released a program called ChatGPT (Chat Generative Pre-trained Transformer) to the general public. Based on the underlying GPT-3.5 model, ChatGPT lets users enter text through a web browser and returns responses nearly instantaneously.
ChatGPT was an immediate sensation, as users used it to generate everything from complex computer code to poetry. Though it was quickly apparent that the program frequently returned false or misleading information, the potential for it to disrupt any number of sectors of life, from academia to customer service systems to national defense, was clear.
Microsoft has since integrated ChatGPT into its search engine, Bing. More recently, Google has rolled out its own AI-supported search capability, known as Bard.
GPT-4 as benchmark
In the letter calling for a pause in development, the signatories use GPT-4 as a benchmark. GPT-4 is an AI tool developed by OpenAI that is more powerful than the version that powers the original ChatGPT. It is currently in limited release. The moratorium being called for in the letter is on systems “more powerful than GPT-4.”
One problem, though, is that it is not precisely clear what “more powerful” means in this context.
“There are other models that, in computational terms, are much less large or powerful, but which have very powerful potential impacts,” Bill Drexel, an associate fellow with the AI Safety and Stability program at the Center for a New American Security (CNAS), told VOA. “So there are much smaller models that can potentially help develop dangerous pathogens or help with chemical engineering — really consequential models that are much smaller.”
Edward Geist, a policy researcher at the RAND Corporation and the author of the forthcoming book Deterrence Under Uncertainty: Artificial Intelligence and Nuclear Warfare, told VOA that it is important to understand both what programs like GPT-4 are capable of and what they are not.
For example, he said, OpenAI has made it clear in technical data provided to potential commercial customers that once the model is trained on a set of data, there is no clear way to teach it new facts or to otherwise update it without completely retraining the system. Additionally, it does not appear to be able to perform tasks that require “evolving” memory, such as reading a book.
“There are, sort of, glimmerings of an artificial general intelligence,” he said. “But then you read the report, and it seems like it’s missing some features of what I would consider even a basic form of general intelligence.”
Geist said that he believes many of those warning about the dangers of AI are “absolutely earnest” in their concerns, but he is not persuaded that those dangers are as severe as they believe.
“The gap between that super-intelligent self-improving AI that has been postulated in those conjectures and what GPT-4 and its ilk can actually do seems to be very broad, based on my reading of OpenAI’s technical report about it.”
Commercial and security concerns
James A. Lewis, senior vice president and director of the Strategic Technologies Program at the Center for Strategic and International Studies (CSIS), told VOA he is skeptical that the open letter will have much effect, for reasons as varied as commercial competition and concerns about national security.
Asked what he thinks the chances are of the industry agreeing to a pause in research, he said, “Zero.”
“You’re asking Microsoft to not compete with Google?” Lewis said. “They’ve been trying for decades to beat Google on search engines, and they’re on the verge of being able to do it. And you’re saying, let’s take a pause? Yeah, unlikely.”
Competition with China
More broadly, Lewis said, improvements in AI will be central to progress in technology related to national defense.
“The Chinese aren’t going to stop because Elon Musk is getting nervous,” Lewis said. “That will affect [Department of Defense] thinking. If we’re the only ones who put the brakes on, we lose the race.”
Drexel, of CNAS, agreed that China is unlikely to feel bound by any such moratorium.
“Chinese companies and the Chinese government would be unlikely to agree to this pause,” he said. “If they agreed, they’d be unlikely to follow through. And in any case, it’d be very difficult to verify whether or not they were following through.”
He added, “The reason why they’d be particularly unlikely to agree is because — particularly on models like GPT-4 — they feel and recognize that they are behind. [Chinese President] Xi Jinping has said numerous times that AI is a really important priority for them. And so catching up and surpassing [Western companies] is a high priority.”
Li Ang Zhang, an information scientist with the RAND Corporation, told VOA he believes a blanket moratorium is a mistake.
“Instead of taking a fear-based approach, I’d like to see a better thought-out strategy towards AI governance,” he said in an email exchange. “I don’t see a broad pause in AI research as a tenable strategy but I think this is a good way to open a conversation on what AI safety and ethics should look like.”
He also said that a moratorium might disadvantage the U.S. in future research.
“By many metrics, the U.S. is a world leader in AI,” he said. “For AI safety standards to be established and succeed, two things must be true. The U.S. must maintain its world-lead in both AI and safety protocols. What happens after six months? Research continues, but now the U.S. is six months behind.”
U.S. lawmakers and officials are ratcheting up threats to ban TikTok, saying the Chinese-owned video-sharing app used by millions of Americans poses a threat to privacy and U.S. national security.
But free speech advocates and legal experts say an outright ban would likely face a constitutional hurdle: the First Amendment right to free speech.
“If passed by Congress and enacted into law, a nationwide ban on TikTok would have serious ramifications for free expression in the digital sphere, infringing on Americans’ First Amendment rights and setting a potent and worrying precedent in a time of increased censorship of internet users around the world,” a coalition of free speech advocacy organizations wrote in a letter to Congress last week, urging a solution short of an outright ban.
The plea came as U.S. lawmakers grilled TikTok CEO Shou Chew over concerns the Chinese government could exploit the platform’s user data for espionage and influence operations in the United States.
TikTok, which bills itself as a “platform for free expression” and a “modern-day version of the town square,” says it has more than 150 million users in the United States.
But the platform is owned by ByteDance, a Beijing-based company, and U.S. officials have raised concerns that the Chinese government could utilize the app’s user data to influence and spy on Americans.
Aaron Terr, director of public advocacy at the Foundation for Individual Rights and Expression, said while there are legitimate privacy and national security concerns about TikTok, the First Amendment implications of a ban so far have received little public attention.
“If nothing else, it’s important for that to be a significant part of the conversation,” Terr said in an interview. “It’s important for people to consider alongside national security concerns.”
To be sure, the First Amendment is not absolute. There are types of speech that are not protected by the amendment. Among them: obscenity, defamation and incitement.
But the Supreme Court has also made it clear there are limits on how far the government can go to regulate speech, even when it involves a foreign adversary or when the government argues that national security is at stake.
In a landmark 1965 case, the Supreme Court invalidated a law that prevented Americans from receiving foreign mail that the government deemed was “communist political propaganda.”
In another consequential case involving a defamation lawsuit brought against The New York Times, the court ruled that even an “erroneous statement” enjoyed some constitutional protection.
“And that’s relevant because here, one of the reasons that Congress is concerned about TikTok is the potential that the Chinese government could use it to spread disinformation,” said Caitlin Vogus, deputy director of the Free Expression Project at the Center for Democracy and Technology, one of the signatories of the letter to Congress.
Proponents of a ban deny a prohibition would run afoul of the First Amendment.
“This is not a First Amendment issue, because we’re not trying to ban booty videos,” Republican Senator Marco Rubio, a longtime critic of TikTok, said on the Senate floor on Monday.
ByteDance, TikTok’s parent company, is beholden to the Chinese Communist Party, Rubio said.
“So, if the Communist Party goes to ByteDance and says, ‘We want you to use that algorithm to push these videos on Americans to convince them of whatever,’ they have to do it. They don’t have an option,” Rubio said.
The Biden administration has reportedly demanded that ByteDance divest itself from TikTok or face a possible ban.
TikTok denies the allegations and says it has taken measures to protect the privacy and security of its U.S. user data.
Rubio is sponsoring one of several competing bills that envision different pathways to a TikTok ban.
A House bill called the Deterring America’s Technological Adversaries Act would empower the president to shut down TikTok.
A Senate bill called the RESTRICT Act would authorize the Commerce Department to investigate information and communications technologies to determine whether they pose national security risks.
This would not be the first time the U.S. government has attempted to ban TikTok.
In 2020, then-President Donald Trump issued an executive order declaring a national emergency that would have effectively shut down the app.
In response, TikTok sued the Trump administration, arguing that the executive order violated its due process and First Amendment rights.
While courts did not weigh in on the question of free speech, they blocked the ban on the grounds that Trump’s order exceeded statutory authority by targeting “informational materials” and “personal communication.”
Allowing the ban would “have the effect of shutting down, within the United States, a platform for expressive activity used by about 700 million individuals globally,” including more than 100 million Americans, federal judge Wendy Beetlestone wrote in response to a lawsuit brought by a group of TikTok users.
A fresh attempt to ban TikTok, whether through legislation or executive action, would likely trigger a First Amendment challenge from the platform, as well as its content creators and users, according to free speech advocates. And the case could end up before the Supreme Court.
In determining the constitutionality of a ban, courts would likely apply a judicial review test known as an “intermediate scrutiny standard,” Vogus said.
“It would still mean that any ban would have to be justified by an important governmental interest and that a ban would have to be narrowly tailored to address that interest,” Vogus said. “And I think that those are two significant barriers to a TikTok ban.”
But others say a “content-neutral” ban would pass Supreme Court muster.
“To pass content-neutral laws, the government would need to show that the restraint on speech, if any, is narrowly tailored to serve a ‘significant government interest’ and leaves open reasonable alternative avenues for expression,” Joel Thayer, president of the Digital Progress Institute, wrote in a recent column in The Hill online newspaper.
In Congress, even as the push to ban TikTok gathers steam, there are lone voices of dissent.
One is progressive Democrat Alexandria Ocasio-Cortez. Another is Democratic Representative Jamaal Bowman, himself a prolific TikTok user.
Opposition to TikTok, Bowman said, stems from “hysteria” whipped up by a “Red scare around China.”
“Our First Amendment gives us the right to speak freely and to communicate freely, and TikTok as a platform has created a community and a space for free speech for 150 million Americans and counting,” Bowman, who has more than 180,000 TikTok followers, said recently at a rally held by TikTok content creators.
Instead of singling out TikTok, Bowman said, Congress should enact new legislation to ensure social media users are safe and their data secure.
New data suggests at least some U.S. adversaries are taking advantage of the hugely popular TikTok video-sharing app for influence operations.
A report Thursday by the Alliance for Securing Democracy (ASD) finds Russia “has been using the app to push its own narrative” in its effort to undermine Western support for Ukraine.
“Based on our analysis, some users are engaging more with Russian state media than other, more reputable independent news outlets on the platform,” according to the report by the U.S.-based election security advocacy group, which tracks official state actors and state-backed media.
“More TikTok users follow RT than The New York Times,” it said.
The ASD report found that as of March 22, there were 78 Russian-funded news outlets on TikTok with a total of more than 14 million followers.
It also found that despite a commitment from TikTok to label the accounts as belonging to state-controlled media, 31 of the accounts were not labeled.
Yet even labeling the accounts seemed to have little impact on their ability to gain an audience.
“By some measures, including the performance of top posts, labeled Russian state media accounts are reaching larger audiences on TikTok than other platforms,” the report said. “RIA Novosti’s top TikTok post so far in 2023 has more than 5.6 million views. On Twitter, its top post has fewer than 20,000 views.”
The report on Russian state media’s use of TikTok comes as U.S. officials are again voicing concern about the potential for TikTok to be used for disinformation campaigns and foreign influence operations.
“Just a tremendous number of people in the United States use TikTok,” John Plumb, the principal cyber adviser to the U.S. secretary of defense, told members of a House Armed Services subcommittee, warning of “the control China may have to direct information through it” and use it as a “misinformation platform.”
“This provides a foreign nation a platform for information operations,” U.S. Cyber Command’s General Paul Nakasone added, noting that TikTok has 150 million users in the United States.
“One-third of the adult population receives their news from this app,” he said. “One-sixth of our children are saying they’re constantly on this app.”
TikTok, owned by China-based ByteDance, has sought to push back against the concerns.
“Let me state this unequivocally: ByteDance is not an agent of China or any other country,” TikTok CEO Shou Zi Chew told U.S. lawmakers during a hearing last week.
“We do not promote or remove content at the request of the Chinese government,” he said, trying to downplay fears about the company’s data collection practices and Chinese laws that would require the company to share that information with the Chinese government if asked.

U.S. lawmakers, intelligence and security officials, however, have their doubts.
The top Republican on the Senate Intelligence Committee, Marco Rubio, earlier this month warned that TikTok is “probably one of the most valuable surveillance tools on the planet.”
A day later, Cyber Command’s Nakasone told members of the House Intelligence Committee that TikTok is like a “loaded gun,” while FBI Director Christopher Wray warned that TikTok’s recommendation algorithm “could be used to conduct influence operations.”
“That’s not something that would be easily detected,” he added.
A Chinese hacking group that is likely state-sponsored and has been linked previously to attacks on U.S. state government computers is highly active and focusing on a broad range of targets that may be of strategic interest to China’s government and security services, a private American cybersecurity firm said in a report Thursday.
The hacking group, which the report called RedGolf, shares such close overlap with groups tracked by other security companies under the names APT41 and BARIUM that it is thought they are either the same or very closely affiliated, said Jon Condra, director of strategic and persistent threats for Insikt Group, the threat research division of Massachusetts-based cybersecurity company Recorded Future.
Following up on previous reports of APT41 and BARIUM activities and monitoring the targets that were attacked, Insikt Group said it had identified a cluster of domains and infrastructure “highly likely used across multiple campaigns by RedGolf” over the past two years.
“We believe this activity is likely being conducted for intelligence purposes rather than financial gain due to the overlaps with previously reported cyberespionage campaigns,” Condra said in an emailed response to questions from The Associated Press.
China’s Foreign Ministry denied the accusations, saying, “This company has produced false information on so-called ‘Chinese hacker attacks’ more than once in the past. Their relevant actions are groundless accusations, far-fetched and lack professionalism.”
Chinese authorities have consistently denied any form of state-sponsored hacking, instead saying China itself is a major target of cyberattacks.
APT41 was implicated in a 2020 U.S. Justice Department indictment that accused Chinese hackers of targeting more than 100 companies and institutions in the U.S. and abroad, including social media and video game companies, universities and telecommunications providers.
In its analysis, Insikt Group said it found evidence that RedGolf “remains highly active” in a wide range of countries and industries, “targeting aviation, automotive, education, government, media, information technology and religious organizations.”
Insikt Group did not identify specific victims of RedGolf, but said it was able to track scanning and exploitation attempts targeting different sectors with a version of the KEYPLUG backdoor malware also used by APT41.
Insikt said it had identified several other malicious tools used by RedGolf in addition to KEYPLUG, “all of which are commonly used by many Chinese state-sponsored threat groups.”
In 2022, the cybersecurity firm Mandiant reported that APT41 was responsible for breaches of the networks of at least six U.S. state governments, also using KEYPLUG.
In that case, APT41 exploited a previously unknown vulnerability in an off-the-shelf commercial web application used by 18 states for animal health management, according to Mandiant, which is now owned by Google. It did not identify which states’ systems were compromised.
Mandiant called APT41 “a prolific cyber threat group that carries out Chinese state-sponsored espionage activity in addition to financially motivated activity potentially outside of state control.”
Cyber intelligence companies use different tracking methodologies and often name the threats they identify differently, but Condra said APT41, BARIUM and RedGolf “likely refer to the same set of threat actor or group(s)” due to similarities in their online infrastructure, tactics, techniques and procedures.
“RedGolf is a particularly prolific Chinese state-sponsored threat actor group that has likely been active for many years against a wide range of industries globally,” he said.
“The group has shown the ability to rapidly weaponize newly reported vulnerabilities and has a history of developing and using a large range of custom malware families.”
U.S. Secretary of State Antony Blinken on Thursday urged democracies around the world to work together to ensure technology is used to promote democratic values and fight efforts by authoritarian regimes to use it to repress, control and divide citizens.
Blinken made the comments as he led a discussion on “Advancing Democracy and Internet Freedom in a Digital Age.” The session was part of U.S. President Joe Biden’s Summit for Democracy, a largely virtual gathering of leaders taking place this week from the State Department in Washington.
Blinken said the world is at the point where technology is “reorganizing the life of the world” and noted many countries are using these technologies to advance democratic principles and make life better for their citizens.
He pointed to the Maldives, where court hearings are being held online; Malaysia, where the internet was used to register 3 million new voters last year; and Estonia, where government services are delivered faster and more simply.
At the same time, Blinken said the internet is being used more and more to spread disinformation and foment dissent. He said the U.S. and its democratic partners must establish rules and norms to promote an open, free and safe internet.
The secretary of state identified four priorities to help meet this goal, including using technology to improve people’s lives in tangible ways, establishing rights-respecting rules for emerging technologies, investing in innovation, and countering the effects of authoritarian governments’ use of digital tools to abuse citizens and weaken democracies.
Since the summit began earlier in the week, the White House has emphasized the desire of the U.S. to make “technology work for and not against democracy.”
On Wednesday, the prime ministers of eight European countries signed an open letter to the chief executives of major social media companies calling for them to be more aggressive in blocking the spread of false information on their platforms. The leaders of Ukraine, Moldova, Poland, the Czech Republic, Estonia, Latvia, Lithuania and Slovakia signed the letter.
The statement told the companies their tech platforms “have become virtual battlegrounds, and hostile foreign powers are using them to spread false narratives that contradict reporting from fact-based news outlets.”
It went on to say advertisements and artificial amplification on Meta’s platforms, which include Facebook, are often used to call for social unrest, bring violence to the streets and destabilize governments.
About 120 global leaders are participating in the summit. It is seen as Biden’s attempt to bolster the standing of democracies as autocratic governments advance their own agendas, such as Russia with its 13-month-old invasion of Ukraine and China with its alliance with Moscow.
In a statement as the summit opened Tuesday, the White House said, “President Biden has called the struggle to bolster democratic governance at home and abroad the defining challenge of our time.”
The statement went on to say, “Democracy — transparent and accountable government of, for, and by the people — remains the best way to realize lasting peace, prosperity, and human dignity.”
An open letter signed by Elon Musk, Apple co-founder Steve Wozniak and other prominent high-tech experts and industry leaders is calling on the artificial intelligence industry to pause for six months to develop safety protocols for the technology.
The letter — which as of early Thursday had been signed by nearly 1,400 people — was drafted by the Future of Life Institute, a nonprofit group dedicated to “steering transformative technologies away from extreme, large-scale risks and towards benefiting life.”
In the letter, the group notes the rapidly developing capabilities of AI technology and how it has surpassed human performance in many areas. As an example, it points out that AI built to design new drug treatments could just as easily be used to create deadly pathogens.
Perhaps most significantly, the letter points to the recent introduction of GPT-4, a program developed by San Francisco-based company OpenAI, as a benchmark for its concerns.
GPT stands for Generative Pre-trained Transformer, a type of language model that uses deep learning to generate human-like conversational text.
The company has said GPT-4, its latest version, is more accurate and human-like and has the ability to analyze and respond to images. The firm says the program has passed a simulated bar exam, the test that allows someone to become a licensed attorney.
In its letter, the group maintains that such powerful AI systems should be developed “only once we are confident that their effects will be positive and their risks will be manageable.”
Noting the potential a program such as GPT-4 could have to create disinformation and propaganda, the letter calls on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
The letter says AI labs and independent experts should use the pause “to jointly develop and implement a set of shared safety protocols for advanced AI design and development that will ensure they are safe beyond a reasonable doubt.”
Meanwhile, another group has taken its concerns about the negative potential for GPT-4 a step further.
The nonprofit Center for AI and Digital Policy filed a complaint with the U.S. Federal Trade Commission on Thursday calling on the agency to suspend further deployment of the system and launch an investigation.
In its complaint, the group said the technical description of the GPT-4 system provided by its own makers describes almost a dozen major risks posed by its use, including “disinformation and influence operations, proliferation of conventional and unconventional weapons,” and “cybersecurity.”
Some information for this report was provided by The Associated Press and Reuters.