Germany's Defense Ministry says it cannot yet confirm the authenticity of the recording, but it said counterintelligence is investigating the matter.
…
WASHINGTON — Countless videos of foreign-looking women speaking Mandarin and professing love for China, all made with artificial intelligence, started popping up on Chinese social media platforms around the Lunar New Year earlier this month.
The avatars in the videos are created from online images that are stolen, reproduced and repurposed, so faithfully that the real women behind them recognize themselves in the videos.
Olga Loiek is one of those women. She’s a 20-year-old Ukrainian who studies cognitive science at the University of Pennsylvania. A couple of months ago, Loiek started a YouTube channel where she talks about mental health and shares her philosophies about life.
However, shortly after that, she started receiving messages from followers telling her that they had seen her on Chinese social media. There, she’s not Olga Loiek but a Russian woman who speaks Mandarin, loves China and wants to marry a Chinese man. Her name is Natasha, or Anna, or Grace, depending on the social media platform you find her on in China.
“I started translating the videos with Google Translate, and I realized that most of these accounts are talking about things like China, Russia, how good the relationship between China and Russia is,” she told VOA. “This feels very violating.”
In some videos, the avatars talk about how much they value Russia and China’s close ties. In other videos, they praise Chinese history and culture or talk about how much Russian women want to marry Chinese.
“If you marry Russian women, we will wash clothes, cook, and wash dishes for you every day,” an avatar said. “We will also give you foreign babies, as many as you want.”
Several dozen videos of Loiek’s avatar speaking Mandarin have been found on the video sites Douyin and Bilibili. Most of the accounts posting them ask viewers to visit their online stores to buy what they say are authentic Russian goods.
Douyin, China’s version of TikTok, has labeled some of these videos as potentially AI-generated. But comments show that many believed they were looking at a real woman. One netizen wrote, “Russian beauty, Chinese people welcome you.”
Loiek said she would never say things like that, obviously, given that she’s from Ukraine, which has been at war with Russia since 2022.
She said, “This is probably used to make people, maybe people in China, feel that foreigners feel that their country is superior.”
On Bilibili, China’s biggest video site, some AI videos using Loiek’s face are marked with the logo of HeyGen, indicating that the video was generated on the company’s website.
In one tutorial on Bilibili, the demonstrator even shows how to make a short video on HeyGen with a clip of Loiek talking.
HeyGen is an AI company headquartered in Los Angeles that was launched in China in 2020. It specializes in realistic digital avatars, voice generation and video translating.
The technology developed by HeyGen was used in AI videos of Donald Trump and Taylor Swift speaking perfect Mandarin that went viral on Chinese social media in October 2023. According to Forbes, the company is now valued at $75 million.
HeyGen’s moderation policy states that users cannot generate avatars that “represent real individuals, including celebrities or public figures, without explicit consent.” The company’s official tutorial video on avatar making shows that users must submit a video of people giving consent to the use of their likeness. It’s unclear how some in China could circumvent the requirement to make videos of Loiek.
Loiek said that since she and her YouTube subscribers have sent complaints to Chinese social media companies, about a dozen of the accounts imitating her have been taken down.
VOA reached out to HeyGen and Douyin’s parent company, ByteDance, for comments but has not received a response.
The Chinese government rolled out provisions to regulate deepfakes and other “deep synthesis services” in early 2023. The law prohibits generating deepfakes without the consent of the people whose image or other information is used.
Loiek posted her story on YouTube, and it has been shared on Chinese social media. Netizens across platforms sympathized with her and called for tougher regulations on AI.
Chinese tech giants such as Baidu and Tencent are investing heavily in AI technology. Among the most hyped AI-powered services are digital humans.
Tencent and Xiaoice, a Chinese AI studio spun off from Microsoft, offer digital human services that can clone people and turn them into AI avatars for as little as $145.
AI avatars have also been found in online disinformation campaigns that spread pro-China and anti-U.S. narratives. In February 2023, research firm Graphika found a social media campaign promoting Beijing’s interests using realistic-looking computer-generated people in videos.
In September 2023, the U.S. State Department warned in a report, “Access to global data combined with the latest developments in artificial intelligence technology would enable the PRC [People’s Republic of China] to surgically target foreign audiences and thereby perhaps influence economic and security decisions in its favor.”
As for Loiek, she does not plan to quit YouTube or stop posting.
“We need some sort of regulatory frameworks, so we can understand and we can prevent these things from happening,” she said.
Adrianna Zhang contributed to this report.
…
WASHINGTON — U.S. security officials are bracing for an onslaught of fast-paced influence operations, from a wide range of adversaries, aimed at influencing the country’s upcoming presidential election.
FBI Director Christopher Wray issued the latest warning about attempts to meddle with American voters as they decide whom to support when they go to the polls come November, telling a meeting of security professionals Thursday that technologies such as artificial intelligence are already altering the threat landscape.
“This election cycle, the U.S. will face more adversaries moving at a faster pace and enabled by new technology,” Wray said.
“Advances in generative AI [artificial intelligence], for instance, are lowering the barrier to entry, making it easier for both more and less sophisticated foreign adversaries to engage in malign influence while making foreign influence efforts by players both old and new, more realistic and more difficult to detect,” he said.
The warning echoes concerns raised earlier in the week by a top lawmaker and by the White House, both singling out Russia.
“I worry that we are less prepared for foreign intervention in our elections in 2024 than we were in 2020,” said Mark Warner, the chairman of the Senate Intelligence Committee, during a cybersecurity conference on Tuesday.
On Sunday, White House national security adviser Jake Sullivan told NBC’s “Meet the Press” there is “plenty of reason to be concerned.”
“There is a history here in presidential elections by the Russian Federation, by its intelligence services,” Sullivan said.
U.S. intelligence agencies concluded Russia sought to interfere in both the 2016 and 2020 elections.
But Russia has not been alone.
A declassified intelligence assessment looking at the 2022 midterm elections concluded with high to moderate confidence that Russia was joined by China and Iran in seeking to sway the outcome.
“China tacitly approved efforts to try to influence a handful of midterm races involving members of both U.S. political parties,” the report said.
“Tehran relied primarily on its intelligence services and Iran-based online influencers to conduct its covert operations,” it said. “Iran’s influence activities reflected its intent to exploit perceived social divisions and undermine confidence in U.S. democratic institutions during this election cycle.”
The United States has also alleged other adversaries, such as Cuba, Venezuela and Lebanese Hezbollah, have sought to influence elections, as have allies, such as Turkey and Saudi Arabia.
The warnings from Wray and others are encountering pushback from some lawmakers and conservative commentators who view such statements as an attempt to resurrect what they call the “Russia hoax,” arguing that the narrative that Russia interfered in the 2016 U.S. presidential election to help former President Donald Trump win is without merit.
Warner, however, dismissed that view in response to a question from VOA on the sidelines of Tuesday’s security conference. “Anyone who doesn’t think the Russian intel services have and will continue to interfere in our elections … I wonder where they’re getting their information to start with,” he said.
Wray on Thursday suggested the list of countries and other foreign groups seeking to influence U.S. voters is set to expand. “AI is most useful for what I would call kind of mediocre bad guys and making them kind of like intermediate,” he said.
“The really sophisticated adversaries are using AI more just to increase the speed and scale of their efforts,” he said. “But we are coming towards a day very soon where what I would call the experts, the most sophisticated adversaries, are going to find ways to use AI to be even more elite.”
Some private cybersecurity firms also see the danger growing.
This past September, Microsoft warned that Beijing has developed a new artificial intelligence capability that can produce “eye-catching content” more likely to go viral compared to previous Chinese influence operations.
Others agree.
“Whether it’s robocalls, whether it’s fake videos — all those things really even back to 2022, weren’t as prevalent,” Trellix CEO Bryan Palma told VOA. “You weren’t going to get any high-quality type of deepfake video.
“I think you’re going to see more and more of that as we get closer to the election,” he said.
…