Russian authorities have allowed one of the financial institutions to unblock nine million dollars of the roughly $30 million frozen in Russian banks.
…
NEW YORK — An oversight board is criticizing Facebook owner Meta’s policies regarding manipulated media as “incoherent” and insufficient to address the flood of online disinformation that already has begun to target elections across the globe this year.
The quasi-independent board on Monday said its review of an altered video of President Joe Biden that spread on Facebook exposed gaps in the policy. The board said Meta should expand the policy to focus not only on videos generated with artificial intelligence, but on media regardless of how it was created. That includes fake audio recordings, which already have convincingly impersonated political candidates in the U.S. and elsewhere.
It also said Meta should clarify the harm it is trying to prevent and should label images, videos and audio clips as manipulated instead of removing the posts altogether.
The board’s feedback reflects the intense scrutiny that is facing many tech companies for their handling of election falsehoods in a year when voters in more than 50 countries will go to the polls. As both generative artificial intelligence deepfakes and lower-quality “cheap fakes” on social media threaten to mislead voters, the platforms are trying to catch up and respond to false posts while protecting users’ rights to free speech.
“As it stands, the policy makes little sense,” oversight board co-chair Michael McConnell said of Meta’s policy in a statement on Monday. He said the company should close gaps in the policy while ensuring political speech is “unwaveringly protected.”
Meta said it is reviewing the oversight board’s guidance and will respond publicly to the recommendations within 60 days.
Spokesperson Corey Chambliss said while audio deepfakes aren’t mentioned in the company’s manipulated media policy, they are eligible to be fact-checked and will be labeled or down-ranked if fact-checkers rate them as false or altered. The company also takes action against any type of content if it violates Facebook’s Community Standards, he said.
Facebook, which turned 20 this week, remains the most popular social media site for Americans to get their news, according to the Pew Research Center. But other social media sites, among them Meta’s Instagram, WhatsApp and Threads, as well as X, YouTube and TikTok, also are potential hubs where deceptive media can spread and fool voters.
Meta created its oversight board in 2020 to serve as a referee for content on its platforms. Its current recommendations come after it reviewed an altered clip of Biden and his adult granddaughter that was misleading but didn’t violate the company’s policies because it didn’t misrepresent anything he said.
The original footage showed Biden placing an “I Voted” sticker high on his granddaughter’s chest, at her instruction, then kissing her on the cheek. The version that appeared on Facebook was altered to remove the important context, making it seem as if he touched her inappropriately.
The board’s ruling on Monday upheld Meta’s 2023 decision to leave the seven-second clip up on Facebook, since it didn’t violate the company’s existing manipulated media policy. Meta’s current policy says it will remove videos created using artificial intelligence tools that misrepresent someone’s speech.
“Since the video in this post was not altered using AI and it shows President Biden doing something he did not do (not something he didn’t say), it does not violate the existing policy,” the ruling read.
The board advised the company to update the policy and label similar videos as manipulated in the future. It argued that to protect users’ rights to freedom of expression, Meta should label content as manipulated rather than removing it from the platform if it doesn’t violate any other policies.
The board also noted that some forms of manipulated media are made for humor, parody or satire and should be protected. Instead of focusing on how a distorted image, video or audio clip was created, the company’s policy should focus on the harm manipulated posts can cause, such as disrupting the election process, the ruling said.
Meta said on its website that it welcomes the Oversight Board’s ruling on the Biden post and will update the post after reviewing the board’s recommendations.
Meta is required to heed the oversight board’s rulings on specific content decisions, though it’s under no obligation to follow the board’s broader recommendations. Still, the board has gotten the company to make some changes over the years, including making the messages it sends to users who violate its policies more specific about what they did wrong.
Jen Golbeck, a professor in the University of Maryland’s College of Information Studies, said Meta is big enough to be a leader in labeling manipulated content, but follow-through is just as important as changing policy.
“Will they implement those changes and then enforce them in the face of political pressure from the people who want to do bad things? That’s the real question,” she said. “If they do make those changes and don’t enforce them, it kind of further contributes to this destruction of trust that comes with misinformation.”
…
Hong Kong — Scammers tricked a multinational firm out of some $26 million by impersonating senior executives using deepfake technology, Hong Kong police said Sunday, in one of the first cases of its kind in the city.
Law enforcement agencies are scrambling to keep up with generative artificial intelligence, which experts say holds potential for disinformation and misuse — such as deepfake images showing people mouthing things they never said.
A company employee in the Chinese finance hub received “video conference calls from someone posing as senior officers of the company requesting to transfer money to designated bank accounts,” police told AFP.
Police received a report of the incident on January 29, at which point some HK$200 million ($26 million) had already been lost via 15 transfers.
“Investigations are still ongoing and no arrest has been made so far,” police said, without disclosing the company’s name.
The victim was working in the finance department, and the scammers pretended to be the firm’s U.K.-based chief financial officer, according to Hong Kong media reports.
Acting Senior Superintendent Baron Chan said the video conference call involved multiple participants, but all except the victim were impersonated.
“Scammers found publicly available video and audio of the impersonation targets via YouTube, then used deepfake technology to emulate their voices… to lure the victim to follow their instructions,” Chan told reporters.
The deepfake videos were pre-recorded and did not involve dialogue or interaction with the victim, he added.
…
BEIJING — A small but powerful Chinese rocket capable of carrying payloads at competitive costs delivered nine satellites into orbit Saturday, Chinese state media reported, in what is gearing up to be another busy year for Chinese commercial launches.
The Jielong-3, or Smart Dragon-3, blasted off from a floating barge off the coast of Yangjiang in southern Guangdong province, the second launch of the rocket in just two months.
Developed by China Rocket Company, a commercial offshoot of a state-owned launch vehicle manufacturer, Jielong-3 made its first flight in December 2022.
President Xi Jinping has called for the expansion of strategic industries including the commercial space sector, deemed key to building constellations of satellites for communications, remote sensing and navigation.
Also Saturday, Chinese automaker Geely Holding Group launched 11 satellites to boost its capacity to provide more accurate navigation for autonomous vehicles.
Last year saw 17 Chinese commercial launches with one failure, among a record 67 orbital launches by China. That was up from 10 Chinese commercial launches in 2022, including two failures.
In 2023, China conducted more launches than any other country except for the United States, which made 116 launch attempts, including just under 100 by Elon Musk’s SpaceX.
Critical to the construction of commercial satellite networks is China’s ability to open more launch windows, expand rocket types to accommodate different payload sizes, lower launch costs and increase the number of launch sites.
…
NEW YORK — A graphic video from a Pennsylvania man accused of beheading his father that circulated for hours on YouTube has put a spotlight yet again on gaps in social media companies’ ability to prevent horrific postings from spreading across the web.
Police said Wednesday that they charged Justin Mohn, 32, with first-degree murder and abusing a corpse after he beheaded his father, Michael, in their Bucks County home and publicized it in a 14-minute YouTube video that anyone, anywhere could see.
News of the incident — which drew comparisons to the beheading videos posted online by Islamic State militants at the height of the group’s prominence nearly a decade ago — came as the CEOs of Meta, TikTok and other social media companies were testifying in front of federal lawmakers frustrated by what they see as a lack of progress on child safety online. YouTube, which is owned by Google, did not attend the hearing despite its status as one of the most popular platforms among teens.
The disturbing video from Pennsylvania follows other horrific clips that have been broadcast on social media in recent years, including domestic mass shootings livestreamed from Louisville, Kentucky; Memphis, Tennessee; and Buffalo, New York — as well as carnage filmed abroad in Christchurch, New Zealand, and the German city of Halle.
Middletown Township Police Capt. Pete Feeney said the video in Pennsylvania was posted at about 10 p.m. Tuesday and remained online for about five hours, a time lag that raises questions about whether social media platforms are delivering on moderation practices that might be needed more than ever amid wars in Gaza and Ukraine, and an extremely contentious presidential election in the U.S.
“It’s another example of the blatant failure of these companies to protect us,” said Alix Fraser, director of the Council for Responsible Social Media at the nonprofit advocacy organization Issue One. “We can’t trust them to grade their own homework.”
A spokesperson for YouTube said the company removed the video, deleted Mohn’s channel and was tracking and removing any re-uploads that might pop up. The video-sharing site says it uses a combination of artificial intelligence and human moderators to monitor its platform but did not respond to questions about how the video was caught or why it wasn’t done sooner.
Major social media companies moderate content with the help of powerful automated systems, which can often catch prohibited content before a human can. But that technology can sometimes fall short when a video is violent and graphic in a way that is new or unusual, as it was in this case, said Brian Fishman, co-founder of the trust and safety technology startup Cinder.
That’s when human moderators are “really, really critical,” he said. “AI is improving, but it’s not there yet.”
The Global Internet Forum to Counter Terrorism, a group set up by tech companies to prevent these types of videos from spreading online, was in communication with all of its members about the incident on Tuesday evening, said Adelina Petit-Vouriot, a spokesperson for the organization.
Roughly 40 minutes after midnight Eastern time on Wednesday, GIFCT issued a “Content Incident Protocol,” which it activates to formally alert its members – and other stakeholders – about a violent event that’s been livestreamed or recorded. GIFCT allows the platform with the original footage to submit a “hash” — a digital fingerprint corresponding to a video — and notifies nearly two dozen other member companies so they can restrict it from their platforms.
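The hash-sharing step can be pictured with a short sketch. The example below is illustrative only and is not GIFCT’s actual system: real matching relies on robust perceptual hashes designed to survive re-encoding and cropping, whereas this minimal Python sketch uses an exact SHA-256 file digest, and every name in it is hypothetical.

```python
# Illustrative sketch of hash-based matching of a known violating video file.
# NOT GIFCT's real system: production tools use perceptual hashes that tolerate
# re-encoding; an exact SHA-256 digest only catches byte-identical copies.
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    """Return a hex digest serving as a 'digital fingerprint' of a video file."""
    return hashlib.sha256(video_bytes).hexdigest()

# The platform that finds the original footage submits its fingerprint to a shared list...
shared_hashes = {fingerprint(b"<bytes of the original upload>")}

# ...and member platforms check new uploads against that list to restrict re-uploads.
def matches_shared_hash(upload_bytes: bytes) -> bool:
    return fingerprint(upload_bytes) in shared_hashes

if __name__ == "__main__":
    print(matches_shared_hash(b"<bytes of the original upload>"))  # True: exact copy is caught
    print(matches_shared_hash(b"<bytes of a re-encoded copy>"))    # False: exact hashing misses altered copies
```

The limitation shown in the last line is why platforms that rely on such fingerprints still need perceptual hashing and human review to catch edited or re-encoded re-uploads.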
But by Wednesday morning, the video had already spread to X, where a graphic clip of Mohn holding his father’s head remained on the platform for at least seven hours and received 20,000 views. The company, formerly known as Twitter, did not respond to a request for comment.
Experts in radicalization say that social media and the internet have lowered the barrier to entry for people to explore extremist groups and ideologies, allowing any person who may be predisposed to violence to find a community that reinforces those ideas.
In the video posted after the killing, Mohn described his father as a 20-year federal employee, espoused a variety of conspiracy theories and ranted against the government.
Most social platforms have policies to remove violent and extremist content. But they can’t catch everything, and the emergence of many newer, less closely moderated sites has allowed more hateful ideas to fester unchecked, said Michael Jensen, senior researcher at the University of Maryland-based Consortium for the Study of Terrorism and Responses to Terrorism, or START.
Despite the obstacles, social media companies need to be more vigilant about regulating violent content, said Jacob Ware, a research fellow at the Council on Foreign Relations.
“The reality is that social media has become a front line in extremism and terrorism,” Ware said. “That’s going to require more serious and committed efforts to push back.”
Nora Benavidez, senior counsel at the media advocacy group Free Press, said among the tech reforms she would like to see are more transparency about what kinds of employees are being impacted by layoffs, and more investment in trust and safety workers.
Google, which owns YouTube, this month laid off hundreds of employees working on its hardware, voice assistance and engineering teams. Last year, the company said it cut 12,000 workers “across Alphabet, product areas, functions, levels and regions,” without offering additional detail.
…