Yahoo Halts Services in Mainland China

Yahoo said it stopped providing services in mainland China because of what it described as a difficult operating environment.

The U.S. web services provider said in a statement on its website the move took effect on November 1 “in recognition of the increasingly challenging business and legal environment.”

November 1 is the date on which China’s Personal Information Protection Law took effect. The law limits what information companies can compile and standardizes how it must be archived. Other content restrictions on internet companies also were recently imposed.

China previously blocked Facebook, Google and most other global social media sites and search engines. Users in China can still access these services by using a virtual private network (VPN). 

In October, Microsoft stopped providing its LinkedIn business and employment service in China, citing a “more challenging operating environment and greater compliance requirements in China.”

Some information for this report came from The Associated Press and Reuters.


China Hits Reset on Belt and Road Initiative

Green energy is the new focus of China’s one-of-a-kind Belt and Road Initiative, or BRI, which aims to build a series of infrastructure projects from Asia to Europe.

The eco-friendlier version of BRI has caught the attention of some 70 other countries that are getting new infrastructure from the Asian economic powerhouse in exchange for expanding trade.

The reset of China’s eight-year-old, $1.2 trillion effort comes after its projects left a nagging layer of smog in parts of Eurasia where they operate.

Now the country, already mindful of pollution at home, is preparing a new BRI that will focus on greener projects instead of pollution-generating coal-fired plants. It would still further China’s goal of widening trade routes in Eurasia through the initiative’s new ports, railways and power plants.

The Second Belt and Road, announced in China on October 18, coincides with the 2021 United Nations Climate Change Conference, or COP26, which runs from October 31 through November 12 in Glasgow, Scotland. China could use the forum to detail its plans.

“China’s policy shift towards a more green BRI reflects China’s own commitment to reach net zero carbon emissions by 2060 and its efforts to implement a green transition within China’s domestic economy,” said Rajiv Biswas, Asia-Pacific chief economist with the market research firm IHS Markit.

“Furthermore, China’s policy shift…also reflects the increasing policy priority being given towards renewable energy and sustainable development policies by most of China’s BRI partner countries,” he said.

The Belt and Road helps lift economies across a range of countries, from developing ones such as Kazakhstan to more modern ones such as Portugal. BRI also unnerves China’s superpower rival, the United States, which has no comparable program.

History of focusing on fossil fuels

China has put billions of dollars into fossil fuel projects in other countries since 2013, the American research group Council on Foreign Relations says in a March 2021 study.

From 2014 to 2017, it says, about 90% of energy-sector loans by major Chinese banks to BRI countries were for fossil fuel projects, and China was “involved in” 240 coal plants in 2016 alone. In 2018, the study adds, 40% of energy lending went to coal projects. Those investments, the group says, “promise to make climate change mitigation far more difficult.”

South and Southeast Asia are the main destinations for coal-fired projects at 80% of the total Belt and Road portfolio, the Beijing-based research center Global Environmental Institute says.

Global shift toward green energy

Chinese President Xi Jinping said last year China would try to peak its carbon dioxide emissions before 2030. The Second Belt and Road calls for working with partner countries on “energy transition” toward more wind, solar and biomass, the National Energy Administration and Shandong provincial government said in an October 18 statement. 

Some countries are pushing China to offer greener projects because of environmental pressure at home, though some foreign leaders prefer the faster, cheaper, more polluting options to prove achievements while in office, said Jonathan Hillman, economics program senior fellow at the Center for Strategic and International Studies research organization.

“There was a period in the first phase of the Belt and Road where projects were being shoveled out the door and with not enough attention to the quality of those projects,” he said.

Poorer countries are pressured now to balance providing people basic needs against environmental issues, said Song Seng Wun, an economist in the private banking unit of Malaysian bank CIMB. The basics still “take priority,” he said, and newer coal-fired plants help.

“Although I would say environmental issues (are) important, I think a lot of people don’t realize how much more efficient these more modern coal plants are, so I think we must have a balance,” Song said.

In the past few years, however, cancellation rates of coal-fired projects have exceeded new approvals, Hillman said. “The action honestly has come more from participating countries,” he said. “They’ve decided that’s not the direction they want to go.”

In February, Chinese officials told the Bangladesh Ministry of Finance they would no longer consider coal mining and coal-fired power stations. Greece, Kenya, Pakistan and Serbia have asked China to dial back on polluting projects, Hillman said.

“The next decade will show to what extent the Belt and Road will drive green infrastructure,” London-based policy institute Chatham House says in a September 2021 report.

Belt and Road renewable energy investments reached a new high in 2020, accounting for 57% of the initiative’s total energy investment that year, according to IHS data.

New pledges at COP26?

COP26 is expected to showcase the environmental achievements of participating countries as they try to meet U.N. Paris Climate Change commitments, Biswas said.

China’s statements ahead of the conference so far differ little from past statements. But China’s energy administration said on October 18 that its second Belt and Road “emphasizes the necessity of increased support for developing countries” in terms of money, technology and ability to carry out green energy projects.

Chinese companies on BRI projects may eventually be required to reduce environmental risks, Biswas said. Those companies would in turn follow principles released in 2018 to ensure that their projects generate less carbon. A year later, as international criticism grew, Chinese President Xi added a slate of Belt and Road mini-initiatives, including some that touched on green projects.

But the 2019 plans were nonbinding and lacked transparency, Hillman said. At COP26, he said, “I would take any big announcements with more than a grain of salt.”


US Lawmakers Vote to Tighten Restrictions on Huawei, ZTE

The U.S. Senate voted unanimously on Thursday to approve legislation to prevent companies that are deemed security threats, such as Huawei Technologies Co. Ltd. or ZTE Corp., from receiving new equipment licenses from U.S. regulators. 

The Secure Equipment Act, the latest effort by the U.S. government to crack down on Chinese telecom and tech companies, was approved last week by the U.S. House in a 420-4 vote and now goes to President Joe Biden for his signature. 

“Chinese state-directed companies like Huawei and ZTE are known national security threats and have no place in our telecommunications network,” Republican Senator Marco Rubio said. The measure would prohibit the Federal Communications Commission from reviewing or issuing new equipment licenses to companies on its “Covered Equipment or Services List.” 

In March, the FCC designated five Chinese companies as posing a threat to national security under a 2019 law aimed at protecting U.S. communications networks. 

The affected companies included the previously designated Huawei and ZTE, as well as Hytera Communications Corp., Hangzhou Hikvision Digital Technology Co., and Zhejiang Dahua Technology Co. 

The FCC in June had voted unanimously to advance a plan to ban approvals for equipment in U.S. telecommunications networks from those Chinese companies even as lawmakers pursued legislation to mandate it. 

The FCC vote in June drew opposition from Beijing. 

“The United States, without any evidence, still abuses national security and state power to suppress Chinese companies,” Zhao Lijian, a spokesperson at China’s Foreign Ministry, said in June. 

Under proposed rules that won initial approval in June, the FCC could also revoke prior equipment authorizations issued to Chinese companies. 

A spokesperson for Huawei, which has repeatedly denied it is controlled by the Chinese government, declined to comment Thursday but in June called the proposed FCC revision “misguided and unnecessarily punitive.” 

FCC Commissioner Brendan Carr said the commission has approved more than 3,000 applications from Huawei since 2018. Carr said Thursday the bill “will help to ensure that insecure gear from companies like Huawei and ZTE can no longer be inserted into America’s communications networks.” 

On Tuesday, the FCC voted to revoke the authorization for China Telecom’s U.S. subsidiary to operate in the United States, citing national security concerns. 

 


Facebook Inc. Rebrands as Meta to Stress ‘Metaverse’ Plan

Facebook CEO Mark Zuckerberg said his company is rebranding itself as Meta in an effort to encompass its virtual-reality vision for the future — what Zuckerberg calls the “metaverse.”

Skeptics point out that it also appears to be an attempt to change the subject from the Facebook Papers, a leaked document trove so dubbed by a consortium of news organizations that include The Associated Press. Many of these documents, first described by former Facebook employee-turned-whistleblower Frances Haugen, have revealed how Facebook ignored or downplayed internal warnings of the negative and often harmful consequences its social network algorithms created or magnified across the world.

“Facebook is the world’s social media platform and they are being accused of creating something that is harmful to people and society,” said marketing consultant Laura Ries. She compared the name Meta to when BP rebranded to “Beyond Petroleum” to escape criticism that it harmed the environment. “They can’t walk away from the social network with a new corporate name and talk of a future metaverse.”

What is the metaverse? Think of it as the internet brought to life, or at least rendered in 3D. Zuckerberg has described it as a “virtual environment” you can go inside of — instead of just looking at on a screen. Essentially, it’s a world of endless, interconnected virtual communities where people can meet, work and play, using virtual reality headsets, augmented reality glasses, smartphone apps or other devices.

It also will incorporate other aspects of online life such as shopping and social media, according to Victoria Petrock, an analyst who follows emerging technologies.

Zuckerberg says he expects the metaverse to reach a billion people within the next decade. It will be a place people will be able to interact, work and create products and content in what he hopes will be a new ecosystem that creates millions of jobs for creators.

The announcement comes amid an existential crisis for Facebook. It faces heightened legislative and regulatory scrutiny in many parts of the world following revelations in the Facebook Papers.

In explaining the rebrand, Zuckerberg said the name “Facebook” just doesn’t encompass everything the company does anymore. In addition to its primary social network, that now includes Instagram, Messenger, its Quest VR headset, its Horizon VR platform and more.

“Today we are seen as a social media company,” Zuckerberg said. “But in our DNA, we are a company that builds technology to connect people.”

Facebook the app, along with Instagram, WhatsApp and Messenger, are here to stay; the company’s corporate structure also won’t change. But on December 1, its shares will start trading under a new ticker symbol, “MVRS.”

The metaverse, he said, is the new way. Zuckerberg, a fan of the classics, explained that the word “meta” comes from the Greek word meaning “beyond.”

A corporate rebranding won’t solve the myriad problems at Facebook revealed by thousands of internal documents in recent weeks. It probably won’t even get people to stop calling the social media giant Facebook — or a “social media giant,” for that matter.

But that isn’t stopping Zuckerberg, seemingly eager to move on to his next big thing as crisis after crisis emerges at the company he created.

Just as smartphones replaced desktop computers, Zuckerberg is betting that the metaverse will be the next way people will interact with computers — and each other. If Instagram and messaging were Facebook’s forays into the mobile evolution, Meta is its bet on the metaverse.


US State Department Creates Bureau to Tackle Digital Threats

The State Department is creating a new Bureau of Cyberspace and Digital Policy to focus on tackling cybersecurity challenges at a time of growing threats from opponents. There will also be a new special envoy for critical and emerging technology, who will lead the technology diplomacy agenda with U.S. allies.

On Wednesday, Secretary of State Antony Blinken said the organizational changes underscore the need for a robust approach for dealing with cyber threats. 

“We want to make sure technology works for democracy, fighting back against disinformation, standing up for internet freedom, and reducing the misuse of surveillance technology,” Blinken said in a speech on modernizing American diplomacy. 

Blinken said the new bureau will be led by an ambassador-at-large. The chief U.S. diplomat is also seeking a 50% increase in the State Department’s information technology budget.

The announcement comes as hackers backed by foreign governments, such as Russia and China, continue to attack U.S. infrastructure and global technology systems to steal sensitive information.

Earlier this year, the Office of the Director of National Intelligence said that more countries are relying on cyber operations to steal information, influence populations and damage industry, but the U.S. is most concerned about Russia, China, Iran and North Korea.

The U.S. technology giant Microsoft said on Monday that the same Russia-backed hackers responsible for the 2020 SolarWinds breach of corporate computer systems are continuing to attack global technology systems, this time targeting cloud service resellers.

A senior State Department official told reporters on Wednesday that Washington has been clear with Moscow that cyber criminals targeting the U.S. is “not acceptable.” The United States has asked the Russian government to “take action against that type of criminal behavior.” 

Confronting cyberattacks continues to be “a high priority” in U.S. relations with Russia, the senior official said.

China is also considered one of the United States’ main cyber adversaries, with coordinated teams inside and outside the government conducting large-scale, indiscriminate cyberespionage campaigns, according to analysts.

Over the past year, experts have attributed notable hacks in the U.S., Europe and Asia to China’s Ministry of State Security, the nation’s civilian intelligence agency, which has taken the lead in Beijing’s cyberespionage, consolidating efforts by the People’s Liberation Army. 

In addition to expanding the State Department’s capacity on cybersecurity, Blinken also unveiled other steps to modernize American diplomacy, including the launch of a new “policy ideas channel” that allows American diplomats to share their policy ideas directly with senior leadership, an effort to build and retain a diverse workforce, and a plan to “reinvigorate the in-person diplomacy and public engagement.”

The organizational changes, which add resources and staff to tackle international cybersecurity challenges, came after the State Department completed an extensive review of cyberspace and emerging technology.


Five Things Facebook Has to Worry About After Whistleblower Disclosures

The past several weeks have been difficult for the social media behemoth Facebook, with a series of whistleblower revelations demonstrating that the company knew its signature platform was exacerbating all manner of social ills around the globe, from human trafficking to sectarian violence.   

The tide shows no sign of receding. New revelations this week have demonstrated that the company’s supposed commitment to freedom of expression takes a back seat to its bottom line when repressive governments, like Vietnam’s, demand that dissent be silenced. They showed that Facebook knew its algorithms were steering users toward extreme content, such as QAnon conspiracy theories and phony anti-vaccine claims, but took few steps to remedy the problem.

 

In statements to various media outlets, the company has defended itself, saying it dedicates enormous resources to assuring safety on its platform and asserting that much of the information provided to journalists and government officials has been taken out of context.   

In a conference call to discuss the company’s quarterly earnings on Monday, Facebook CEO Mark Zuckerberg claimed that recent media coverage is painting a misleading picture of his company.   

“Good faith criticism helps us get better,” Zuckerberg said. “But my view is that what we are seeing is a coordinated effort to selectively use leaked documents to paint a false picture of our company. The reality is that we have an open culture, where we encourage discussion and research about our work so we can make progress on many complex issues that are not specific to just us.”   

The revelations, as well as unrelated business challenges, mean that Facebook, which also owns Instagram and the messaging service WhatsApp, has a lot of things to worry about in the coming weeks and months. Here are five of the biggest. 

A potential SEC investigation 

Whistleblower Frances Haugen, a former product manager with the company, delivered thousands of pages of documents to lawmakers and journalists last month, prompting the wave of stories about the company’s practices. But the documents also went to the Securities and Exchange Commission, raising the possibility of a federal investigation of the company. 

Haugen claims the documents provide evidence that the company withheld information that might have affected investors’ decisions about purchasing Facebook’s stock. Among other things, she says that the documents show that Facebook knew that its number of actual users — a key measurement of its ability to deliver the advertising it depends on for its profits — was lower than it was reporting.   

 

The SEC has not indicated whether or not it will pursue an investigation into the company, and a securities fraud charge would be difficult to prove, requiring evidence that executives actively and knowingly misled investors. But even an investigation could be harmful to the company’s already bruised corporate image. 

In a statement provided to various media, a company spokesperson said, “We make extensive disclosures in our S.E.C. filings about the challenges we face, including user engagement, estimating duplicate and false accounts, and keeping our platform safe from people who want to use it to harm others . . . All of these issues are known and debated extensively in the industry, among academics and in the media. We are confident that our disclosures give investors the information they need to make informed decisions.”   

Antitrust suit 

Facebook is already being sued by the Federal Trade Commission (FTC), which claims that between the company’s main site, Instagram, and WhatsApp, Facebook exercises monopoly power in the social media market. The agency is demanding that the three platforms be split up.   

Facebook has publicly claimed it does not have monopoly power, but internal documents made available by Haugen demonstrate that the company knows it is overwhelmingly dominant in some areas, potentially handing the FTC additional ammunition as it attempts to persuade a federal judge to break up the company.  

Legislative action 

Congress doesn’t agree on much these days, but Haugen’s testimony in a hearing last month sparked bipartisan anger at Facebook and Instagram, especially over revelations that the latter has long been aware that its platform is harmful to the mental health of many teenage users, particularly young girls. 

Several pieces of legislation have since been introduced, including a proposal to create an “app ratings board” that would set age and content ratings for applications on internet-enabled devices.  

  

Others seek to make social media companies like Facebook liable for harm done by false information circulating on the platform, or to force the company to offer stronger privacy protections and to give users the right to control the spread of content about themselves. 

Ramya Krishnan, a staff attorney at the Knight First Amendment Institute and a lecturer in law at Columbia Law School, is one of many academics who have been pushing for lawmakers to require Facebook and other social media platforms to allow researchers and journalists better access to data about their audiences and their engagement.   

“We’ve seen increased interest among lawmakers and regulators in expanding the space for research and journalism focused on the platform, reflecting the understanding that in order to effectively regulate the platforms we need to better understand the effect that they are having on society and democracy,” she told VOA.

 

Internal dissent 

One of the most striking things about the documents released this week is the amount of anger inside Facebook over the company’s public image. The disclosures include reams of internal messages and other communications in which Facebook employees complain about the company’s unwillingness to police content on the site.   

“I’m struggling to match my values to my employment here,” one employee wrote in response to the assault on the U.S. Capitol on January 6, which was partly organized on Facebook. “I came here hoping to effect change and improve society, but all I’ve seen is atrophy and abdication of responsibility.”  

The documents show that the company is losing employees — particularly those charged with combating hate speech and misinformation — because they don’t believe their efforts have the support of management. 

Advertiser boycott 

Last year the Anti-Defamation League organized a campaign to pressure companies to “pause” their advertising on Facebook in protest over its failure to eliminate hateful rhetoric on the platform. In a statement given to VOA, Jonathan A. Greenblatt, the group’s CEO, said it is preparing to do so again. 

“Mark Zuckerberg would have you believe that Facebook is doing all it can to address the amplification of hate and disinformation,” Greenblatt said. “Now we know the truth: He was aware it was happening and chose to ignore internal researchers’ recommendations and did nothing about it. So we will do something about it, because literally, lives have been lost and people are being silenced and killed as a direct result of Facebook’s negligence.”   

“We are in talks to decide what the best course of action is to bring about real change at Facebook, whether it’s with policymakers, responsible shareholders, or advertisers,” he said. “But make no mistake: We’ve successfully taken on Facebook’s hate and misinformation machine before, and we aren’t afraid to do it again. It’s time to rein in this rogue company and its harmful products.”


International Police Operation Cracks Down on Illegal Internet Drug Vendors

U.S. federal law enforcement agencies and Europol announced dozens of arrests to break up a global operation that sold illegal drugs using a shadowy realm of the internet. 

At a Department of Justice news conference Tuesday in Washington, officials said they arrested 150 people for allegedly selling illicit drugs, including fake prescription opioids and cocaine, over the so-called darknet. Those charged are alleged to have carried out tens of thousands of illegal sales using a part of the internet that is accessible only by using specialized anonymity tools. 

The 10-month dragnet called “Operation HunTor” — named after encrypted internet tools — resulted in the seizure of 234 kilograms of drugs, including amphetamines, cocaine and opioids worth more than $31 million. Officials said many of the confiscated drugs were fake prescription pills laced with the powerful synthetic opioid fentanyl. The counterfeit tablets are linked to a wave of drug overdoses.

“This international law enforcement operation spanned across three continents and sends one clear message to those hiding on the darknet peddling illegal drugs: there is no dark internet,” said U.S. Deputy Attorney General Lisa Monaco. 

Investigators rounded up and arrested 65 people in the United States. Other arrests occurred in Australia, Bulgaria, France, Germany, Italy, the Netherlands, Switzerland, and the United Kingdom. In addition to counterfeit medicine, authorities also confiscated more than 200,000 ecstasy, fentanyl, oxycodone, hydrocodone, and methamphetamine pills. 

“We face new and increasingly dangerous threats as drug traffickers expand into the digital world and use the darknet to sell dangerous drugs like fentanyl and methamphetamine,” said Anne Milgram, administrator of the Drug Enforcement Administration (DEA). “We cannot stress enough the danger of these substances.” 

The international police agency Europol worked alongside the U.S. Justice Department’s Joint Criminal Opioid and Darknet Enforcement team.

 

“No one is beyond the reach of the law, even on the dark web,” said Jean-Philippe Lecouffe, Europol’s deputy executive director.

 

The dark web is preferred by criminal networks that want to keep their internet activities private and anonymous. In this case, it served as a platform for illegal cyber sales of counterfeit medication and other drugs that were delivered by private shipping companies.

Investigators said the fake drugs are primarily made in laboratories in Mexico using chemicals imported from China. Prosecutors also targeted drug dealers who operated home labs to manufacture fake prescription pain pills. 

“Those purchasing drugs through the darknet often don’t know what they’re getting,” Associate Deputy FBI Director Paul Abbate said. The joint investigation followed enforcement efforts in January in which authorities shut down “DarkMarket,” the world’s largest illegal international marketplace on the dark web. 

Last month, the DEA warned Americans that international and domestic drug dealers were flooding the country with fake pills, driving the U.S. overdose crisis. The agency confiscated more than 9.5 million potentially lethal pills in the last year.

More than 93,000 Americans died from drug overdoses in 2020, the highest number on record, according to the U.S. Centers for Disease Control and Prevention. U.S. health officials attribute the rise to the use of fentanyl, which can be 100 times more potent than morphine.

U.S. officials said investigations are continuing and more arrests are expected.

 


Artificial Intelligence-Powered App Helps Musicians Learn to Play

A popular new music app uses artificial intelligence to “democratize” how musicians of all skill levels learn and play music. VOA’s Julie Taboh has more.


Rental Car Company Hertz Announces Purchase of 100,000 Teslas 

Car rental company Hertz says it will buy 100,000 electric cars from Tesla. 

Hertz interim CEO Mark Fields said the Model 3 cars could be ready for renters as early as November, The Associated Press reported. 

Fields said the reason for the move was that electric cars are becoming mainstream, and consumer interest in them is growing.

“More are willing to try and buy,” he told AP. “It’s pretty stunning.” 

All of the cars should be available by the end of 2022, the company said. When all are delivered, they will make up 20% of the company’s fleet.

Hertz, which emerged from bankruptcy in June, did not disclose the cost of the order, but it could be valued at as much as $4 billion, according to some news reports. 

The company said it plans to build its own charging station network, with 3,000 in 65 locations by the end of 2022 and 4,000 by the end of 2023. Renters will also have access to Tesla’s charging network for a fee. 

Tesla stock jumped as much as 12% on the news.

Some information in this report came from The Associated Press. 

 

 


In Face of Hack Attacks, US State Department to Set Up Cyber Bureau

The U.S. State Department plans to establish a bureau of cyberspace and digital policy in the face of a growing hacking problem, specifically a surge of ransomware attacks on U.S. infrastructure. 

State Department spokesperson Ned Price said a Senate-confirmed ambassador at large will lead the bureau. 

Hackers have struck numerous U.S. companies this year. 

One such attack on pipeline operator Colonial Pipeline led to temporary fuel supply shortages on the U.S. East Coast. Hackers also targeted an Iowa-based agricultural company, sparking fears of disruptions to Midwest grain harvesting. 

Two weeks ago, the Treasury Department said suspected ransomware payments totaling $590 million were made in the first six months of this year. It put the cryptocurrency industry on alert about its role fighting ransomware attacks. 

 


Facebook Whistleblower Presses Case with British Lawmakers 

Facebook whistleblower Frances Haugen told British lawmakers Monday that the social media giant “unquestionably” amplifies online hate. 

In testimony to a parliamentary committee in London, the former Facebook employee echoed what she told U.S. senators earlier this month.

Haugen said the media giant fuels online hate and extremism and does not have any incentive to change its algorithm to promote less divisive content.

She argued that as a result, Facebook may end up sparking more violent unrest around the world.

Haugen said the algorithm Facebook has designed to promote more engagement among users “prioritizes and amplifies divisive and polarizing extreme content” as well as concentrates it. 

Facebook did not respond to Haugen’s testimony Monday. Earlier this month, Haugen addressed a Senate committee and said the company is harmful. Facebook rejected her accusations. 

“The argument that we deliberately push content that makes people angry for profit is deeply illogical,” said Facebook CEO Mark Zuckerberg. 

Haugen’s testimony comes as a coalition of news organizations Monday began publishing stories on Facebook’s practices based on internal company documents that Haugen secretly copied and made public.

Haugen is a former Facebook product manager who has turned whistleblower. 

Earlier this month when Haugen addressed U.S. lawmakers, she argued that a federal regulator was needed to oversee large internet companies like Facebook. 

British lawmakers are considering creating such a national regulator as part of a proposed online safety bill. The legislation also proposes fining companies like Facebook up to 10% of their global revenue for any violations of government policies. 

Representatives from Facebook and other social media companies are set to address British lawmakers on Thursday. 

Haugen is scheduled to meet with European Union policymakers in Brussels next month.

Some information in this report came from the Associated Press and Reuters. 

 


Microsoft Discloses New Russian Hacking Effort

The U.S. technology giant Microsoft says that the same Russia-backed hackers responsible for the 2020 SolarWinds breach of corporate computer systems are continuing to attack global technology systems, this time targeting cloud service resellers.

Microsoft said the group, which it calls Nobelium, is employing a new strategy to take advantage of the direct access resellers have to their customers’ IT systems, hoping to “more easily impersonate an organization’s trusted technology partner to gain access to their downstream customers.”

Resellers are intermediaries between software and hardware producers and the eventual technology product users.

In a statement Sunday, Microsoft said it has been monitoring Nobelium’s attacks since May and has notified more than 140 companies targeted by the group, with as many as 14 of the companies’ systems believed to have been compromised.

“This recent activity is another indicator that Russia is trying to gain long-term, systematic access to a variety of points in the technology supply chain and establish a mechanism for surveilling — now or in the future — targets of interest to the Russian government,” Microsoft wrote in a blog post.

“Fortunately, we have discovered this campaign during its early stages, and we are sharing these developments to help cloud service resellers, technology providers, and their customers take timely steps to help ensure Nobelium is not more successful,” the company said.

Charles Carmakal, senior vice president and chief technology officer at cybersecurity firm Mandiant, said this attack was different from the SolarWinds attack that used malicious code inserted into legitimate software, saying this involves “leveraging stolen identities” to access systems.

“This attack path makes it very difficult for victim organizations to discover they were compromised and investigate the actions taken by the threat actor,” he said. “This is particularly effective for the threat actor for two reasons: First, it shifts the initial intrusion away from the ultimate targets, which in some situations are organizations with more mature cyber defenses, to smaller technology partners with less mature cyber defenses.

“And second, investigating these intrusions requires collaboration and information-sharing across multiple victim organizations, which is challenging due to privacy concerns and organizational sensitivities,” Carmakal said.

When asked about the attack, White House Principal Deputy Press Secretary Karine Jean-Pierre said Monday companies “can prevent these attempts if the cloud service providers implement baseline cybersecurity practices, including multifactor authentication.”

“Broadly speaking, the federal government is aggressively using our authorities to protect the nation from cyber threats, including helping the private sector defend itself through increased intelligence sharing, innovative partnership to deploy cybersecurity technologies, bilateral and multilateral diplomacy, and measures we do not speak about publicly for national security reasons,” she told reporters aboard Air Force One en route to New Jersey.

Microsoft said Nobelium had made 22,868 attacks since July but had been successful only a handful of times. Most of the attacks targeted government agencies and think tanks in the United States, followed by attacks in Ukraine, the United Kingdom and other NATO countries.

A U.S. government official downplayed the attacks in a statement to The Associated Press, saying, “The activities described were unsophisticated password spray and phishing, run-of-the-mill operations for the purpose of surveillance that we already know are attempted every day by Russia and other foreign governments.”

Washington blamed Russia’s SVR foreign intelligence agency for the 2020 SolarWinds hack, which compromised several federal agencies and went undetected for much of last year. Russia has denied any wrongdoing.

Some information for this report comes from AP and Reuters.


Whistleblower Haugen to Testify as UK Scrutinizes Facebook

Former Facebook data scientist turned whistleblower Frances Haugen plans to answer questions Monday from lawmakers in the United Kingdom who are working on legislation to rein in the power of social media companies. 

Haugen is set to appear before a parliamentary committee scrutinizing the British government’s draft legislation to crack down on harmful online content, and her comments could help lawmakers beef up the new rules. She’s testifying the same day that Facebook is set to release its latest earnings and that The Associated Press and other news organizations started publishing stories based on thousands of pages of internal company documents she obtained. 

It will be her second appearance before lawmakers after she testified in the U.S. Senate earlier this month about the danger she says the company poses, from harming children to inciting political violence and fueling misinformation. Haugen cited internal research documents she secretly copied before leaving her job in Facebook’s civic integrity unit. 

The documents, which Haugen provided to the U.S. Securities and Exchange Commission, allege Facebook prioritized profits over safety and hid its own research from investors and the public. Some stories based on the files have already been published, exposing internal turmoil after Facebook was blindsided by the Jan. 6 U.S. Capitol riot and how it dithered over curbing divisive content in India, and more is to come. 

Facebook CEO Mark Zuckerberg has disputed Haugen’s portrayal of the company as one that puts profit over the well-being of its users or that pushes divisive content, saying a false picture is being painted. But he does agree on the need for updated internet regulations, saying lawmakers are best able to assess the tradeoffs.

Haugen told U.S. lawmakers that she thinks a federal regulator is needed to oversee digital giants like Facebook, something that officials in Britain and the European Union are already working on. 

The U.K. government’s online safety bill calls for setting up a regulator that would hold companies to account when it comes to removing harmful or illegal content from their platforms, such as terrorist material or child sex abuse images. 

“This is quite a big moment,” Damian Collins, the lawmaker who chairs the committee, said ahead of the hearing. “This is a moment, sort of like Cambridge Analytica, but possibly bigger in that I think it provides a real window into the soul of these companies.” 

Collins was referring to the 2018 debacle involving data-mining firm Cambridge Analytica, which gathered details on as many as 87 million Facebook users without their permission.

Representatives from Facebook and other social media companies plan to speak to the committee Thursday. 

Ahead of the hearing, Haugen met the father of Molly Russell, a 14-year-old girl who killed herself in 2017 after viewing disturbing content on Facebook-owned Instagram. In a chat filmed by the BBC, Ian Russell told Haugen that after Molly’s death, her family found notes she wrote about being addicted to Instagram.

Haugen also is scheduled to meet next month with European Union officials in Brussels, where the bloc’s executive commission is updating its digital rulebook to better protect internet users by holding online companies more responsible for illegal or dangerous content. 

Under the U.K. rules, expected to take effect next year, Silicon Valley giants face an ultimate penalty of up to 10% of their global revenue for any violations. The EU is proposing a similar penalty. 

The U.K. committee will be hoping to hear more from Haugen about the data that tech companies have gathered. Collins said the internal files that Haugen has turned over to U.S. authorities are important because they show the kind of information that Facebook holds — and what regulators should be asking when they investigate these companies.

The committee has already heard from another Facebook whistleblower, Sophie Zhang, who raised the alarm after finding evidence of online political manipulation in countries such as Honduras and Azerbaijan before she was fired.


Facebook’s Language Gaps Weaken Screening of Hate, Terrorism

In Gaza and Syria, journalists and activists feel Facebook censors their speech, flagging inoffensive Arabic posts as terrorist content. In India and Myanmar, political groups use Facebook to incite violence. All of it frequently slips through the company’s efforts to police its social media platforms because of a shortage of moderators who speak local languages and understand cultural contexts.

Internal company documents from the former Facebook product manager-turned-whistleblower Frances Haugen show the problems plaguing the company’s content moderation are systemic, and that Facebook has understood the depth of these failings for years while doing little about it.

Its platforms have failed to develop artificial-intelligence solutions that can catch harmful content in different languages. As a result, terrorist content and hate speech proliferate in some of the world’s most volatile regions. Elsewhere, the company’s language gaps lead to overzealous policing of everyday expression.

This story, along with others published Monday, is based on former Facebook product manager-turned-whistleblower Frances Haugen’s disclosures to the Securities and Exchange Commission, which were also provided to Congress in redacted form by her legal team. The redacted versions received by Congress were obtained by a consortium of news organizations, including The Associated Press.

In a statement to the AP, a Facebook spokesperson said that over the last two years the company has invested in recruiting more staff with local dialect and topic expertise to bolster its review capacity globally.

When it comes to Arabic content moderation, in particular, the company said, “We still have more work to do.”

But the documents show the problems are not limited to Arabic. In Myanmar, where Facebook-based misinformation has been linked repeatedly to ethnic violence, the company’s internal reports show it failed to stop the spread of hate speech targeting the minority Rohingya Muslim population.

In India, the documents show moderators never flagged anti-Muslim hate speech broadcast by Prime Minister Narendra Modi’s far-right Hindu nationalist group because Facebook lacked moderators and automated filters with knowledge of Hindi and Bengali.

Arabic, Facebook’s third-most common language, does pose particular challenges to the company’s automated systems and human moderators, each of which struggles to understand spoken dialects unique to each country and region, their vocabularies salted with different historical influences and cultural contexts. The platform won a vast following across the region amid the 2011 Arab Spring, but its reputation as a forum for free expression in a region full of autocratic governments has since changed.

Scores of Palestinian journalists have had their accounts deleted. Archives of the Syrian civil war have disappeared. During the 11-day Gaza war last May, Facebook’s Instagram app briefly banned the hashtag #AlAqsa, a reference to the Al-Aqsa Mosque in Jerusalem’s Old City, a flashpoint of the conflict. The company later apologized, saying it confused Islam’s third-holiest site for a terrorist group.

Criticism, satire and even simple mentions of groups on the company’s Dangerous Individuals and Organizations list — a docket modeled on the U.S. government equivalent — are grounds for a takedown.

“We were incorrectly enforcing counterterrorism content in Arabic,” one document reads, noting the system “limits users from participating in political speech, impeding their right to freedom of expression.”

The Facebook blacklist includes Gaza’s ruling Hamas party, as well as Hezbollah, the militant group that holds seats in Lebanon’s Parliament, along with many other groups representing wide swaths of people and territory across the Middle East.

The company’s language gaps and biases have led to the widespread perception that its reviewers skew in favor of governments and against minority groups. 

Israeli security agencies and watchdogs also monitor Facebook and bombard it with thousands of orders to take down Palestinian accounts and posts as they try to crack down on incitement.

“They flood our system, completely overpowering it,” said Ashraf Zeitoon, Facebook’s former head of policy for the Middle East and North Africa region, who left in 2017.

Syrian journalists and activists reporting on the country’s opposition also have complained of censorship, with electronic armies supporting embattled President Bashar Assad aggressively flagging dissident posts for removal. 

Meanwhile in Afghanistan, Facebook does not translate the site’s hate speech and misinformation pages into Dari and Pashto, the country’s two main languages. The site also doesn’t have a bank of hate speech terms and slurs in Afghanistan, so it can’t build automated filters that catch the worst violations.

In the Philippines, homeland of many domestic workers in the Middle East, Facebook documents show that engineers struggled to detect reports of abuse by employers because the company couldn’t flag words in Tagalog, the major Philippine language.

In the Middle East, the company over-relies on artificial-intelligence filters that make mistakes, leading to “a lot of false positives and a media backlash,” one document reads. Largely unskilled moderators, in over their heads and at times relying on Google Translate, tend to passively field takedown requests instead of screening proactively. Most are Moroccans and get lost in the translation of Arabic’s 30-odd dialects.

The moderators flag inoffensive Arabic posts as terrorist content 77% of the time, one report said.

Although the documents from Haugen predate this year’s Gaza war, episodes from that bloody conflict show how little has been done to address the problems flagged in Facebook’s own internal reports.

Activists in Gaza and the West Bank lost their ability to livestream. Whole archives of the conflict vanished from newsfeeds, a primary portal of information. Influencers accustomed to tens of thousands of likes on their posts saw their outreach plummet when they posted about Palestinians.

“This has restrained me and prevented me from feeling free to publish what I want,” said Soliman Hijjy, a Gaza-based journalist.

Palestinian advocates submitted hundreds of complaints to Facebook during the war, often leading the company to concede error. In the internal documents, Facebook reported it had erred in nearly half of all Arabic language takedown requests submitted for appeal.

Facebook’s internal documents also stressed the need to enlist more Arab moderators from less-represented countries and restrict them to where they have appropriate dialect expertise.

“It is surely of the highest importance to put more resources to the task to improving Arabic systems,” said the report.

Meanwhile, many across the Middle East worry the stakes of Facebook’s failings are exceptionally high, with potential to widen long-standing inequality, chill civic activism and stoke violence in the region.

“We told Facebook: Do you want people to convey their experiences on social platforms, or do you want to shut them down?” said Husam Zomlot, the Palestinian envoy to the United Kingdom. “If you take away people’s voices, the alternatives will be uglier.” 


Italian Lab Creates Extreme Weather; Could Predict Climate Change Effects

Researchers at a specialized lab in Italy say understanding climate change effects requires recreating them in a controlled environment. So, they built one. VOA’s Arash Arabasadi has more.


Zoom Gets More Popular Despite Worries About Links to China

Very few companies can boast of having their name also used as a verb. Zoom is one of them. The popularity of the videoconferencing platform continues to grow around the world despite continued questions about whether Chinese authorities are monitoring the calls.

Since Zoom became a household word last year during the pandemic, internet users including companies and government agencies have asked whether the app’s data centers and staff in China are passing call logs to Chinese authorities.

“Some of the more informed know about that, but the vast majority, they don’t know about that, or even if they do, they really don’t give much thought about it,” said Jack Nguyen, partner at the business advisory firm Mazars in Ho Chi Minh City.

He said in Vietnam, for example, many people resent China over territorial spats, but Vietnamese tend to use Zoom as willingly as they sign on to rivals such as Microsoft Teams. They like Zoom’s free 40 minutes per call, said Nguyen.

Whether to use the Silicon Valley-headquartered Zoom, now as before, comes down to a user-by-user calculation of the service’s benefits versus the possibility that call logs are being viewed in China, analysts say. China hopes to identify and stop internet content that flouts Communist Party interests.

The 10-year-old listed company, officially named Zoom Video Communications, reported over $1 billion in revenue in the April-June quarter this year, up 54% over the same quarter of 2020, when the COVID-19 pandemic drove face-to-face meetings online. In the same quarter, the most recent one detailed by the company, Zoom had 504,900 customers with more than 10 employees, up about 36% year on year.

Zoom commanded a 42.8% U.S. market share, leading competitors, as of May 2020, the news website LearnBonds reported. Its U.S. share was up to 55% by March this year, according to ToolTester Network data.

Tech media cite Zoom’s free 40 minutes and capacity for up to 100 call participants as major reasons for its popularity.

Links to China?

Keys that Zoom uses to encrypt and decrypt meetings may be sent to servers in China, Wired Business Media’s website Security Week has reported. Some encryption keys were issued by servers in China, news website WCCF Tech said.

Zoom did not answer VOA’s requests this month for comment.

Zoom has acknowledged keeping at least one data center and a staff employee in China, where the communist government requires resident tech firms to provide user data on request. In September 2019, the Chinese government turned off Zoom in China, and in April last year Zoom said international calls were routed in error through a China-based data center.

“Odds are high” of China getting records of Zoom calls, said Jacob Helberg, a senior adviser at the Stanford University Center on Geopolitics and Technology.

“If you have Zoom engineers in China who have access to the actual servers, from an engineering standpoint those engineers can absolutely have access to content of potential communications in China,” he said.

Zoom said in a statement in early April 2020 that certain meetings held by its non-Chinese users might have been “allowed to connect to systems in China, where they should not have been able to connect,” SmarterAnalyst.com reported.

Excitement and caution

Zoom said in 2019 it had put in place “strict geo-fencing procedures around our mainland China data center.”

“No meeting content will ever be routed through our mainland China data center unless the meeting includes a participant from China,” it said in a blog post.

Among the bigger users of Zoom is the University of California, a 10-campus system that switched to online learning in early 2020. Zoom was selected following a request for proposals “years” before the pandemic, a UC-Berkeley spokesperson told VOA on Thursday.

Elsewhere in the United States, NASA has banned employees from using Zoom, and the Senate has urged its members to avoid it because of security concerns. The German Foreign Ministry and Australian Defense Force restrict use as well, while Taiwan barred Zoom for government business last year. China claims sovereignty over self-ruled Taiwan, which has caused decades of political hostility.

“For Taiwan, there’s still some doubt,” said Brady Wang, a Taipei analyst with the market intelligence firm Counterpoint Research, referring particularly to Zoom’s encryption software. “And in the final analysis, these kinds of choices are numerous, so it’s not like you must rely on Zoom.”

LinkedIn’s withdrawal from China, announced this month, may spark new scrutiny of Zoom, said Zennon Kapron, founder and director of Kapronasia, a Shanghai financial industry research firm.

“I think when you look at the other technology players that are currently in China or that have relations to China such as Zoom, there will be a renewed push probably by consumers, businesses and even regulators in some jurisdictions to really try to understand and pry apart what the roles of Chinese suppliers or development houses are in developing some of these platforms and the potential security risks that go with them,” Kapron said.


Facebook Dithered in Curbing Divisive User Content in India

Facebook in India has been selective in curbing hate speech, misinformation and inflammatory posts, particularly anti-Muslim content, according to leaked documents obtained by The Associated Press, even as its own employees cast doubt over the company’s motivations and interests.

From research as recent as March of this year to company memos that date back to 2019, the internal company documents on India highlight Facebook’s constant struggles in quashing abusive content on its platforms in the world’s biggest democracy and the company’s largest growth market. Communal and religious tensions in India have a history of boiling over on social media and stoking violence.

The files show that Facebook has been aware of the problems for years, raising questions over whether it has done enough to address these issues. Many critics and digital experts say it has failed to do so, especially in cases where members of Prime Minister Narendra Modi’s ruling Bharatiya Janata Party, the BJP, are involved.

Modi has been credited for leveraging the platform to his party’s advantage during elections, and reporting from The Wall Street Journal last year cast doubt over whether Facebook was selectively enforcing its policies on hate speech to avoid blowback from the BJP. Both Modi and Facebook chairman and CEO Mark Zuckerberg have exuded bonhomie, memorialized by a 2015 image of the two hugging at Facebook headquarters.

According to the documents, Facebook saw India as one of the most “at risk countries” in the world and identified both Hindi and Bengali languages as priorities for “automation on violating hostile speech.” Yet, Facebook didn’t have enough local language moderators or content-flagging in place to stop misinformation that at times led to real-world violence.

In a statement to the AP, Facebook said it has “invested significantly in technology to find hate speech in various languages, including Hindi and Bengali” which has “reduced the amount of hate speech that people see by half” in 2021. 

“Hate speech against marginalized groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online,” a company spokesperson said. 

This AP story, along with others being published, is based on disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by former Facebook employee-turned-whistleblower Frances Haugen’s legal counsel. The redacted versions were obtained by a consortium of news organizations, including the AP.

In February 2019 and ahead of a general election when concerns about misinformation were running high, a Facebook employee wanted to understand what a new user in the country saw on their news feed if all they did was follow pages and groups solely recommended by the platform.

The employee created a test user account and kept it live for three weeks, during which an extraordinary event shook India — a militant attack in disputed Kashmir killed more than 40 Indian soldiers, bringing the country to near war with rival Pakistan.

In a report, titled “An Indian Test User’s Descent into a Sea of Polarizing, Nationalistic Messages,” the employee, whose name is redacted, said they were shocked by the content flooding the news feed, which “has become a near constant barrage of polarizing nationalist content, misinformation, and violence and gore.”

Seemingly benign and innocuous groups recommended by Facebook quickly morphed into something else altogether, where hate speech, unverified rumors and viral content ran rampant.

The recommended groups were inundated with fake news, anti-Pakistan rhetoric and Islamophobic content. Much of the content was extremely graphic.

“Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the researcher wrote.

The Facebook spokesperson said the test study “inspired deeper, more rigorous analysis” of its recommendation systems and “contributed to product changes to improve them.”

“Separately, our work on curbing hate speech continues and we have further strengthened our hate classifiers, to include four Indian languages,” the spokesperson said.

Other research files on misinformation in India highlight just how massive a problem it is for the platform.

In January 2019, a month before the test user experiment, another assessment raised similar alarms about misleading content. 

In a presentation circulated to employees, the findings concluded that Facebook’s misinformation tags weren’t clear enough for users, underscoring that it needed to do more to stem hate speech and fake news. Users told researchers that “clearly labeling information would make their lives easier.”

Alongside misinformation, the leaked documents reveal another problem dogging Facebook in India: anti-Muslim propaganda, especially by Hindu-hardline groups.

India is Facebook’s largest market with over 340 million users — nearly 400 million Indians also use the company’s messaging service WhatsApp. But both have been accused of being vehicles to spread hate speech and fake news against minorities.

In February 2020, these tensions came to life on Facebook when a politician from Modi’s party uploaded a video on the platform in which he called on his supporters to remove mostly Muslim protesters from a road in New Delhi if the police didn’t. Violent riots erupted within hours, killing 53 people. Most of them were Muslims. Only after thousands of views and shares did Facebook remove the video.

In April 2020, misinformation targeting Muslims again went viral on its platform as the hashtag “Coronajihad” flooded news feeds, blaming the community for a surge in COVID-19 cases. The hashtag was popular on Facebook for days but was later removed by the company.

The misinformation triggered a wave of violence, business boycotts and hate speech toward Muslims.

Criticisms of Facebook’s handling of such content were amplified in August of last year when The Wall Street Journal published a series of stories detailing how the company had internally debated whether to classify a Hindu hard-line lawmaker close to Modi’s party as a “dangerous individual” — a classification that would ban him from the platform — after a series of anti-Muslim posts from his account.

The documents also show how the company’s South Asia policy head herself had shared what many felt were Islamophobic posts on her personal Facebook profile. 

Months later, the official quit the company. Facebook also removed the politician from the platform, but the documents show many company employees felt the platform had mishandled the situation, accusing it of selective bias to avoid being in the crosshairs of the Indian government.

As recently as March of this year, the company was internally debating whether it could control the “fear mongering, anti-Muslim narratives” that the Rashtriya Swayamsevak Sangh, a far-right Hindu nationalist group to which Modi belongs, was pushing on its platform.

In one document titled “Lotus Mahal,” the company noted that members with links to the BJP had created multiple Facebook accounts to amplify anti-Muslim content.

The research found that much of this content was “never flagged or actioned” because Facebook lacked “classifiers” and “moderators” in Hindi and Bengali. 

Facebook said it added hate speech classifiers in Hindi starting in 2018 and introduced Bengali in 2020.

Read More

Another Whistleblower Accuses Facebook of Wrongdoing: Report

A former Facebook worker reportedly told U.S. authorities Friday the platform has put profits before stopping problematic content, weeks after another whistleblower helped stoke the firm’s latest crisis with similar claims.

The unnamed new whistleblower filed a complaint with the U.S. Securities and Exchange Commission, the federal financial regulator, that could add to the company’s woes, said a Washington Post report.

Facebook has faced a storm of criticism over the past month after former employee Frances Haugen leaked internal studies showing the company knew of potential harm fueled by its sites, prompting U.S. lawmakers to renew a push for regulation.

In the SEC complaint, the new whistleblower recounts alleged statements from 2017, when the company was deciding how to handle the controversy related to Russia’s interference in the 2016 U.S. presidential election.  

“It will be a flash in the pan. Some legislators will get pissy. And then in a few weeks they will move onto something else. Meanwhile we are printing money in the basement, and we are fine,” Tucker Bounds, a member of Facebook’s communications team, was quoted in the complaint as saying, The Washington Post reported.  

The second whistleblower signed the complaint on October 13, a week after Haugen’s testimony before a Senate panel, according to the report.

Haugen told lawmakers that Facebook put profits over safety, which led her to leak reams of internal company studies that underpinned a damning Wall Street Journal series.

The Washington Post reported that the new whistleblower’s SEC filing claims the social media giant’s managers routinely undermined efforts to combat misinformation and other problematic content for fear of angering then-U.S. President Donald Trump or alienating the users who are key to profits.

Erin McPike, a Facebook spokesperson, said the article was “beneath the Washington Post, which during the last five years would only report stories after deep reporting with corroborating sources.”  

Facebook has faced previous firestorms of controversy, but they did not translate into substantial U.S. legislation to regulate social media.

Read More

Apple Updates App Store Payment Rules in Concession to Developers

Apple has updated its App Store rules to allow developers to contact users directly about payments, a concession in a legal settlement with companies challenging its tightly controlled marketplace.

According to App Store rules updated Friday, developers can now contact consumers directly about alternate payment methods, bypassing Apple’s commission of 15% or 30%.

They will be able to ask users for basic information, such as names and email addresses, “as long as this request remains optional,” the iPhone maker said.

Apple proposed the changes in August in a legal settlement with small app developers.

But the concession is unlikely to satisfy firms like “Fortnite” developer Epic Games, with which the tech giant has been grappling in a drawn-out dispute over its payments policy.  

Epic launched a case aiming to break Apple’s grip on the App Store, accusing the iPhone maker of operating a monopoly in its shop for digital goods or services.

In September, a judge ordered Apple to loosen control of its App Store payment options, but said Epic had failed to prove that antitrust violations had taken place.

For Epic and others, the ability to redirect users to an out-of-app payment method is not enough: Epic wants players to be able to pay directly without leaving the game.

Both sides have appealed. 

Apple is also facing investigations from U.S. and European authorities that accuse it of abusing its dominant position.

Read More

China’s Reach Into Africa’s Digital Sector Worries Experts

Chinese companies like Huawei and the Transsion group are responsible for much of the digital infrastructure and smartphones used in Africa. Chinese phones built in Africa come with pre-installed apps for mobile money transfer services, extending the reach of Chinese tech companies. But while many Africans may find the availability of such technology useful, the trend worries some experts on data management.

China has taken the lead in the development of Africa’s artificial intelligence and communication infrastructure. 

In July 2020, Cameroon contracted with Huawei, a Chinese telecommunication infrastructure company, to equip government data centers. In 2019, Kenya was reported to have hired the same company to deliver smart city and surveillance technology worth $174 million. 

A study by the Atlantic Council, a U.S.-based think tank, found that Huawei has developed 30% of the 3G network and 70% of the 4G network in Africa. 

Eric Olander is the managing editor of the China Africa Project, a media organization examining China’s engagement in Africa. He says Chinese investment is helping Africa grow.

“The networking equipment is really what is so vital and what the Chinese have been able to do with Huawei, in particular, is they bring the networking infrastructure together with state-backed loans and that’s the combination that has proven to be very effective. So, a lot of governments that would not be able to afford 4G and 5G network upgrades are able to get these concessional loans from the China Exim Bank that are used and to purchase Huawei equipment,” Olander said.

Data compiled by the Australian Strategic Policy Institute, a Canberra-based defense and policy research organization, show China has built 266 technology projects in Africa ranging from 4G and 5G telecommunications networks to data centers, smart city projects that modernize urban centers and education programs.  

But while the new technology has helped modernize the African continent, some say it comes at a cost that is not measured in dollars. 

China loaned the Ethiopian government more than $3 billion to be used to upgrade its digital infrastructure. Critics say the money helped Ethiopia expand its authoritarian rule and monitor telecom network users. 

According to an investigation by The Wall Street Journal, Huawei technology helped the Ugandan and Zambian governments spy on government critics.  In 2019, Uganda procured millions of dollars in closed circuit television surveillance technology from Huawei, ostensibly to help control urban crime.

Police in the East African nation admitted to using the Huawei-supplied system’s facial recognition capability to arrest more than 800 opposition supporters last year.

Bulelani Jili, a cybersecurity fellow at the Belfer Center at Harvard University, says African citizens must be made aware of the risks in relations with Chinese tech companies.

“There is need [for] greater public awareness and attention to this issue in part because it’s a key metric surrounding both development but also the kind of Africa-China relations going forward…. We should also be thinking about [how] data sovereignty is going to be a key factor going forward.” 

Jili said data sharing will create more challenges for relations between Africa and China. 

“There are security questions about data, specifically how it’s managed, who owns it, and how governments depend on private actors to provide them the technical capacity to initiate certain state services.”  

London-based organization Privacy International says at least 24 African countries have laws that protect the personal data of their citizens. But experts say most of those laws are not enforced. 

Read More

Facebook Kept Oversight Board in Dark about Special Treatment of VIP Accounts

Facebook’s quasi-independent oversight board criticized the company Thursday, saying that high-profile users such as celebrities and politicians are not held to the same standards as other account holders.

In a blog post, the board said, “Facebook has not been fully forthcoming with the Board on its ‘Cross-Check’ system, which the company uses to review content decisions relating to high-profile users.”

The Wall Street Journal had previously reported about the company’s double standards, and that 5.8 million accounts fell under the Cross-Check system.

“At times, the documents show, [Cross-Check] has protected public figures whose posts contain harassment or incitement to violence, violations that would typically lead to sanctions for regular users,” the Journal reported.

Facebook spokesman Andy Stone told the Journal that Cross-Check “was designed for an important reason: to create an additional step so we can accurately enforce policies on content that could require more understanding.”

The board said Facebook kept it in the dark about the existence of Cross-Check.

“When Facebook referred the case related to former U.S. President Trump to the Board, it did not mention the cross-check system,” the board wrote. “Given that the referral included a specific policy question about account-level enforcement for political leaders, many of whom the Board believes were covered by cross-check, this omission is not acceptable.”

“Facebook only mentioned cross-check to the Board when we asked whether Mr. Trump’s page or account had been subject to ordinary content moderation processes.”

The board urged Facebook to provide greater transparency.

The board was created last October after the company faced criticism that it was not dealing quickly and effectively with what some consider problematic content.

Decisions by the board are binding and cannot be overturned.

Some information in this report comes from Reuters.

Read More

New Name for Facebook? Critics Cry Smoke and Mirrors

Facebook critics pounced Wednesday on a report that the social network plans to rename itself, arguing it may be seeking to distract from recent scandals and controversy.

The report from tech news website The Verge, which Facebook refused to confirm, said the embattled company was aiming to show its ambition to be more than a social media site.

But an activist group calling itself The Real Facebook Oversight Board warned that major industries like oil and tobacco had rebranded to “deflect attention” from their problems.

“Facebook thinks that a rebrand can help them change the subject,” said the group’s statement, adding the real issue was the need for oversight and regulation.

Facebook spokesman Andy Stone told AFP: “We don’t have any comment and aren’t confirming The Verge’s report.”

The Verge cited an unnamed source noting the name would reflect Facebook’s efforts to build the “metaverse,” a virtual reality version of the internet that the tech giant sees as the future.

Facebook on Monday announced plans to hire 10,000 people in the European Union to build the metaverse, with CEO Mark Zuckerberg emerging as a leading promoter of the concept.

Fallout

The announcement comes as Facebook grapples with the fallout of a damaging scandal, major outages of its services and rising calls for regulation to curb its vast influence.

The company has faced a storm of criticism over the past month after former employee Frances Haugen leaked internal studies showing Facebook knew its sites could be harmful to young people’s mental health.

The Washington Post last month suggested that Facebook’s interest in the metaverse is “part of a broader push to rehabilitate the company’s reputation with policymakers and reposition Facebook to shape the regulation of next-wave internet technologies.”

Silicon Valley analyst Benedict Evans argued a rebranding would ignore fundamental problems with the platform.

“If you give a broken product a new name, people will quite quickly work out that this new brand has the same problems,” he tweeted.

“A better ‘rebrand’ approach is generally to fix the problem first and then create a new brand reflecting the new experience,” he added.

Google rebranded itself as Alphabet in a corporate reconfiguration in 2015, but the online search and ad powerhouse remains its defining unit despite other operations such as Waymo self-driving cars and Verily life sciences.

Read More

Facebook to Pay Up to $14 Million Over Discrimination Against US Workers 

Facebook must pay a $4.75 million fine and up to $9.5 million in back pay to eligible victims who say the company discriminated against U.S. workers in favor of foreign ones, the Justice Department announced Tuesday. 

The discrimination took place from at least January 1, 2018, until at least September 18, 2019. 

The Justice Department said Facebook “routinely refused” to recruit or consider U.S. workers, including U.S. citizens and nationals, asylees, refugees and lawful permanent residents, in favor of temporary visa holders. Facebook also helped the visa holders get their green cards, which allowed them to work permanently.

In a separate settlement, the company also agreed to train its employees in anti-discrimination rules and conduct wider searches to fill jobs. 

The fine and back pay are the largest amounts the DOJ’s civil rights division has ever recovered under the 35-year-old anti-discrimination provision of immigration law. 

“Facebook is not above the law and must comply with our nation’s civil rights laws,” Assistant Attorney General Kristen Clarke told reporters in a telephone conference. 

“While we strongly believe we met the federal government’s standards in our permanent labor certification [PERM] practices, we’ve reached agreements to end the ongoing litigation and move forward with our PERM program, which is an important part of our overall immigration program,” a Facebook spokesperson said in a statement. “These resolutions will enable us to continue our focus on hiring the best builders from both the U.S. and around the world and supporting our internal community of highly skilled visa holders who are seeking permanent residence.” 

Some information in this report came from the Associated Press.

Read More

Catching a Ride: A ‘Robotaxi’ Drives Itself to You

Consider it a “robotaxi.” Customers who need a ride in Las Vegas can now use a ride-hailing startup whose cars drive themselves to the customer. Tina Trinh reports.

Read More

Facebook Plans to Hire 10,000 in EU to Build ‘Metaverse’

Facebook says it plans to hire 10,000 workers in the European Union over the next five years to work on a new computing platform.

The company said in a blog post Sunday that those high-skilled workers will help build “the metaverse,” a futuristic notion for connecting people online that encompasses augmented and virtual reality.

Facebook executives have been touting the metaverse as the next big thing after the mobile internet as they also contend with other matters such as antitrust crackdowns, the testimony of a whistleblowing former employee and concerns about how the company handles vaccine-related and political misinformation on its platform.

In a separate blog post Sunday, the company defended its approach to combating hate speech, in response to a Wall Street Journal article that examined the company’s inability to detect and remove hateful and excessively violent posts.

Read More

US Puts Cryptocurrency Industry on Notice Over Ransomware Attacks 

Suspected ransomware payments totaling $590 million were made in the first six months of this year, more than the $416 million reported for all of 2020, U.S. authorities said on Friday, as Washington put the cryptocurrency industry on alert about its role in combating ransomware attacks. 

The U.S. Treasury Department said the average amount of reported ransomware transactions per month in 2021 was $102.3 million, with REvil/Sodinokibi, Conti, DarkSide, Avaddon, and Phobos the most prevalent ransomware strains reported. 

President Joe Biden has made the government’s cybersecurity response a top priority for the most senior levels of his administration following a series of attacks this year that threatened to destabilize U.S. energy and food supplies. 

Avoiding U.S. sanctions

Seeking to stop the use of cryptocurrencies in the payment of ransomware demands, Treasury told members of the crypto community they are responsible for making sure they do not directly or indirectly help facilitate deals prohibited by U.S. sanctions. 

Its new guidance said the industry plays an increasingly critical role in preventing those blacklisted from exploiting cryptocurrencies to evade sanctions. 

“Treasury is helping to stop ransomware attacks by making it difficult for criminals to profit from their crimes, but we need partners in the private sector to help prevent this illicit activity,” Deputy Treasury Secretary Wally Adeyemo said in a statement. 

The new guidance also advised cryptocurrency exchanges to use geolocation tools to block access from countries under U.S. sanctions. 

Hackers use ransomware to take down systems that control everything from hospital billing to manufacturing. They stop only after receiving hefty payments, typically in cryptocurrency. 

Large scale hacks

This year, gangs have hit numerous U.S. companies in large scale hacks. One such attack on pipeline operator Colonial Pipeline led to temporary fuel supply shortages on the U.S. East Coast. Hackers also targeted an Iowa-based agricultural company, sparking fears of disruptions to grain harvesting in the Midwest. 

The Biden administration last month unveiled sanctions against cryptocurrency exchange Suex OTC, S.R.O. over its alleged role in enabling illegal payments from ransomware attacks, officials said, in the Treasury’s first such move against a cryptocurrency exchange over ransomware activity.

Read More

Facebook Objects to Releasing Private Posts About Myanmar’s Rohingya Campaign

Facebook was used to spread disinformation about the Rohingya, the Muslim ethnic minority in Myanmar, and in 2018 the company began to delete posts, accounts and other content it determined were part of a campaign to incite violence. 

That deleted but stored data is at issue in a case in the United States over whether Facebook should release the information as part of a claim in international court. 

Facebook this week objected to part of a U.S. magistrate judge’s order that could have an impact on how much data internet companies must turn over to investigators examining the role social media played in a variety of international incidents, from the 2017 Rohingya genocide in Myanmar to the 2021 Capitol riot in Washington. 

The judge ruled last month that Facebook had to give information about these deleted accounts to Gambia, the West African nation, which is pursuing a case in the International Court of Justice against Myanmar, seeking to hold the Asian nation responsible for the crime of genocide against the Rohingya.

But in its filing Wednesday, Facebook said the judge’s order “creates grave human rights concerns of its own, leaving internet users’ private content unprotected and thereby susceptible to disclosure — at a provider’s whim — to private litigants, foreign governments, law enforcement, or anyone else.” 

The company said it was not challenging the order when it comes to public information from the accounts, groups and pages it has preserved. It objects to providing “non-public information.” If the order is allowed to stand, it would “impair critical privacy and freedom of expression rights for internet users — not just Facebook users — worldwide, including Americans,” the company said. 

Facebook has argued that providing the deleted posts would violate U.S. privacy law, citing the Stored Communications Act, the 35-year-old law that established privacy protections in electronic communication. 

Deleted content protected? 

In his September decision, U.S. Magistrate Judge Zia M. Faruqui said that once content is deleted from an online service, it is no longer protected.

Paul Reichler, a lawyer for Gambia, told VOA that Facebook’s concern about privacy is misplaced. 

“Would Hitler have privacy rights that should be protected?” Reichler said in an interview with VOA. “The generals in Myanmar ordered the destruction of a race of people. Should Facebook’s business interests in holding itself out as protecting the privacy rights of these Hitlers prevail over the pursuit of justice?” 

But Orin Kerr, a law professor at the University of California at Berkeley, said on Twitter that the judge erred and that the implication of the ruling is that “if a provider moderates contents, all private messages and emails deleted can be freely disclosed and are no longer private.”

The 2017 military crackdown on the Rohingya resulted in more than 700,000 people fleeing their homes to escape mass killings and rapes, a crisis that the United States has called “ethnic cleansing.”

‘Coordinated inauthentic behavior’ 

Human rights advocates say Facebook had been used for years by Myanmar officials to set the stage for the crimes against the Rohingya. 

Frances Haugen, the former Facebook employee who testified about the company in Congress last week, said Facebook’s focus on keeping users engaged on its site contributed to “literally fanning ethnic violence” in countries. 

In 2018, Facebook deleted and banned accounts of key individuals, including the commander in chief of Myanmar’s armed forces and the military’s television network, as well as 438 pages, 17 groups and 160 Facebook and Instagram accounts — what the company called “coordinated inauthentic behavior.” The company estimated 12 million people in Myanmar, a nation of 54 million, followed these accounts. 

Facebook commissioned an independent human rights study of its role that concluded that prior to 2018, it indeed failed to prevent its service “from being used to foment division and incite offline violence.” 

Facebook kept the data on what it deleted for its own forensic analysis, the company told the court. 

The case comes at a time when law enforcement and governments worldwide increasingly seek information from technology companies about the vast amount of data they collect on users. 

Companies have long cited privacy concerns to protect themselves, said Ari Waldman, a professor of law and computer science at Northeastern University. What’s new is the vast quantity of data that companies now collect, a treasure trove for investigators, law enforcement and government. 

“Private companies have untold amounts of data based on the commodification of what we do,” Waldman said.

Privacy rights should always be balanced with other laws and concerns, such as the pursuit of justice, he added.

Facebook working with the IIMM 

In August 2020, Facebook confirmed that it was working with the Independent Investigative Mechanism for Myanmar (IIMM), a United Nations-backed group that is investigating Myanmar. The U.N. Human Rights Council established the IIMM, or “Myanmar Mechanism,” in September 2018 to collect evidence of the country’s most serious international crimes.

Recently, IIMM told VOA it has been meeting regularly with Facebook employees to gain access to information on the social media network related to its ongoing investigations in the country. 

A spokesperson for IIMM told VOA’s Burmese Service that Facebook “has agreed to voluntarily provide some, but not all, of the material the Mechanism has requested.” 

IIMM head Nicholas Koumjian wrote to VOA that the group is seeking material from Facebook “that we believe is relevant to proving criminal responsibility for serious international crimes committed in Myanmar that fall within our mandate.”  

Facebook told VOA in an email it is cooperating with the U.N. Myanmar investigators. 

“We’ve committed to disclose relevant information to authorities, and over the past year we’ve made voluntary, lawful disclosures to the IIMM and will continue to do so as the case against Myanmar proceeds,” the spokesperson wrote. The company has made what it calls “12 lawful data disclosures” to the IIMM but didn’t provide details. 

Human rights activists are frustrated that Facebook is not doing more to crack down on bad actors who are spreading hate and disinformation on the site.

“Look, I think there are many people at Facebook who want to do the right thing here, and they are working pretty hard,” said Phil Robertson, who covers Asia for Human Rights Watch. “But the reality is, they still need to escalate their efforts. I think that Facebook is more aware of the problems, but it’s also in part because so many people are telling them that they need to do better.” 

Matthew Smith of the human rights organization Fortify Rights, which closely tracked the ethnic cleansing campaign in Myanmar, said the company’s business success indicates it could do a better job of identifying harmful content. 

“Given the company’s own business model of having this massive capacity to deal with massive amounts of data in a coherent and productive way, it stands to reason that the company would absolutely be able to understand and sift through the data points that could be actionable,” Smith said. 

Gambia has until later this month to respond to Facebook’s objections.

Read More

US Authorities Disclose Ransomware Attacks Against Water Facilities

U.S. authorities said on Thursday that four ransomware attacks had penetrated water and wastewater facilities in the past year, and they warned similar plants to check for signs of intrusions and take other precautions. 

The alert from the Cybersecurity and Infrastructure Security Agency (CISA) cited a series of apparently unrelated hacking incidents from September 2020 to August 2021 that used at least three different strains of ransomware, which encrypts computer files and demands payment for them to be restored. 

Attacks at an unnamed Maine wastewater facility three months ago and one in California in August moved past desktop computers and paralyzed the specialized supervisory control and data acquisition (SCADA) devices that issue mechanical commands to the equipment. 

The Maine system had to turn to manual controls, according to the alert co-signed by the FBI, National Security Agency and Environmental Protection Agency. 

A March hack in Nevada also reached SCADA devices that provided operational visibility but could not issue commands. 

CISA said it is seeing increasing attacks on many forms of critical infrastructure, in line with those on the water plants. 

In some cases, the water facilities are handicapped by low municipal spending on cybersecurity technology. 

The recommendations from CISA, an agency of the Department of Homeland Security, include auditing access logs and requiring additional authentication factors beyond passwords.  

Read More