WASHINGTON — U.S. Commerce Secretary Gina Raimondo vowed Monday to take the “strongest action possible” in response to a semiconductor chip-making breakthrough in China that a House Foreign Affairs Committee said “almost certainly required the use of U.S. origin technology and should be an export control violation.”
In an interview with Bloomberg News, Raimondo called the advanced processor in Huawei Technologies' Mate 60 Pro smartphone, released in August, "deeply concerning" and said the Commerce Department investigates such matters vigorously.
The United States has banned chip sales to Huawei. The phone reportedly uses 7-nanometer chips from Chinese chip giant Semiconductor Manufacturing International Corp., or SMIC, a level of technology China had not previously been known to be capable of producing.
Raimondo said the U.S. was also looking into the specifics of three new artificial intelligence accelerator chips that California-based Nvidia Corp. is developing for China. “We look at every spec of every new chip, obviously, to make sure it doesn’t violate the export controls,” she said.
Nvidia came under U.S. scrutiny for designing China-specific chips that fell just under thresholds set in new Commerce Department rules announced in October, which tightened export controls on advanced AI chips for civilian use that could have military applications.
China’s Foreign Ministry responded to Raimondo’s comments Tuesday, saying the U.S. was “undermining the rights of Chinese companies” and contradicting the principles of a market economy.
‘Almost certainly required US origin technology’
The U.S. House Foreign Affairs Committee, in a December 7 report, criticized the Commerce Department's Bureau of Industry and Security, or BIS, the agency that administers dual-use export controls.
The report said Chinese chip giant “SMIC is producing 7 nanometer chips — advanced technology for semiconductors that had been only capable of development by TSMC, Intel and Samsung.”
“Despite this breakthrough by SMIC, which almost certainly required the use of U.S. origin technology and should be an export control violation, BIS has not acted,” the 66-page report said. “We can no longer afford to avoid the truth: the unimpeded transfer of U.S. technology to China is one of the single-largest contributors to China’s emergence as one of the world’s premier scientific and technological powers.”
Excessive approvals alleged
Committee Chairman Michael McCaul said BIS had an excessive rate of approval for controlled technology transfers and lacked checks on end-use, raising serious questions about the current U.S. export control mechanism.
“U.S. export control officials should adopt a presumption that all [Chinese] entities will divert technology to military or surveillance uses,” said McCaul’s report, but “currently, the overwhelming approval rates for licenses or exceptions for dual-use technology transfers to China indicate that licensing officials at BIS are likely presuming that items will be used only for their intended purposes.”
According to BIS's website, a key step in determining whether an export license is needed from the Department of Commerce is knowing whether the item one intends to export has a specific Export Control Classification Number, or ECCN. All ECCNs are listed in the Commerce Control List, or CCL, which is divided into 10 broad categories.
The committee's report said that "in 2020, nearly 98% of CCL items export[ed] to China went without a license," and "in 2021, BIS approved nearly 90% of applications for the export of CCL items to China."
The report said that between 2016 and 2021, “the United States government’s two export control officers in China conducted on average only 55 end-user checks per year of the roughly 4,000 active licenses in China. Put another way, BIS likely verified less than 0.01% of all licenses, which represent less than 1% of all trade with China.”
China skilled in avoiding controls
But China is also skilled at avoiding U.S. export controls, analysts said.
William Yu, an economist at UCLA Anderson Forecast, told VOA Mandarin in a phone interview that China can get banned chips through a third country. “For example, some countries in the Middle East set up a company in that country to buy these high-level chips from the United States. From there, one is transferred back to China,” Yu said.
Thomas Duesterberg, a senior fellow at the Hudson Institute, told VOA Mandarin in a phone interview that the Commerce Department’s BIS has a hard job.
“If you forbid technology from going to one company in China, the Chinese are experts at creating another company or just moving the company to a new address and disguising its name to try to evade the controls. China is a big country and there’s a lot of technology that is at stake here,” he said.
“It’s true on the one hand that BIS has been successful in some areas, such as advanced semiconductors in conjunction with denial of Chinese ability to buy American technology companies,” said Duesterberg. “But it’s also true as the [House Foreign Affairs Committee] report emphasizes that a lot of activities that policymakers would like to restrict is not being done.”
Insufficient resources or political will?
Despite its huge responsibility to ensure that the United States stays ahead in the escalating U.S.-China science and technology competition, the Commerce Department’s BIS is small, employing just over 300 people.
At the annual Reagan National Defense Forum on December 2, Secretary Raimondo lamented that BIS “has the same budget today as it did a decade ago” despite the increasing challenges and workload, reported Breaking Defense, a New York-based online publication on global defense and politics.
U.S. Representatives Elise Stefanik, Mike Gallagher, who is chairman of the House Select Committee on the Chinese Communist Party, and McCaul released a joint response to Raimondo’s call for additional funds for the BIS, saying resources alone would not resolve export control shortcomings.
Raimondo also warned chip companies that the U.S. would further tighten controls to prevent cutting-edge AI technology from going to Beijing.
“The threat from China is large and growing,” she said in an interview to CNBC at the December 2 forum. “China wants access to our most sophisticated semiconductors, and we can’t afford to give them that access. We’re not just going to deny a single company in China, we’re going to deny the whole country access to our cutting-edge semiconductors.”
…
European Union officials worked into the late hours last week hammering out an agreement on world-leading rules meant to govern the use of artificial intelligence in the 27-nation bloc.
The Artificial Intelligence Act is the latest set of regulations designed to govern technology in Europe, and it may be destined to have global impact.
Here’s a closer look at the AI rules:
What is the AI act and how does it work?
The AI Act takes a “risk-based approach” to products or services that use artificial intelligence and focuses on regulating uses of AI rather than the technology. The legislation is designed to protect democracy, the rule of law and fundamental rights like freedom of speech, while still encouraging investment and innovation.
The riskier an AI application is, the stiffer the rules. Those that pose limited risk, such as content recommendation systems or spam filters, would have to follow only light rules such as revealing that they are powered by AI.
High-risk systems, such as medical devices, face tougher requirements like using high-quality data and providing clear information to users.
Some AI uses are banned because they're deemed to pose an unacceptable risk, like social scoring systems that govern how people behave, some types of predictive policing, and emotion recognition systems in schools and workplaces.
People in public can’t have their faces scanned by police using AI-powered remote “biometric identification” systems, except for serious crimes like kidnapping or terrorism.
The AI Act won’t take effect until two years after final approval from European lawmakers, expected in a rubber-stamp vote in early 2024. Violations could draw fines of up to 35 million euros ($38 million) or 7% of a company’s global revenue.
How does the AI act affect the rest of the world?
The AI Act will apply to the EU’s nearly 450 million residents, but experts say its impact could be felt far beyond because of Brussels’ leading role in drawing up rules that act as a global standard.
The EU has played the role before with previous tech directives, most notably mandating a common charging plug that forced Apple to abandon its in-house Lightning cable.
While many other countries are figuring out whether and how they can rein in AI, the EU’s comprehensive regulations are poised to serve as a blueprint.
“The AI Act is the world’s first comprehensive, horizontal and binding AI regulation that will not only be a game-changer in Europe but will likely significantly add to the global momentum to regulate AI across jurisdictions,” said Anu Bradford, a Columbia Law School professor who’s an expert on EU law and digital regulation.
“It puts the EU in a unique position to lead the way and show to the world that AI can be governed, and its development can be subjected to democratic oversight,” she said.
Even what the law doesn’t do could have global repercussions, rights groups said.
By not pursuing a full ban on live facial recognition, Brussels has “in effect greenlighted dystopian digital surveillance in the 27 EU Member States, setting a devastating precedent globally,” Amnesty International said.
The partial ban is “a hugely missed opportunity to stop and prevent colossal damage to human rights, civil space and rule of law that are already under threat through the EU.”
Amnesty also decried lawmakers’ failure to ban the export of AI technologies that can harm human rights — including for use in social scoring, something China does to reward obedience to the state through surveillance.
What are other countries doing about AI regulation?
The world’s two major AI powers, the U.S. and China, also have started the ball rolling on their own rules.
U.S. President Joe Biden signed a sweeping executive order on AI in October, which is expected to be bolstered by legislation and global agreements.
It requires leading AI developers to share safety test results and other information with the government. Agencies will create standards to ensure AI tools are safe before public release and issue guidance to label AI-generated content.
Biden's order builds on voluntary commitments made earlier by technology companies including Amazon, Google, Meta and Microsoft to make sure their products are safe before they're released.
China, meanwhile, has released "interim measures" for managing generative AI, which apply to text, pictures, audio, video and other content generated for people inside China.
President Xi Jinping has also proposed a Global AI Governance Initiative, calling for an open and fair environment for AI development.
How will the AI act affect ChatGPT?
The spectacular rise of OpenAI’s ChatGPT showed that the technology was making dramatic advances and forced European policymakers to update their proposal.
The AI Act includes provisions for chatbots and other so-called general purpose AI systems that can do many different tasks, from composing poetry to creating video and writing computer code.
Officials took a two-tiered approach. Most general-purpose systems face basic transparency requirements, such as disclosing details about their data governance and, in a nod to the EU's environmental sustainability efforts, how much energy was used to train the models on vast troves of written works and images scraped from the internet.
They also need to comply with EU copyright law and summarize the content they used for training.
Stricter rules are in store for the most advanced AI systems, those built with the most computing power, which pose "systemic risks" that officials worry could spread to services other software developers build on top of them.
…
Lawmakers and parents are blaming social media platforms for contributing to mental health problems in young people. A group of U.S. states is suing the owner of Instagram and Facebook for promoting their platforms to children despite knowing some of the psychological harms and safety risks they pose. From New York, VOA’s Tina Trinh reports that a cause-and-effect relationship between social media and mental health may not be so clear.
…