Russia's foreign minister will discuss a range of "hot topics" with his Chinese counterpart Wang Yi, the Russian Foreign Ministry said.
…
PENTAGON — Russian microchip company AO PKK Milandr continued to provide microchips to the Russian armed forces at least several months after Russia invaded Ukraine, despite public denials by company director Alexey Novoselov of any connection with Russia’s military.
A formal letter obtained by VOA, dated February 10, 2023, shows a sale request for 4,080 military-grade microchips for the Russian military. The request was sent by a deputy commander of the 546 military representation of the Russian Ministry of Defense and the commercial director of Russian manufacturer NPO Poisk to Milandr CEO S.V. Tarasenko for delivery by April 2023, more than a year into the war.
The letter instructs Milandr to provide three types of microchip components to NPO Poisk, a well-established Russian defense manufacturer that makes detonators for weapons used by the Russian Armed Forces.
“Each of these three circuits that you have in the table on the document, each one of them is classed as a military-grade component … and each of these is manufactured specifically by Milandr,” said Denys Karlovskyi, a research fellow at the London-based Royal United Services Institute for Defense and Security Studies. VOA shared the document with him to confirm its authenticity.
In addition to Milandr CEO Tarasenko, the letter is addressed to a commander of the Russian Defense Ministry's 514 military representation, I.A. Shvid.
Karlovskyi says this inclusion shows that Milandr, like Poisk, appears to have a Russian commander from the Defense Ministry’s oversight unit assigned to it — a clear indicator that a company is part of Russia’s defense industry.
Milandr, headquartered near Moscow in an area known as "Soviet Silicon Valley," was sanctioned by the United States in November 2022 for its illegal procurement of microelectronic components using front companies.
In the statement announcing the 2022 sanctions against Milandr and more than three dozen other entities and individuals, U.S. Treasury Secretary Janet Yellen said, “The United States will continue to expose and disrupt the Kremlin’s military supply chains and deny Russia the equipment and technology it needs to wage its illegal war against Ukraine.”
Karlovskyi said that in Russia’s database of public contracts, Milandr is listed in more than 500 contracts, supplying numerous state-owned and military enterprises, including Ural Optical Mechanical Plant, Concern Avtomatika and Izhevsk Electromechanical Plant, or IEMZ Kupol, which also have been sanctioned by the United States.
“It clearly suggests that this entity is a crucial node in Russia’s military supply chain,” Karlovskyi told VOA.
Novoselov, Milandr’s current director, told Bloomberg News last August that he was not aware of any connections to the Russian military.
“I don’t know any military persons who would be interested in our product,” he told Bloomberg in a phone interview, adding that the company mostly produces electric power meters.
The U.S. allegations are “like a fantasy,” he said. “The United States’ State Department, they suppose that every electronics business in Russia is focused on the military. I think that is funny.”
But a U.S. defense official told VOA that helping Russia’s military kill tens of thousands of people in an illegal invasion “is no laughing matter.”
“The company is fueling microchips for missiles and heavily armored vehicles that are used to continue the war in Ukraine,” said the defense official, who spoke to VOA on the condition of anonymity due to the sensitivities of discussing U.S. intelligence.
Milandr’s co-founder Mikhail Pavlyuk was also sanctioned during the summer of 2022 for his involvement in microchip smuggling operations and was caught stealing from Milandr. Pavlyuk fled Russia and has claimed he was not involved.
Officials estimate that 500,000 Ukrainian and Russian troops have been killed or injured in the war, with tens of thousands of Ukrainian civilians killed in the fighting.
“There are consequences to their actions, and the U.S. will persist to expose and disrupt the Kremlin’s supply chain,” the U.S. defense official said.
…
WASHINGTON — In recent weeks, the United States, Britain and the European Union have issued the strictest regulations yet on the use and development of artificial intelligence, setting a precedent for other countries.
This month, the United States and the U.K. signed a memorandum of understanding allowing the two countries to partner in the development of tests for the most advanced artificial intelligence models, following through on commitments made at the AI Safety Summit last November.
These actions come on the heels of the European Parliament’s March vote to adopt its first set of comprehensive rules on AI. The landmark decision sets out a wide-ranging set of laws to regulate this exploding technology.
At the time, Brando Benifei, co-rapporteur on the Artificial Intelligence Act plenary vote, said, “I think today is again an historic day on our long path towards regulation of AI. … The first regulation in the world that is putting a clear path towards a safe and human-centric development of AI.”
The new rules aim to protect citizens from dangerous uses of AI, while exploring its boundless potential.
Beth Noveck, professor of experiential AI at Northeastern University, expressed enthusiasm about the rules.
“It’s really exciting that the EU has passed really the world’s first … binding legal framework addressing AI. It is, however, not the end; it is really just the beginning.”
The new rules will be applied according to risk level: the higher the risk, the stricter the rules.
“It’s not regulating the tech,” she said. “It’s regulating the uses of the tech, trying to prohibit and to restrict and to create controls over the most malicious uses — and transparency around other uses.
“So things like what China is doing around social credit scoring, and surveillance of its citizens, unacceptable.”
Noveck described what she called “high-risk uses” that would be subject to scrutiny. Those include the use of tools in ways that could deprive people of their liberty, as well as their use in employment.
“Then there are lower risk uses, such as the use of spam filters, which involve the use of AI or translation,” she said. “Your phone is using AI all the time when it gives you the weather; you’re using Siri or Alexa, we’re going to see a lot less scrutiny of those common uses.”
But as AI experts point out, new laws just create a framework for a new model of governance on a rapidly evolving technology.
Dragos Tudorache, co-rapporteur on the AI Act plenary vote, said, “Because AI is going to have an impact that we can’t only measure through this act, we will have to be very mindful of this evolution of the technology in the future and be prepared.”
In late March, the Biden administration issued the first government-wide policy to mitigate the risks of artificial intelligence while harnessing its benefits.
The announcement followed President Joe Biden’s executive order last October, which called on federal agencies to lead the way toward better governance of the technology without stifling innovation.
“This landmark executive order is testament to what we stand for: safety, security, trust, openness,” Biden said at the time, “proving once again that America’s strength is not just the power of its example, but the example of its power.”
Looking ahead, experts say the challenge will be to update rules and regulations as the technology continues to evolve.
…