Yuriy Kuznetsov has held the post of chief of the Main Personnel Directorate of the Russian Defense Ministry since May 2023
…
Washington — Two Air Force fighter jets recently squared off in a dogfight in California. One was flown by a pilot. The other wasn’t.
That second jet was piloted by artificial intelligence, with the Air Force’s highest-ranking civilian riding along in the front seat. It was the ultimate display of how far the Air Force has come in developing a technology with its roots in the 1950s. But it’s only a hint of the technology yet to come.
The United States is competing to stay ahead of China on AI and its use in weapon systems. The focus on AI has generated public concern that future wars will be fought by machines that select and strike targets without direct human intervention. Officials say this will never happen, at least not on the U.S. side. But there are questions about what a potential adversary would allow, and the military sees no alternative but to get U.S. capabilities fielded fast.
“Whether you want to call it a race or not, it certainly is,” said Adm. Christopher Grady, vice chairman of the Joint Chiefs of Staff. “Both of us have recognized that this will be a very critical element of the future battlefield. China’s working on it as hard as we are.”
A look at the history of military development of AI, what technologies are on the horizon and how they will be kept under control:
From machine learning to autonomy
AI’s military roots are a hybrid of machine learning and autonomy. Machine learning occurs when a computer analyzes data and rule sets to reach conclusions. Autonomy occurs when those conclusions are applied to act without further human input.
This took an early form in the 1960s and 1970s with the development of the Navy’s Aegis missile defense system. Aegis was trained through a series of human-programmed if/then rule sets to be able to detect and intercept incoming missiles autonomously, and more rapidly than a human could. But the Aegis system was not designed to learn from its decisions and its reactions were limited to the rule set it had.
“If a system uses ‘if/then’ it is probably not machine learning, which is a field of AI that involves creating systems that learn from data,” said Air Force Lt. Col. Christopher Berardi, who is assigned to the Massachusetts Institute of Technology to assist with the Air Force’s AI development.
AI took a major step forward in 2012 when the combination of big data and advanced computing power enabled computers to begin analyzing the information and writing the rule sets themselves. It is what AI experts have called AI’s “big bang.”
The new data created by a computer writing the rules is artificial intelligence. Systems can be programmed to act autonomously from the conclusions reached from machine-written rules, which is a form of AI-enabled autonomy.
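To make that distinction concrete, here is a minimal Python sketch; it is an invented toy, not anything resembling actual defense software. The first function is a human-written if/then rule in the Aegis mold, fixed at whatever threshold its programmer chose. The second trains a small model that derives its own decision rule from labeled examples.

```python
# Aegis-style logic: a human writes the rule, and it never changes.
def rule_based_intercept(speed_mps: float, closing: bool) -> bool:
    # hand-picked threshold; the system cannot revise it on its own
    return closing and speed_mps > 300.0

# Machine learning: the model derives its own rule from labeled data.
from sklearn.tree import DecisionTreeClassifier

# toy examples: [speed in m/s, closing (1) or receding (0)] -> intercept?
X = [[100, 0], [250, 1], [400, 1], [500, 1], [150, 1], [450, 0]]
y = [0, 0, 1, 1, 0, 0]  # labels supplied by humans

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[420, 1]]))  # decision comes from a learned rule: [1]
```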
Testing an AI alternative to GPS navigation
Air Force Secretary Frank Kendall got a taste of that advanced warfighting this month when he flew on Vista, the first F-16 fighter jet to be controlled by AI, in a dogfighting exercise over California’s Edwards Air Force Base.
While that jet is the most visible sign of the AI work underway, there are hundreds of ongoing AI projects across the Pentagon.
At MIT, service members worked to clean thousands of hours of recorded pilot conversations to create a data set from the flood of messages exchanged between crews and air operations centers during flights, so the AI could learn the difference between critical messages, like a runway being closed, and mundane cockpit chatter. The goal was to have the AI learn which messages are critical and should be elevated, to ensure controllers see them faster.
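The underlying task is ordinary text classification. Below is a hypothetical Python sketch of the idea; the messages, labels and model choice are invented for illustration, as the MIT project's actual pipeline has not been described in that detail.

```python
# Toy message-triage classifier: critical transmissions vs. routine chatter.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# invented stand-ins for cleaned transcript data
messages = [
    "runway two-seven closed due to debris",  # critical
    "radio check, loud and clear",            # routine
    "traffic alert, climb immediately",       # critical
    "say again, you were breaking up",        # routine
]
labels = [1, 0, 1, 0]  # 1 = elevate to controllers, 0 = routine

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(messages, labels)

# a new message sharing vocabulary with the critical examples
print(clf.predict(["runway one-eight closed"]))  # likely [1]
```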
In another significant project, the military is working on an AI alternative to GPS satellite-dependent navigation.
In a future war, high-value GPS satellites would likely be hit or interfered with. The loss of GPS could blind U.S. communication, navigation and banking systems and make the U.S. military’s fleet of aircraft and warships less able to coordinate a response.
So last year the Air Force flew an AI program — loaded onto a laptop that was strapped to the floor of a C-17 military cargo plane — to work on an alternative solution using the Earth’s magnetic fields.
It has long been known that aircraft can navigate by following the Earth’s magnetic fields, but so far that hasn’t been practical, because each aircraft generates so much of its own electromagnetic noise that there has been no good way to filter out everything but the Earth’s emissions.
“Magnetometers are very sensitive,” said Col. Garry Floyd, director for the Department of Air Force-MIT Artificial Intelligence Accelerator program. “If you turn on the strobe lights on a C-17 we would see it.”
The AI learned through the flights and reams of data which signals to ignore and which to follow and the results “were very, very impressive,” Floyd said. “We’re talking tactical airdrop quality.”
“We think we may have added an arrow to the quiver in the things we can do, should we end up operating in a GPS-denied environment. Which we will,” Floyd said.
The AI so far has been tested only on the C-17. Other aircraft will also be tested, and if it works it could give the military another way to operate if GPS goes down.
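The Air Force has not published how the program works, but the general recipe is a denoising problem: learn the aircraft's own magnetic signature from its onboard state, subtract it, and keep what remains. Here is a simulated Python sketch of that idea, with every signal, sensor and coefficient invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000

# invented onboard state: strobe duty cycle and electrical bus load
state = rng.uniform(0, 1, size=(n, 2))
earth = 50.0 + 0.5 * np.sin(np.linspace(0, 20, n))       # slowly varying field
platform_noise = 8.0 * state[:, 0] + 3.0 * state[:, 1]   # aircraft's own noise
measured = earth + platform_noise + rng.normal(0, 0.1, n)

# learn how onboard state maps to interference, then subtract it
model = LinearRegression().fit(state, measured)
cleaned = measured - model.predict(state) + model.intercept_
print(np.corrcoef(cleaned, earth)[0, 1])  # near 1.0: Earth signal recovered
```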
Safety rails and pilot speak
Vista, the AI-controlled F-16, has considerable safety rails as the Air Force trains it. There are mechanical limits that keep the still-learning AI from executing maneuvers that would put the plane in danger. There is a safety pilot, too, who can take over control from the AI with the push of a button.
The algorithm cannot learn during a flight, so each time it goes up it has only the data and rule sets created from previous flights. When a flight is over, the algorithm is transferred back onto a simulator, where it is fed the new data gathered in flight so it can learn from it, create new rule sets and improve its performance.
But the AI is learning fast. Between the supercomputing speed with which it analyzes data and the chance to fly each new rule set in the simulator, it is finding the most efficient ways to fly and maneuver so quickly that it has already beaten some human pilots in dogfighting exercises.
But safety is still a critical concern, and officials said the most important way to take safety into account is to control what data is reinserted into the simulator for the AI to learn from.
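Put together, the cycle the officials describe, flying with a frozen policy and then retraining on curated data in the simulator, looks something like the following Python sketch. The names and the curation rule here are illustrative stand-ins, not the program's actual code.

```python
class Policy:
    """Stand-in for the learned flight model; frozen while airborne."""
    def __init__(self) -> None:
        self.rules: list[str] = []  # stand-in for learned parameters

    def act(self, observation: str) -> str:
        # inference only: no learning happens during a flight
        return "maneuver for " + observation

def fly(policy: Policy) -> list[str]:
    # collect data in the air while the policy stays fixed
    return [policy.act(obs) for obs in ["bandit left", "bandit high"]]

def retrain_in_simulator(policy: Policy, flight_data: list[str]) -> None:
    # the safety-critical step: humans choose what the AI may learn from
    curated = [d for d in flight_data if "unsafe" not in d]
    policy.rules.extend(curated)  # stand-in for a real training update

policy = Policy()
for sortie in range(3):
    data = fly(policy)                   # in the air: no learning
    retrain_in_simulator(policy, data)   # on the ground: learn, improve
```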
…
NEW YORK — A newly released ad promoting Apple’s new iPad Pro has struck quite a nerve online.
The ad, which was released by the tech giant Tuesday, shows a hydraulic press crushing just about every creative instrument artists and consumers have used over the years — from a piano and record player, to piles of paint, books, cameras and relics of arcade games. Resulting from the destruction? A pristine new iPad Pro.
“The most powerful iPad ever is also the thinnest,” a narrator says at the end of the commercial.
Apple’s intention seems straightforward: Look at all the things this new product can do. But critics have called it tone-deaf — with several marketing experts noting the campaign’s execution didn’t land.
“I had a really disturbing reaction to the ad,” said Americus Reed II, professor of marketing at The Wharton School of the University of Pennsylvania. “I understood conceptually what they were trying to do, but … I think the way it came across is, here is technology crushing the life of that nostalgic sort of joy (from former times).”
The ad also arrives during a time many feel uncertain or fearful about seeing their work or everyday routines “replaced” by technological advances — particularly amid the rapid commercialization of generative artificial intelligence. And watching beloved items get smashed into oblivion doesn’t help curb those fears, Reed and others note.
Several celebrities were also among the voices critical of Apple’s “Crush!” commercial on social media this week.
“The destruction of the human experience. Courtesy of Silicon Valley,” actor Hugh Grant wrote on the social media platform X, in a repost of Apple CEO Tim Cook’s sharing of the ad.
Some found the ad to be a telling metaphor of the industry today — particularly concerns about big tech negatively impacting creatives. Filmmaker Justine Bateman wrote on X that the commercial “crushes the arts.”
Experts added that the commercial marked a notable departure from Apple’s past marketing, which has often taken more positive or uplifting approaches.
“My initial thought was that Apple has become exactly what it never wanted to be,” Vann Graves, executive director of Virginia Commonwealth University’s Brandcenter, said.
Graves pointed to Apple’s famous 1984 ad introducing the Macintosh computer, which he said focused more on uplifting creativity and thinking outside of the box as a unique individual. In contrast, Graves added, “this (new iPad) commercial says, ‘No, we’re going to take all the creativity in the world and use a hydraulic press to push it down into one device that everyone uses.'”
In a statement shared with Ad Age on Thursday, Apple apologized for the ad. The outlet also reported that Apple no longer plans to run the spot on TV.
“Creativity is in our DNA at Apple, and it’s incredibly important to us to design products that empower creatives all over the world,” Tor Myhren, the company’s vice president of marketing communications, told Ad Age. “Our goal is to always celebrate the myriad of ways users express themselves and bring their ideas to life through iPad. We missed the mark with this video, and we’re sorry.”
Cupertino, California-based Apple unveiled its latest generation of iPad Pros and Airs earlier this week in a showcase that lauded new features for both lines. The Pro sports a new, thinner design, a new M4 processor for added processing power, slightly upgraded storage and dual OLED panels for a brighter, crisper display.
Apple is trying to juice demand for iPads after its sales of the tablets plunged 17% from last year during the January-March period. After its 2010 debut helped redefine the tablet market, the iPad has become a minor contributor to Apple’s success. It currently accounts for just 6% of the company’s sales.
…
SACRAMENTO, California — California could soon deploy generative artificial intelligence tools to help reduce traffic jams, make roads safer and provide tax guidance, among other things, under new agreements announced Thursday as part of Governor Gavin Newsom’s efforts to harness the power of new technologies for public services.
The state is partnering with five companies to create generative AI tools using technologies developed by tech giants such as Microsoft-backed OpenAI and Google- and Amazon-backed Anthropic that would ultimately help the state provide better services to the public, administration officials said.
“It is a very good sign that a lot of these companies are putting their focus on using GenAI for governmental service delivery,” said Amy Tong, secretary of government operations for California.
The companies will start a six-month internal trial in which state workers test and evaluate the tools. The companies will be paid $1 for their proposals. The state, which faces a significant budget deficit, can then reassess whether any tools could be fully implemented under new contracts. All the tools are considered low risk, meaning they don’t interact with confidential data or personal information, an administration spokesperson said.
Newsom, a Democrat, touts California as a global hub for AI technology, noting 35 of the world’s top 50 AI companies are located in the state. He signed an executive order last year requiring the state to start exploring responsible ways to incorporate generative AI by this summer, with a goal of positioning California as an AI leader.
In January, the state started asking technology companies to come up with generative AI tools for public services. Last month, California was one of the first states to roll out guidelines on when and how state agencies could buy such tools.
Generative AI, a branch of AI that can create new content such as text, audio and photos, has significant potential to help government agencies become more efficient, but there’s also an urgent need for safeguards to limit risks, state officials and experts said. In New York City, an AI-powered chatbot created by the city to help small businesses was found to dole out false guidance and advise companies to violate the law. The rapidly growing technology has also raised concerns about job losses, misinformation, privacy and automation bias.
While state governments are struggling to regulate AI in the private sector, many are exploring how public agencies can leverage the powerful technology for public good. California’s approach, which also requires companies to disclose what large language models they use to develop AI tools, is meant to build public trust, officials said.
The state’s testing of the tools and collecting of feedback from state workers are among the best practices for limiting potential risks, said Meredith Lee, chief technical adviser for the University of California-Berkeley’s College of Computing, Data Science and Society. The challenge is ensuring continued testing and learning about the tools’ potential risks after deployment.
“This is not something where you just work on testing for some small amount of time and that’s it,” Lee said. “Putting in the structures for people to be able to revisit and better understand the deployments further down the line is really crucial.”
The California Department of Transportation is looking for tools that would analyze traffic data and come up with solutions to reduce highway traffic and make roads safer. The state’s Department of Tax and Fee Administration, which administers more than 40 programs, wants an AI tool to help its call center cut wait times and call length. The state is also seeking technologies to provide non-English speakers information about health and social services benefits in their languages and to streamline the inspection process for health care facilities.
The tools are to be designed to assist state workers, not replace them, said Nick Maduros, director of the Department of Tax and Fee Administration.
Call center workers there took more than 660,000 calls last year. The state envisions the AI technology listening along to those calls and pulling up specific tax code information associated with the problems callers describe. Workers could decide whether to use the information.
Currently, call center workers have to simultaneously listen to the call and manually look up the code, Maduros said.
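In software terms, that assist is a retrieval problem: map what the caller says to the relevant sections, then let the human decide what to do with them. The following is a hypothetical Python sketch; the section numbers, snippet text and naive keyword matching are invented placeholders, not the department's actual system.

```python
# invented tax-code snippets for illustration only
SECTIONS = {
    "A-100": "Sellers must keep records adequate to determine tax owed.",
    "B-200": "Returns and payments are due by the posted deadline.",
    "C-300": "Penalties and interest apply to late payments.",
}

KEYWORDS = {
    "records": "A-100", "receipt": "A-100",
    "deadline": "B-200", "due date": "B-200",
    "late": "C-300", "penalty": "C-300",
}

def suggest_sections(transcript: str) -> list[str]:
    """Return candidate sections; the human agent decides what to use."""
    hits = {sec for kw, sec in KEYWORDS.items() if kw in transcript.lower()}
    return [f"Section {s}: {SECTIONS[s]}" for s in sorted(hits)]

print(suggest_sections("I paid late. Will there be a penalty?"))
```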
“If it turns out it doesn’t serve the public better, then we’re out $1,” Maduros said. “And I think that’s a pretty good deal for the citizens of California.”
Tong wouldn’t say when a successfully vetted tool might be deployed but added that the state was moving as fast as it could.
“The whole essence of using GenAI is it doesn’t take years,” Tong said. “GenAI doesn’t wait for you.”
…