Meta’s new AI agents confuse Facebook users 

CAMBRIDGE, Massachusetts — Facebook parent Meta Platforms has unveiled a new set of artificial intelligence systems that are powering what CEO Mark Zuckerberg calls “the most intelligent AI assistant that you can freely use.” 

But as Zuckerberg’s crew of amped-up Meta AI agents started venturing into social media in recent days to engage with real people, their bizarre exchanges exposed the ongoing limitations of even the best generative AI technology. 

One joined a Facebook moms group to talk about its gifted child. Another tried to give away nonexistent items to confused members of a Buy Nothing forum. 

Meta and other leading AI developers, including Google and OpenAI and startups such as Anthropic, Cohere and France’s Mistral, have been churning out new AI language models, each hoping to convince customers that it has the smartest, handiest or most efficient chatbot. 

While Meta is saving the most powerful version of its new Llama 3 model for later, on Thursday it publicly released two smaller versions of the same Llama 3 system and said the model is now baked into the Meta AI assistant feature in Facebook, Instagram and WhatsApp. 

AI language models are trained on vast pools of data that help them predict the most plausible next word in a sentence, with newer versions typically smarter and more capable than their predecessors. Meta’s newest models were built with 8 billion and 70 billion parameters — the adjustable internal values a model learns during training, and a rough gauge of its size and capability. A bigger, roughly 400 billion-parameter model is still in training. 
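
To make that idea concrete, here is a minimal next-word-prediction sketch in Python. It is purely illustrative, not Meta’s system: it counts which word most often follows another in a tiny corpus, whereas Llama 3’s billions of parameters learn far richer patterns over much longer contexts.

    from collections import Counter, defaultdict

    # Tiny training corpus; real models train on trillions of words.
    corpus = "the cat sat on the mat and the cat ate the fish".split()

    # Count how often each word follows each preceding word.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Return the most frequent word seen after `word`, if any."""
        counts = following[word]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # prints 'cat', the most common continuation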

“The vast majority of consumers don’t candidly know or care too much about the underlying base model, but the way they will experience it is just as a much more useful, fun and versatile AI assistant,” Nick Clegg, Meta’s president of global affairs, said in an interview. 

‘A little stiff’

He added that Meta’s AI agent is loosening up. Some people found the earlier Llama 2 model — released less than a year ago — to be “a little stiff and sanctimonious sometimes in not responding to what were often perfectly innocuous or innocent prompts and questions,” he said. 

But in letting down their guard, Meta’s AI agents have also been spotted posing as humans with made-up life experiences. An official Meta AI chatbot inserted itself into a conversation in a private Facebook group for Manhattan moms, claiming that it, too, had a child in the New York City school district. Confronted by group members, it later apologized before the comments disappeared, according to a series of screenshots shown to The Associated Press. 

“Apologies for the mistake! I’m just a large language model, I don’t have experiences or children,” the chatbot told the group. 

One group member who also happens to study AI said it was clear that the agent didn’t know how to differentiate a helpful response from one that would be seen as insensitive, disrespectful or meaningless when generated by AI rather than a human. 

“An AI assistant that is not reliably helpful and can be actively harmful puts a lot of the burden on the individuals using it,” said Aleksandra Korolova, an assistant professor of computer science at Princeton University. 

Clegg said Wednesday that he wasn’t aware of the exchange. Facebook’s online help page says the Meta AI agent will join a group conversation if invited, or if someone “asks a question in a post and no one responds within an hour.” The group’s administrators have the ability to turn it off. 

Need a camera?

In another example shown to the AP on Thursday, the agent caused confusion in a forum for swapping unwanted items near Boston. Exactly one hour after a Facebook user posted about looking for certain items, an AI agent offered a “gently used” Canon camera and an “almost-new portable air conditioning unit that I never ended up using.” 

Meta said in a written statement Thursday that “this is new technology and it may not always return the response we intend, which is the same for all generative AI systems.” The company said it is constantly working to improve the features. 

In the year after ChatGPT sparked a frenzy for AI technology that generates human-like writing, images, code and sound, the tech industry and academia introduced 149 large AI systems trained on massive datasets, more than double the year before, according to a Stanford University survey. 

They may eventually hit a limit, at least when it comes to data, said Nestor Maslej, a research manager for Stanford’s Institute for Human-Centered Artificial Intelligence. 

“I think it’s been clear that if you scale the models on more data, they can become increasingly better,” he said. “But at the same time, these systems are already trained on percentages of all the data that has ever existed on the internet.” 

More data — acquired and ingested at costs only tech giants can afford, and increasingly subject to copyright disputes and lawsuits — will continue to drive improvements. “Yet they still cannot plan well,” Maslej said. “They still hallucinate. They’re still making mistakes in reasoning.” 

Getting to AI systems that can perform higher-level cognitive tasks and common-sense reasoning — where humans still excel — might require a shift beyond building ever-bigger models. 

Seeing what works

For the flood of businesses trying to adopt generative AI, which model they choose depends on several factors, including cost. Language models, in particular, have been used to power customer service chatbots, write reports and financial insights, and summarize long documents. 

“You’re seeing companies kind of looking at fit, testing each of the different models for what they’re trying to do and finding some that are better at some areas rather than others,” said Todd Lohr, a leader in technology consulting at KPMG. 

Unlike other model developers selling their AI services to other businesses, Meta is largely designing its AI products for consumers — those using its advertising-fueled social networks. Joelle Pineau, Meta’s vice president of AI research, said at a recent London event that the company’s goal over time is to make a Llama-powered Meta AI “the most useful assistant in the world.” 

“In many ways, the models that we have today are going to be child’s play compared to the models coming in five years,” she said. 

But she said the “question on the table” is whether researchers have been able to fine-tune the bigger Llama 3 model so that it’s safe to use and doesn’t, for example, hallucinate or engage in hate speech. In contrast to leading proprietary systems from Google and OpenAI, Meta has so far advocated for a more open approach, publicly releasing key components of its AI systems for others to use. 

“It’s not just a technical question,” Pineau said. “It is a social question. What is the behavior that we want out of these models? How do we shape that? And if we keep on growing our model ever more in general and powerful without properly socializing them, we are going to have a big problem on our hands.”

Developers: Enhanced AI could outthink humans in 2 to 5 years

VANCOUVER, British Columbia — Just as the world is getting used to the rapidly expanding use of artificial intelligence, or AI, AGI is looming on the horizon.

Experts say that when artificial general intelligence becomes reality, it could perform tasks better than human beings, with the possibility of higher cognitive abilities, emotions, and the ability to teach itself and develop.

Ramin Hasani is a research scientist at the Massachusetts Institute of Technology and the CEO of Liquid AI, which builds specific AI systems for different organizations. He is also a TED Fellow, part of a program that helps develop what the nonprofit TED conference considers to be “game changers.”

Hasani says that the first signs of AGI are realistically two to five years away. He says it will have a direct impact on our everyday lives.

What’s coming, he says, will be “an AI system that can have the collective knowledge of humans. And that can beat us in tasks that we do in our daily life, something you want to do … your finances, you’re solving, you’re helping your daughter to solve their homework. And at the same time, you want to also read a book and do a summary. So an AGI would be able to do all that.”

Hasani says that advancing artificial intelligence will allow things to move faster and that AI systems can even be made to have emotions.

He says proper regulation can be achieved by better understanding how different AI systems are developed.

This thought is shared by Bret Greenstein, a partner at London-based PricewaterhouseCoopers who leads its efforts on artificial intelligence.

“I think one is a personal responsibility for people in leadership positions, policymakers, to be educated on the topic, not in the fact that they’ve read it, but to experience it, live it and try it. And to be with people who are close to it, who understand it,” he says.

Greenstein warns that if AI is over-regulated, innovation will be curtailed and access will be limited for the people who could benefit from it.

For musician, comedian and actor Reggie Watts, who was the bandleader on “The Late Late Show with James Corden” on CBS, AI and the coming of AGI will be a great way to find mediocre music, because mediocre music is easily mimicked.

Calling it “artificial consciousness,” he says existing laws to protect intellectual property rights and creative industries, like music, TV and film, will work, provided they are properly adapted.

“I think it’s just about the usage of the tool, how it’s … how it’s used. Is there money being made off of it, so on, so forth. So, I think that we already have … tools that exist that deal with these types of situations, but [the laws and regulations] need to be expanded to include AI because there’ll probably be a lot more nuance to it.”

Watts says that any form of AI is going to be smarter than any one person, almost like all human intelligence collected into one point. He feels this will lead humanity to discover interesting things about the nature of reality itself.

This year’s conference was the 40th for TED, the nonprofit organization whose name is an acronym for Technology, Entertainment and Design.

Google fires 28 workers protesting contract with Israel

NEW YORK — Google fired 28 employees following a disruptive sit-down protest over the tech giant’s contract with the Israeli government, a Google spokesperson said Thursday.

The Tuesday demonstration was organized by the group “No Tech for Apartheid,” which has long opposed “Project Nimbus,” Google’s joint $1.2 billion contract with Amazon to provide cloud services to the government of Israel.

Video of the demonstration showed police arresting Google workers in Sunnyvale, California, in the office of Google Cloud CEO Thomas Kurian, according to a post by the advocacy group on X, formerly Twitter.

Kurian’s office was occupied for 10 hours, the advocacy group said.

Workers held signs including “Googlers against Genocide,” a reference to accusations surrounding Israel’s attacks on Gaza.

“No Tech for Apartheid,” which also held protests in New York and Seattle, pointed to an April 12 Time magazine article reporting on a draft contract that showed Google billing the Israeli Ministry of Defense more than $1 million for consulting services.

A “small number” of employees “disrupted” a few Google locations, but the protests are “part of a longstanding campaign by a group of organizations and people who largely don’t work at Google,” a Google spokesperson said.

“After refusing multiple requests to leave the premises, law enforcement was engaged to remove them to ensure office safety,” the Google spokesperson said. “We have so far concluded individual investigations that resulted in the termination of employment for 28 employees, and will continue to investigate and take action as needed.”

Israel is one of “numerous” governments for which Google provides cloud computing services, the Google spokesperson said.

“This work is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services,” the Google spokesperson said.


AI-generated fashion models could bring more diversity to industry — or leave it with less

CHICAGO, Illinois — London-based model Alexsandrah has a twin, but not in the way you’d expect: Her counterpart is made of pixels instead of flesh and blood.

The virtual twin was generated by artificial intelligence and has already appeared as a stand-in for the real-life Alexsandrah in a photo shoot. Alexsandrah, who goes by her first name professionally, in turn receives credit and compensation whenever the AI version of herself gets used — just like a human model.

Alexsandrah says she and her alter-ego mirror each other “even down to the baby hairs.” And it is yet another example of how AI is transforming creative industries — and the way humans may or may not be compensated.

Proponents say the growing use of AI in fashion modeling showcases diversity in all shapes and sizes, allowing consumers to make more tailored purchase decisions that in turn reduces fashion waste from product returns. And digital modeling saves money for companies and creates opportunities for people who want to work with the technology.

But critics raise concerns that digital models may push human models — and other professionals like makeup artists and photographers — out of a job. Unsuspecting consumers could also be fooled into thinking AI models are real, and companies could claim credit for fulfilling diversity commitments without employing actual humans.

“Fashion is exclusive, with limited opportunities for people of color to break in,” said Sara Ziff, a former fashion model and founder of the Model Alliance, a nonprofit aiming to advance workers’ rights in the fashion industry. “I think the use of AI to distort racial representation and marginalize actual models of color reveals this troubling gap between the industry’s declared intentions and their real actions.”  

Women of color in particular have long faced higher barriers to entry in modeling, and AI could upend some of the gains they’ve made. Data suggests that women are more likely than men to work in occupations to which the technology could be applied, putting them at greater risk of displacement.

In March 2023, iconic denim brand Levi Strauss & Co. announced that it would be testing AI-generated models produced by Amsterdam-based company Lalaland.ai to add a wider range of body types and underrepresented demographics on its website. But after receiving widespread backlash, Levi clarified that it was not pulling back on its plans for live photo shoots, the use of live models or its commitment to working with diverse models.

“We do not see this (AI) pilot as a means to advance diversity or as a substitute for the real action that must be taken to deliver on our diversity, equity and inclusion goals and it should not have been portrayed as such,” Levi said in its statement at the time.

The company last month said that it has no plans to scale the AI program.

The Associated Press reached out to several other retailers to ask whether they use AI fashion models. Target, Kohl’s and fast-fashion giant Shein declined to comment; Temu did not respond to a request for comment.

Meanwhile, spokespeople for Neiman Marcus, H&M, Walmart and Macy’s said their respective companies do not use AI models, although Walmart clarified that “suppliers may have a different approach to photography they provide for their products, but we don’t have that information.”

Nonetheless, companies that generate AI models are finding demand for the technology, including Lalaland.ai, which Michael Musandu co-founded after growing frustrated by the absence of clothing models who looked like him.

“One model does not represent everyone that’s actually shopping and buying a product,” he said. “As a person of color, I felt this painfully myself.”

Musandu says his product is meant to supplement traditional photo shoots, not replace them. Instead of seeing one model, shoppers could see nine to 12 models using different size filters, which would enrich their shopping experience and help reduce product returns and fashion waste.

The technology is actually creating new jobs, since Lalaland.ai pays humans to train its algorithms, Musandu said.

And if brands “are serious about inclusion efforts, they will continue to hire these models of color,” he added.

Alexsandrah, who is Black, says her digital counterpart has helped her distinguish herself in the fashion industry. In fact, the real-life Alexsandrah has even stood in for Shudu, a Black computer-generated model created by Cameron Wilson, a former fashion photographer turned CEO of The Diigitals, a U.K.-based digital modeling agency.

Wilson, who is white and uses they/them pronouns, designed Shudu in 2017, describing her on Instagram as “The World’s First Digital Supermodel.” But critics at the time accused Wilson of cultural appropriation and digital Blackface.

Wilson took the experience as a lesson and transformed The Diigitals to make sure Shudu — who has been booked by Louis Vuitton and BMW — didn’t take away opportunities but instead opened possibilities for women of color. Alexsandrah, for instance, has modeled in-person as Shudu for Vogue Australia, and writer Ama Badu came up with Shudu’s backstory and portrays her voice for interviews.

Alexsandrah said she is “extremely proud” of her work with The Diigitals, which created her own AI twin: “It’s something that even when we are no longer here, the future generations can look back at and be like, ‘These are the pioneers.'”

But for Yve Edmond, a New York City area-based model who works with major retailers to check the fit of clothing before it’s sold to consumers, the rise of AI in fashion modeling feels more insidious.

Edmond worries modeling agencies and companies are taking advantage of models, who are generally independent contractors afforded few labor protections in the U.S., by using their photos to train AI systems without their consent or compensation.

She described one incident in which a client asked to photograph Edmond moving her arms, squatting and walking for “research” purposes. Edmond refused and later felt swindled — her modeling agency had told her she was being booked for a fitting, not to build an avatar.

“This is a complete violation,” she said. “It was really disappointing for me.”

But absent AI regulations, it’s up to companies to be transparent and ethical about deploying AI technology. And Ziff, the founder of the Model Alliance, likens the current lack of legal protections for fashion workers to “the Wild West.”

That’s why the Model Alliance is pushing for legislation like the Fashion Workers Act being considered in New York state, a provision of which would require management companies and brands to obtain a model’s clear written consent to create or use a digital replica, specify the amount and duration of compensation, and prohibit altering or manipulating the replica without consent.

Alexsandrah says that with ethical use and the right legal regulations, AI might open up doors for more models of color like herself. She has let her clients know that she has an AI replica, and she funnels any inquiries for its use through Wilson, whom she describes as “somebody that I know, love, trust and is my friend.” Wilson says they make sure any compensation for Alexsandrah’s AI is comparable to what she would make in person.

Edmond, however, is more of a purist: “We have this amazing Earth that we’re living on. And you have a person of every shade, every height, every size. Why not find that person and compensate that person?”

Instagram blurring nudity in messages to protect teens, fight sexual extortion

LONDON — Instagram says it’s deploying new tools to protect young people and combat sexual extortion, including a feature that will automatically blur nudity in direct messages.

The social media platform said in a blog post Thursday that it’s testing out the features as part of its campaign to fight sexual scams and other forms of “image abuse,” and to make it tougher for criminals to contact teens.

Sexual extortion, or sextortion, involves persuading a person to send explicit photos online and then threatening to make the images public unless the victim pays money or engages in sexual favors. Recent high-profile cases include two Nigerian brothers who pleaded guilty to sexually extorting teen boys and young men in Michigan, including one who took his own life, and a Virginia sheriff’s deputy who sexually extorted and kidnapped a 15-year-old girl.

Instagram and other social media companies have faced growing criticism for not doing enough to protect young people. Mark Zuckerberg, the CEO of Instagram’s owner Meta Platforms, apologized to the parents of victims of such abuse during a Senate hearing earlier this year.

Meta, which is based in Menlo Park, California, also owns Facebook and WhatsApp, but the nudity blur feature won’t be added to messages sent on those platforms.

Instagram said scammers often use direct messages to ask for “intimate images.” To counter this, it will soon start testing out a nudity-protection feature for direct messages that blurs any images with nudity “and encourages people to think twice before sending nude images.”

“The feature is designed not only to protect people from seeing unwanted nudity in their DMs, but also to protect them from scammers who may send nude images to trick people into sending their own images in return,” Instagram said.

The feature will be turned on by default globally for teens under 18. Adult users will get a notification encouraging them to activate it.

Images with nudity will be blurred with a warning, giving users the option to view them. They’ll also get an option to block the sender and report the chat.

People sending direct messages with nudity will get a message reminding them to be cautious when sending “sensitive photos.” They’ll also be informed that they can unsend the photos if they change their mind, but that there’s a chance others may have already seen them.

As with many of Meta’s tools and policies around child safety, critics saw the move as a positive step, but one that does not go far enough.

“I think the tools announced can protect senders, and that is welcome. But what about recipients?” said Arturo Béjar, former engineering director at the social media giant who is known for his expertise in curbing online harassment. He said 1 in 8 teens receives an unwanted advance on Instagram every seven days, citing internal research he compiled while at Meta that he presented in November testimony before Congress. “What tools do they get? What can they do if they get an unwanted nude?”

Béjar said “things won’t meaningfully change” until there is a way for a teen to say they’ve received an unwanted advance, and there is transparency about it.

Instagram said it’s working on technology to help identify accounts that could potentially be engaging in sexual extortion scams, “based on a range of signals that could indicate sextortion behavior.”

To stop criminals from connecting with young people, it’s also taking measures including not showing the “message” button on a teen’s profile to potential sextortion accounts, even if they already follow each other, and testing new ways to hide teens from these accounts.

In January, the FBI warned of a “huge increase” in sextortion cases targeting children — including financial sextortion, where someone threatens to release compromising images unless the victim pays. The targeted victims are primarily boys between the ages of 14 and 17, but the FBI said any child can become a victim. In the six-month period from October 2022 to March 2023, the FBI saw a more than 20% increase in reports of financially motivated sextortion cases involving minor victims compared with the same period in the previous year.

Swarms of drones can be managed by a single person

The U.S. military says large groups of drones and ground robots can be managed by just one person without added stress to the operator. As VOA’s Julie Taboh reports, the technologies may be beneficial for civilian uses, too. VOA footage by Adam Greenbaum.


Indiana aspires to become next great tech center

INDIANAPOLIS, Indiana — Semiconductors, or microchips, are critical to almost everything electronic used in the modern world. In 1990, the United States produced about 40% of the world’s semiconductors. As manufacturing migrated to Asia, U.S. production fell to about 12%.  

“During COVID, we got a wake-up call. It was like [a] Sputnik moment,” explained Mark Lundstrom, an engineer who has worked with microchips much of his life. 

The 2020 global coronavirus pandemic slowed production in Asia, creating a ripple through the global supply chain and leading to shortages of everything from phones to vehicles. Lundstrom said increasing U.S. reliance on foreign chip manufacturers exposed a major weakness. 

“We know that AI is going to transform society in the next several years; it requires extremely powerful chips. The most powerful leading-edge chips.” 

Today, Lundstrom is the acting dean of engineering at Purdue University in West Lafayette, Indiana, a leader in cutting-edge semiconductor development, which has new importance amid the emerging field of artificial intelligence. 

“If we fall behind in AI, the consequences are enormous for the defense of our country, for our economic future,” Lundstrom told VOA. 

Amid the buzz of activity in a laboratory on Purdue’s campus, visitors can get a vision of what the future might look like in microchip technology. 

“The key metrics of the performance of the chips actually are the size of the transistors, the devices, which is the building block of the computer chips,” said Zhihong Chen, director of Purdue’s Birck Nanotechnology Center, where engineers work around the clock to push microchip technology into the future. 

“We are talking about a few atoms in each silicon transistor these days. And this is what this whole facility is about,” Chen said. “We are trying to make the next generation transistors better devices than current technologies. More powerful and more energy-efficient computer chips of the future.” 

Not just RVs anymore

Because of Purdue’s efforts, along with those on other university campuses in the state, Indiana believes it’s an attractive location for manufacturers looking to build new microchip facilities. 

“Purdue University alone, a top four-ranked engineering school, offers more engineers every year than the next top three,” said Eric Holcomb, Indiana’s Republican governor. “When you have access to that kind of talent, when you have access to the cost of doing business in the state of Indiana, that’s why people are increasingly saying, Indiana.” 

Holcomb is in the final year of his eight-year tenure in the state’s top position. He wants to transform Indiana’s image beyond being the recreational vehicle, or RV, capital of the country.  

“We produce about plus-80% of all the RV production in North America in one state,” he told VOA. “We are not just living up to our reputation as being the number one manufacturing state per capita in America, but we are increasingly embracing the future of mobility in America.” 

Holcomb is spearheading an effort to make Indiana the next great technology center as the U.S. ramps up investment in domestic microchip development and manufacturing.  “If we want to compete globally, we have to get smarter and healthier and more equipped, and we have to continue to invest in our quality of place,” Holcomb told VOA in an interview. 

His vision is shared by other lawmakers, including U.S. Senator Todd Young of Indiana, who co-sponsored the bipartisan CHIPS and Science Act, which commits more than $50 billion in federal funding for domestic microchip development. 

‘We are committed’

Indiana is now home to one of 31 designated U.S. technology and innovation hubs, helping it qualify for hundreds of millions of dollars in grants designed to attract technology-driven businesses. 

“The signal that it sends to the rest of the world [is] that we are in it, we are committed, and we are focused,” said Holcomb. “We understand that economic development, economic security and national security complement one another.” 

Indiana’s efforts are paying off. 

In April, South Korean microchip manufacturer SK Hynix announced it was planning to build a $4 billion facility near Purdue University that would produce next-generation, high-bandwidth memory, or HBM chips, critical for artificial intelligence applications.  

The facility, slated to start operating in 2028, could create more than 1,000 new jobs. U.S. chip manufacturer SkyWater also plans to invest nearly $2 billion in Indiana’s new LEAP Innovation District near Purdue. But the state recently lost a bid to host chipmaker Intel, which selected Ohio for two new factories. 

“Companies tend to like to go to locations where there is already that infrastructure, where that supply chain is in place,” Purdue’s Lundstrom said. “That’s a challenge for us, because this is a new industry for us. So, we have a chicken-and-egg problem that we have to address, and we are beginning to address that.” 

Lundstrom said the CHIPS and Science Act and the federal money that comes with it are helping Indiana ramp up to compete with other U.S. locations already known for microchip development, such as Silicon Valley in California and Arizona. 

What could help Indiana gain an edge is its natural resources — plenty of land and water, and regular weather patterns, all crucial for the sensitive processes needed to manufacture microchips at large manufacturing centers. 


Ukrainian civilians help build up their country’s drone fleet

Inexpensive first-person view — or radio-controlled — drones have become a powerful weapon in Ukraine’s war against Russian invaders. As the country presses the West for more military aid, many Ukrainian civilians are stepping in to help by making homemade attack drones. Lesia Bakalets has the story from Kyiv.


With $6.6B to Arizona hub, Biden touts big steps in US chipmaking

WASHINGTON; FLAGSTAFF, Arizona — President Joe Biden on Monday announced a $6.6 billion grant to Taiwan’s top chip manufacturer to produce semiconductors in the southwestern U.S. state of Arizona, supporting plans that include a third facility and bring the foreign tech giant’s investment in the state to $65 billion.

Biden said the move aims to reverse a decades-old slump in American chip manufacturing. Taiwan Semiconductor Manufacturing Company (TSMC), which is based on the island that China claims as its territory, holds more than half of the global market share in chip manufacturing.

The new facility, Biden said, will put the U.S. on track to produce 20% of the world’s leading-edge semiconductors by 2030.

“I was determined to turn that around, and thanks to my CHIPS and Science Act — a key part of my Investing in America agenda — semiconductor manufacturing and jobs are making a comeback,” Biden said in a statement.

U.S. production of this American-born technology has fallen steeply in recent decades, said Andy Wang, dean of engineering at Northern Arizona University.

“As a nation, we used to produce 40% of microchips for the whole world,” he told VOA. “Now, we produce less than 10%.”

A single semiconductor transistor is smaller than a grain of sand. But billions of them, packed neatly together, can connect the world through a mobile phone, control sophisticated weapons of war and satellites that orbit the Earth, and someday may even drive a car.

The immense value of these tiny chips has fueled fierce competition between the U.S. and China.

The U.S. Department of Commerce has taken several steps to hamper China’s efforts to build its own chip industry. Those include export controls and new rules to prevent “foreign countries of concern” — which it said includes China, Iran, North Korea and Russia — from benefiting from funding from the CHIPS and Science Act.

While analysts are divided over whether Taiwan’s dominance of this critical industry makes it more or less vulnerable to Chinese aggression, they agree it confers significant global status on the island.

“It is debatable what, if any, role Taiwan’s semiconductor manufacturing prowess plays in deterrence,” said David Sacks, an analyst who focuses on U.S.-China relations at the Council on Foreign Relations. “What is not debatable is how devastating an attack on Taiwan would be for the global economy.”

Biden did not mention U.S. adversaries in his statement, but he noted the impact of Monday’s announcement, saying it “represent(s) a broader story for semiconductor manufacturing that’s made in America and with the strong support of America’s leading technology firms to build the products we rely on every day.”

VOA met with engineers in the new technological hub state, who said the legislation addresses a key weakness in American chip manufacturing.

“We’ve just gotten in the cycle of the last 15 to 20 years, where innovation has slowed down,” said Todd Achilles, who teaches innovation, strategy and policy analysis at the University of California-Berkeley. “It’s all about financial results, investor payouts and stock buybacks. And we’ve lost that innovation muscle. And the CHIPS Act — pulling that together with the CHIPS Act — is the perfect opportunity to restore that.”

The White House says this new investment could create 25,000 construction and manufacturing jobs. Academics say they’re churning out workers at a rapid pace, but that America still lacks talent.

“Our engineering college is the largest in the country, with over 33,000 enrolled students, and still we’re hearing from companies across the semiconductor industry that they’re not able to get the talent they need in time,” Zachary Holman, vice dean for research and innovation at Arizona State University, told VOA.

And as the American industry stretches to keep pace, it races a technical trend known as Moore’s Law: the observation that the number of transistors in a computer chip doubles about every two years. As a result, cutting-edge chips get ever smaller as they grow in computing power.
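
As a rough, hypothetical illustration of that doubling trend, the short Python sketch below starts from a 1-billion-transistor chip and doubles the count every two years; after 20 years (10 doublings), the count has grown roughly a thousandfold.

    # Hypothetical chip following the two-year doubling trend.
    transistors = 1_000_000_000
    for year in range(0, 21, 2):
        print(f"year {year:2d}: {transistors:,} transistors")
        transistors *= 2  # one doubling per two-year step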

TSMC in 2022 broke ground on a facility that makes the smallest chip currently available, coming in at 3 nanometers — that’s just wider than a strand of DNA.

Reporter Levi Stallings contributed to this report from Flagstaff, Arizona.


Experts fear Cambodian cybercrime law could aid crackdown

PHNOM PENH, Cambodia — The Cambodian government is pushing ahead with a cybercrime law experts say could be wielded to further curtail freedom of speech amid an ongoing crackdown on dissent. 

The cybercrime draft is the third controversial internet law authorities have pursued in the past year as the government, led by new Prime Minister Hun Manet, seeks greater oversight of internet activities. 

Obtained by VOA in both English and Khmer language versions, the latest draft of the cybercrime law is marked “confidential” and contains 55 articles. It lays out various offenses punishable by fines and jail time, including defamation, using “insulting, derogatory or rude language,” and sharing “false information” that could harm Cambodia’s public order and “traditional culture.”  

The law would also allow authorities to collect and record internet traffic data, in real time, of people under investigation for crimes, and would criminalize online material that “depicts any act or activity … intended to stimulate sexual desire” as pornography. 

Digital rights and legal experts who reviewed the law told VOA that its vague language, wide-ranging categories of prosecutable speech and lack of protections for citizens fall short of international standards, instead providing the government more tools to jail dissenters, opposition members, women and LGBTQ+ people. 

The law has been in the works since 2016. Earlier drafts, which sparked similar criticism, leaked in 2020 and 2021, but no newer version had surfaced until now. Authorities hope to enact the law by the end of the year. 

“This cybercrime bill offers the government even more power to go after people expressing dissent,” Kian Vesteinsson, a senior research analyst for technology at the human rights organization Freedom House, told VOA.  

“These vague provisions around defamation, insults and disinformation are ripe for abuse, and we know that Cambodian authorities have deployed similarly vague criminal provisions in other contexts,” Vesteinsson said. 

Cambodian law already treats defamation as a criminal offense, but the cybercrime draft would make it punishable by up to six months in jail, plus a fine of up to $5,000. The “false information” clause — defined as sharing information that “intentionally harms national defense, national security, relations with other countries, economy, public order, or causes discrimination, or affects traditional culture” — carries a three- to five-year sentence and a fine of up to $25,000. 

Daron Tan, associate international legal adviser at the International Commission of Jurists, told VOA the defamation and false information articles do not comply with the International Covenant on Civil and Political Rights, to which Cambodia is a party, and that the United Nations Human Rights Committee is “very clear that imprisonment is never the appropriate penalty for defamation.” 

“It’s a step very much in the wrong direction,” Tan said. “We are very worried that this would expand the laws that the government can use against its critics.” 

Chea Pov, the deputy head of Cambodia’s National Police and former director of the Ministry of Interior’s Anti-Cybercrime Department, which is overseeing the drafting process, told VOA the law “doesn’t restrict your rights” and claimed the U.S. companies that reviewed it “didn’t raise concerns.”  

Google, Meta and Amazon, which the government has said were involved in drafting the law, did not respond to requests for comment. 

“If you say something based on evidence, there is no problem,” Pov said. “But if there is no evidence, [you] defame others, which is also stated in the criminal law … we don’t regard this as a restriction.”  

The law also makes it illegal to use technology to display, trade, produce or disseminate pornography, or to advertise a “product or service mixed with pornography” online. Pornography is defined as anything that “describes a genital or depicts any act or activity involving a sexual organ or any part of the human body, animal, or object … or other similar pornography that is intended to stimulate sexual desire or cause sexual excitement.” 

Experts say this broad category is likely to be disproportionately deployed against women and LGBTQ+ people. 

Cambodian authorities have often rebuked or arrested women for dressing “too sexily” on social media, singing sexual songs or using suggestive speech. In 2020, an online clothes and cosmetics seller received a six-month suspended sentence after posting provocative photos; in another incident, a policewoman was forced to publicly apologize for posting photos of herself breastfeeding. 

Naly Pilorge, outreach director at Cambodian human rights organization Licadho, told VOA the draft law “could lead to more rights violations against women in the country.” 

“This vague definition of ‘pornography’ poses a serious threat to any woman whose online activity the government decides may ‘cause sexual excitement,’” Pilorge said. “The draft law does not acknowledge any legitimate artistic or educational purposes to depict or describe sexual organs, posing another threat to freedom of expression.” 

In March, authorities said they hosted civil society organizations to revisit the draft. They plan to complete the drafting process and send the law to Parliament for passage before the end of the year, according to Pov, the deputy head of police. 

Soeung Saroeun, executive director of the NGO Forum on Cambodia, told VOA “there was no consultation on each article” at the recent meeting. 

“The NGO representatives were unable to analyze and present their inputs,” said Saroeun, echoing concerns about its contents. “How is it [possible]? We need to debate on this.” 

The cybercrime law has resurfaced as the government works to complete two other draft internet laws, one covering cybersecurity and the other personal data protection. Experts have critiqued the drafts as providing expanded police powers to seize computer systems and making citizens’ data vulnerable to hacking and surveillance. 

Authorities have also sought to create a national internet gateway that would require traffic to run through centralized government servers, though the status of that project has been unclear since early 2022 when the government said it faced delays. 

Biden administration announces $6.6 billion to ensure leading-edge microchips are built in US 

WILMINGTON, Delaware — The Biden administration pledged on Monday to provide up to $6.6 billion so that a Taiwanese semiconductor giant can expand the facilities it is already building in Arizona and better ensure that the most-advanced microchips are produced domestically for the first time. 

Commerce Secretary Gina Raimondo said the funding for Taiwan Semiconductor Manufacturing Co. means the company can expand on its existing plans for two facilities in Phoenix and add a third, newly announced production hub. 

“These are the chips that underpin all artificial intelligence, and they are the chips that are the necessary components for the technologies that we need to underpin our economy,” Raimondo said on a call with reporters, adding that they were vital to the “21st century military and national security apparatus.” 

The funding is tied to a sweeping 2022 law that President Joe Biden has celebrated and which is designed to revive U.S. semiconductor manufacturing. Known as the CHIPS and Science Act, the $280 billion package is aimed at sharpening the U.S. edge in military technology and manufacturing while minimizing the kinds of supply disruptions that occurred in 2021, after the start of the coronavirus pandemic, when a shortage of chips stalled factory assembly lines and fueled inflation. 

The Biden administration has promised tens of billions of dollars to support construction of U.S. chip foundries and reduce reliance on Asian suppliers, which Washington sees as a security weakness. 

“Semiconductors – those tiny chips smaller than the tip of your finger – power everything from smartphones to cars to satellites and weapons systems,” Biden said in a statement. “TSMC’s renewed commitment to the United States, and its investment in Arizona represent a broader story for semiconductor manufacturing that’s made in America and with the strong support of America’s leading technology firms to build the products we rely on every day.” 

Taiwan Semiconductor Manufacturing Co. produces nearly all of the leading-edge microchips in the world and plans to eventually do so in the U.S. 

It began construction of its first facility in Phoenix in 2021, and started work on a second hub last year, with the company increasing its total investment in both projects to $40 billion. The third facility should be producing microchips by the end of the decade and will see the company’s commitment increase to a total of $65 billion, Raimondo said. 

The investments would put the U.S. on track to produce roughly 20% of the world’s leading-edge chips by 2030, and Raimondo said they should help create 6,000 manufacturing jobs and 20,000 construction jobs, as well as thousands of new positions with suppliers and other chip-related industries connected to the Arizona projects. 

The potential incentives announced Monday include $50 million to help train the workforce in Arizona to be better equipped to work in the new facilities. Additionally, approximately $5 billion of proposed loans would be available through the CHIPS and Science Act. 

“TSMC’s commitment to manufacture leading-edge chips in Arizona marks a new chapter for America’s semiconductor industry,” Lael Brainard, director of the White House National Economic Council, told reporters. 

The announcement came as U.S. Treasury Secretary Janet Yellen was traveling in China. Senior administration officials were asked on the call with reporters whether the Biden administration gave China a heads-up on the coming investment, given the delicate geopolitics surrounding Taiwan. The officials said their focus in making Monday’s announcement was solely on advancing U.S. manufacturing. 

“We are thrilled by the progress of our Arizona site to date,” C.C. Wei, CEO of TSMC, said in a statement, “and are committed to its long-term success.” 

Exclusive: Russian company supplies military with microchips despite denials

PENTAGON — Russian microchip company AO PKK Milandr continued to provide microchips to the Russian armed forces at least several months after Russia invaded Ukraine, despite public denials by company director Alexey Novoselov of any connection with Russia’s military.

A formal letter obtained by VOA dated February 10, 2023, shows a sale request for 4,080 military grade microchips for the Russian military. The sale request was addressed from a deputy commander of the 546 military representation of the Russian Ministry of Defense and the commercial director of Russian manufacturer NPO Poisk to Milandr CEO S.V. Tarasenko for delivery by April 2023, more than a year into the war.

The letter instructs Milandr to provide three types of microchip components to NPO Poisk, a well-established Russian defense manufacturer that makes detonators for weapons used by the Russian Armed Forces.

“Each of these three circuits that you have in the table on the document, each one of them is classed as a military-grade component … and each of these is manufactured specifically by Milandr,” said Denys Karlovskyi, a research fellow at the London-based Royal United Services Institute for Defense and Security Studies. VOA shared the document with him to confirm its authenticity.

In addition to Milandr CEO Tarasenko, the letter is addressed to I.A. Shvid, a commander of the Russian Defense Ministry’s 514 military representation.

Karlovskyi says this inclusion shows that Milandr, like Poisk, appears to have a Russian commander from the Defense Ministry’s oversight unit assigned to it — a clear indicator that a company is part of Russia’s defense industry.

Milandr, headquartered near Moscow in an area known as “Soviet Silicon Valley,” was sanctioned by the United States in November 2022 for its illegal procurement of microelectronic components using front companies.

In the statement announcing the 2022 sanctions against Milandr and more than three dozen other entities and individuals, U.S. Treasury Secretary Janet Yellen said, “The United States will continue to expose and disrupt the Kremlin’s military supply chains and deny Russia the equipment and technology it needs to wage its illegal war against Ukraine.”

Karlovskyi said that in Russia’s database of public contracts, Milandr is listed in more than 500 contracts, supplying numerous state-owned and military-grade enterprises, including Ural Optical Mechanical Plant, Concern Avtomatika and Izhevsk Electromechanical Plant, or IEMZ Kupol, which also have been sanctioned by the United States.

“It clearly suggests that this entity is a crucial node in Russia’s military supply chain,” Karlovskyi told VOA.

Novoselov, Milandr’s current director, told Bloomberg News last August that he was not aware of any connections to the Russian military.

“I don’t know any military persons who would be interested in our product,” he told Bloomberg in a phone interview, adding that the company mostly produces electric power meters.

The U.S. allegations are “like a fantasy,” he said. “The United States’ State Department, they suppose that every electronics business in Russia is focused on the military. I think that is funny.”

But a U.S. defense official told VOA that helping Russia’s military kill tens of thousands of people in an illegal invasion “is no laughing matter.”

“The company is fueling microchips for missiles and heavily armored vehicles that are used to continue the war in Ukraine,” said the defense official, who spoke to VOA on the condition of anonymity due to the sensitivities of discussing U.S. intelligence.

Milandr’s co-founder Mikhail Pavlyuk was also sanctioned during the summer of 2022 for his involvement in microchip smuggling operations and was caught stealing from Milandr. Pavlyuk fled Russia and has claimed he was not involved.

Officials estimate that 500,000 Ukrainian and Russian troops have been killed or injured in the war, with tens of thousands of Ukrainian civilians killed in the fighting.

“There are consequences to their actions, and the U.S. will persist to expose and disrupt the Kremlin’s supply chain,” the U.S. defense official said.

US, Europe Issue Strictest Rules Yet on AI

WASHINGTON — In recent weeks, the United States, Britain and the European Union have issued the strictest regulations yet on the use and development of artificial intelligence, setting a precedent for other countries.

This month, the United States and the U.K. signed a memorandum of understanding allowing the two countries to partner in developing tests for the most advanced artificial intelligence models, following through on commitments made at the AI Safety Summit last November.

These actions come on the heels of the European Parliament’s March vote to adopt its first set of comprehensive rules on AI. The landmark decision sets out a wide-ranging set of laws to regulate this exploding technology.

At the time, Brando Benifei, co-rapporteur on the Artificial Intelligence Act plenary vote, said, “I think today is again an historic day on our long path towards regulation of AI. … The first regulation in the world that is putting a clear path towards a safe and human-centric development of AI.”

The new rules aim to protect citizens from dangerous uses of AI, while exploring its boundless potential.

Beth Noveck, professor of experiential AI at Northeastern University, expressed enthusiasm about the rules.

“It’s really exciting that the EU has passed really the world’s first … binding legal framework addressing AI. It is, however, not the end; it is really just the beginning.”

The new rules will be applied according to risk level: the higher the risk, the stricter the rules.

“It’s not regulating the tech,” she said. “It’s regulating the uses of the tech, trying to prohibit and to restrict and to create controls over the most malicious uses — and transparency around other uses.

“So things like what China is doing around social credit scoring, and surveillance of its citizens, unacceptable.”

Noveck described what she called “high-risk uses” that would be subject to scrutiny. Those include using AI tools in ways that could deprive people of their liberty, or using them in employment decisions.

“Then there are lower risk uses, such as the use of spam filters, which involve the use of AI or translation,” she said. “Your phone is using AI all the time when it gives you the weather; you’re using Siri or Alexa, we’re going to see a lot less scrutiny of those common uses.”

But as AI experts point out, new laws just create a framework for a new model of governance on a rapidly evolving technology.

Dragos Tudorache, co-rapporteur on the AI Act plenary vote, said, “Because AI is going to have an impact that we can’t only measure through this act, we will have to be very mindful of this evolution of the technology in the future and be prepared.”

In late March, the Biden administration issued the first government-wide policy to mitigate the risks of artificial intelligence while harnessing its benefits.

The announcement followed President Joe Biden’s executive order last October, which called on federal agencies to lead the way toward better governance of the technology without stifling innovation.

“This landmark executive order is testament to what we stand for: safety, security, trust, openness,” Biden said at the time, “proving once again that America’s strength is not just the power of its example, but the example of its power.”

Looking ahead, experts say the challenge will be to update rules and regulations as the technology continues to evolve.

Hybrids, electric vehicles shine at New York auto show

The 2024 New York International Auto Show kicked off in Manhattan in late March — and visitors have until April 7 to admire some of the coolest new car technology. Evgeny Maslov has the story, narrated by Anna Rice. Camera: Michael Eckels.

Scathing federal report rips Microsoft for response to Chinese hack

BOSTON — In a scathing indictment of Microsoft corporate security and transparency, a Biden administration-appointed review board issued a report Tuesday saying “a cascade of errors” by the tech giant let state-backed Chinese cyber operators break into email accounts of senior U.S. officials including Commerce Secretary Gina Raimondo.

The Cyber Safety Review Board, created in 2021 by executive order, describes shoddy cybersecurity practices, a lax corporate culture and a lack of sincerity about the company’s knowledge of the targeted breach, which affected multiple U.S. agencies that deal with China.

It concluded that “Microsoft’s security culture was inadequate and requires an overhaul” given the company’s ubiquity and critical role in the global technology ecosystem. Microsoft products “underpin essential services that support national security, the foundations of our economy, and public health and safety.”

The panel said the intrusion, discovered in June by the State Department and dating to May, “was preventable and should never have occurred,” and it blamed its success on “a cascade of avoidable errors.” What’s more, the board said, Microsoft still doesn’t know how the hackers got in.

The panel made sweeping recommendations, including urging Microsoft to put on hold adding features to its cloud computing environment until “substantial security improvements have been made.”

It said Microsoft’s CEO and board should institute “rapid cultural change,” including publicly sharing “a plan with specific timelines to make fundamental, security-focused reforms across the company and its full suite of products.”

In a statement, Microsoft said it appreciated the board’s investigation and would “continue to harden all our systems against attack and implement even more robust sensors and logs to help us detect and repel the cyber-armies of our adversaries.”

In all, the state-backed Chinese hackers broke into the Microsoft Exchange Online email of 22 organizations and more than 500 individuals around the world — including the U.S. ambassador to China, Nicholas Burns — accessing some cloud-based email boxes for at least six weeks and downloading some 60,000 emails from the State Department alone, the 34-page report said. Three think tanks and foreign government entities, including a number of British organizations, were among those compromised, it said.

The board, convened by Homeland Security Secretary Alejandro Mayorkas in August, accused Microsoft of making inaccurate public statements about the incident — including issuing a statement saying it believed it had determined the likely root cause of the intrusion “when, in fact, it still has not.” Microsoft did not update that misleading blog post, published in September, until mid-March, after the board repeatedly asked if it planned to issue a correction, it said.

The board also expressed concern about a separate hack disclosed by the Redmond, Washington, company in January, this one of email accounts — including those of an undisclosed number of senior Microsoft executives and an undisclosed number of Microsoft customers — and attributed to state-backed Russian hackers.

The board lamented “a corporate culture that deprioritized both enterprise security investments and rigorous risk management.”

The Chinese hack, which Microsoft initially disclosed in a July blog post, was carried out by a group the company calls Storm-0558. That same group, the panel noted, has been engaged in similar intrusions — compromising cloud providers or stealing authentication keys so it can break into accounts — since at least 2009, targeting companies including Google, Yahoo, Adobe, Dow Chemical and Morgan Stanley.

Microsoft noted in its statement that the hackers involved are “well-resourced nation state threat actors who operate continuously and without meaningful deterrence.”

The company said that it recognized that recent events “have demonstrated a need to adopt a new culture of engineering security in our own networks,” and added that it had “mobilized our engineering teams to identify and mitigate legacy infrastructure, improve processes, and enforce security benchmarks.”

Google to delete billions of records following private browsing settlement

US, Britain announce partnership on AI safety, testing

WASHINGTON — The United States and Britain on Monday announced a new partnership on the science of artificial intelligence safety, amid growing concerns about upcoming next-generation versions.

Commerce Secretary Gina Raimondo and British Technology Secretary Michelle Donelan signed a memorandum of understanding in Washington to jointly develop advanced AI model testing, following commitments announced at an AI Safety Summit in Bletchley Park in November.

“We all know AI is the defining technology of our generation,” Raimondo said. “This partnership will accelerate both of our institutes’ work across the full spectrum to address the risks of our national security concerns and the concerns of our broader society.”

Britain and the United States are among countries establishing government-led AI safety institutes.

Britain said in October its institute would examine and test new types of AI, while the United States said in November it was launching its own safety institute to evaluate risks from so-called frontier AI models and is now working with 200 companies and entities.

Under the formal partnership, Britain and the United States plan to perform at least one joint testing exercise on a publicly accessible model and are considering exploring personnel exchanges between the institutes. Both are working to develop similar partnerships with other countries to promote AI safety.

“This is the first agreement of its kind anywhere in the world,” Donelan said. “AI is already an extraordinary force for good in our society and has vast potential to tackle some of the world’s biggest challenges, but only if we are able to grip those risks.”

Generative AI, which can create text, photos and videos in response to open-ended prompts, has spurred excitement as well as fears it could make some jobs obsolete, upend elections and potentially overpower humans, with catastrophic effects.

In a joint interview with Reuters on Monday, Raimondo and Donelan said urgent joint action was needed to address AI risks.

“Time is of the essence because the next set of models are about to be released, which will be much, much more capable,” Donelan said. “We have a focus on the areas that we are dividing and conquering and really specializing.”

Raimondo said she would raise AI issues at a meeting of the U.S.-EU Trade and Technology Council in Belgium Thursday.

The Biden administration plans to soon announce additions to its AI team, Raimondo said. “We are pulling in the full resources of the U.S. government.”

Both countries plan to share key information on capabilities and risks associated with AI models and systems and technical research on AI safety and security.

In October, Biden signed an executive order that aims to reduce the risks of AI. In January, the Commerce Department said it was proposing to require U.S. cloud companies to determine whether foreign entities are accessing U.S. data centers to train AI models.

Britain said in February it would spend more than 100 million pounds ($125.5 million) to launch nine new research hubs and train regulators about the technology.

Raimondo said she was especially concerned about the threat of AI applied to bioterrorism or a nuclear war simulation.

“Those are the things where the consequences could be catastrophic and so we really have to have zero tolerance for some of these models being used for that capability,” she said.

Trump Media shares plummet 21% days after debut

Romania center explores world’s most powerful laser

Kia Recalls 427,000 Telluride SUVs; Could Roll Away While Parked

NEW YORK — Kia is recalling more than 427,000 of its Telluride SUVs due to a defect that may cause the cars to roll away while they’re parked.

According to documents published by the National Highway Traffic Safety Administration, the intermediate shaft and right front driveshaft of certain 2020-2024 Tellurides may not be fully engaged. Over time, this can lead to “unintended vehicle movement” while the cars are in park — increasing potential crash risks.

Kia America decided to recall all 2020-2023 model year and select 2024 model year Tellurides earlier this month, NHTSA documents show. At the time, no injuries or crashes were reported.

Improper assembly is suspected to be the cause of the shaft engagement problem — with the recall covering 2020-2024 Tellurides that were manufactured between Jan. 9, 2019, and Oct. 19, 2023. Kia America estimates that 1% have the defect.

To remedy this issue, recall documents say, dealers will update the affected cars’ electronic parking brake software and replace any damaged intermediate shafts for free. Owners who already incurred repair expenses will also be reimbursed.

In the meantime, drivers of the affected Tellurides are instructed to manually engage the emergency brake before exiting the vehicle. Drivers can also confirm whether their specific vehicle is included in this recall and find more information on the NHTSA site or Kia’s recall lookup platform.

Owner notification letters are set to be mailed out May 15, with dealer notification beginning a few days earlier.

The Associated Press reached out to Irvine, California-based Kia America for further comment Sunday. No comment was received.

Gmail Revolutionized Email 20 Years Ago

SAN FRANCISCO — Google co-founders Larry Page and Sergey Brin loved pulling pranks, so they began rolling out outlandish ideas every April Fool’s Day not long after starting their company more than a quarter century ago. One year, Google posted a job opening for a Copernicus research center on the moon. Another year, the company said it planned to roll out a “scratch and sniff” feature on its search engine.

The jokes were consistently over-the-top, and people learned to laugh them off as another example of Google mischief. That’s why, 20 years ago on April Fool’s Day, Page and Brin decided to unveil something no one would believe was possible.

It was Gmail, a free service boasting 1 gigabyte of storage per account, a figure that sounds almost pedestrian in an age of 1-terabyte iPhones. But back then it was a preposterous amount of email capacity: enough to store about 13,500 emails, compared with just 30 to 60 in the then-leading webmail services run by Yahoo and Microsoft. That translated into 250 to 500 times more email storage space.
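
The arithmetic behind those figures is easy to check. A quick back-of-the-envelope sketch in Python; the 2 MB and 4 MB rival quotas are an assumption implied by the article’s ratios, not a figure it states:

```python
# Back-of-the-envelope check of the storage figures above.
# Assumption (implied, not stated, by the article): rival webmail
# quotas were roughly 2-4 MB at the time.

GMAIL_QUOTA_MB = 1024            # Gmail's 1 GB launch quota, in MB

for rival_mb in (2, 4):
    ratio = GMAIL_QUOTA_MB / rival_mb
    print(f"vs. a {rival_mb} MB inbox: {ratio:.0f}x more storage")
# -> 512x and 256x, i.e. the article's "250 to 500 times"

# The implied average email size also squares with the email counts:
print(f"~{GMAIL_QUOTA_MB * 1024 / 13_500:.0f} KB per email")  # ~78 KB
```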

Besides the quantum leap in storage, Gmail also came equipped with Google’s search technology so users could quickly retrieve a tidbit from an old email, photo or other personal information stored on the service. It also automatically threaded together a string of communications about the same topic, so everything flowed together as if it was a single conversation.

“The original pitch we put together was all about the three ‘S’s’ — storage, search and speed,” said former Google executive Marissa Mayer, who helped design Gmail and other company products before later becoming Yahoo’s CEO.

It was such a mind-bending concept that shortly after The Associated Press published a story about Gmail late on the afternoon of April Fool’s 2004, readers began calling and emailing to inform the news agency it had been duped by Google’s pranksters.

“That was part of the charm, making a product that people won’t believe is real. It kind of changed people’s perceptions about the kinds of applications that were possible within a web browser,” former Google engineer Paul Buchheit recalled during a recent AP interview about his efforts to build Gmail.

It took three years to build, as part of a project called “Caribou” — a reference to a running gag in the Dilbert comic strip. “There was something sort of absurd about the name Caribou, it just made me laugh,” said Buchheit, the 23rd employee hired at a company that now employs more than 180,000 people.

The AP knew Google wasn’t joking about Gmail because an AP reporter had been abruptly asked to come down from San Francisco to the company’s Mountain View, California, headquarters to see something that would make the trip worthwhile.

After arriving at a still-developing corporate campus that would soon blossom into what became known as the “Googleplex,” the AP reporter was ushered into a small office where Page was wearing an impish grin while sitting in front of his laptop computer.

Page, then just 31 years old, proceeded to show off Gmail’s sleekly designed inbox and demonstrated how quickly it operated within Microsoft’s now-retired Explorer web browser. And he pointed out there was no delete button featured in the main control window because it wouldn’t be necessary, given Gmail had so much storage and could be so easily searched. “I think people are really going to like this,” Page predicted.

As with so many other things, Page was right. Gmail now has an estimated 1.8 billion active accounts — each one now offering 15 gigabytes of free storage bundled with Google Photos and Google Drive. Even though that’s 15 times more storage than Gmail initially offered, it’s still not enough for many users who rarely see the need to purge their accounts, just as Google hoped.

The digital hoarding of email, photos and other content is why Google, Apple and other companies now make money from selling additional storage capacity in their data centers. (In Google’s case, it charges anywhere from $30 annually for 200 gigabytes of storage to $250 annually for 5 terabytes of storage). Gmail’s existence is also why other free email services and the internal email accounts that employees use on their jobs offer far more storage than was fathomed 20 years ago.

“We were trying to shift the way people had been thinking because people were working in this model of storage scarcity for so long that deleting became a default action,” Buchheit said.

Gmail was a game changer in several other ways while becoming the first building block in the expansion of Google’s internet empire beyond its still-dominant search engine.

After Gmail came Google Maps and Google Docs with word processing and spreadsheet applications. Then came the acquisition of video site YouTube, followed by the introduction of the Chrome browser and the Android operating system that powers most of the world’s smartphones. With Gmail’s explicitly stated intention to scan the content of emails to get a better understanding of users’ interests, Google also left little doubt that digital surveillance in pursuit of selling more ads would be part of its expanding ambitions.

Although it immediately generated a buzz, Gmail started out with a limited scope because Google initially only had enough computing capacity to support a small audience of users.

But that scarcity created an air of exclusivity around Gmail that drove feverish demand for elusive invitations to sign up. At one point, invitations to open a Gmail account were selling for $250 apiece on eBay. “It became a bit like a social currency, where people would go, ‘Hey, I got a Gmail invite, you want one?’” Buchheit said.

Although signing up for Gmail became increasingly easier as more of Google’s network of massive data centers came online, the company didn’t begin accepting all comers to the email service until it opened the floodgates as a Valentine’s Day present to the world in 2007.

Read More

Swedish Embassy Exhibit Highlights Uses of Artificial Intelligence

WASHINGTON — Artificial Intelligence for good is the subject of a new exhibit at the Embassy of Sweden in Washington, showing how Swedish companies and organizations are using AI for a more open society, a healthier world, and a greener planet.

Ambassador Urban Ahlin told an embassy reception that Sweden’s broad collaboration across industry, academia and government makes it a leader in applying AI in public-interest areas, such as clean tech, social sciences, medical research, and greener food supply chains. That includes tracking the mood and health of cows.

Fitbit for cows

The technology was developed by DeLaval, a producer of dairy and farming machinery. Joaquin Azocar, the firm’s market solution manager in North America, says the small wearable device, the size of an earring, fits in a cow’s ear and tracks the animal’s movements 24/7, much like a Fitbit.

The ear-mounted tags send signals to receivers across the farm. DeLaval’s artificial intelligence system analyzes the data for patterns, trends and deviations in the animals’ activities to predict whether a cow is sick, in heat or not eating well.

Azocar, a trained veterinarian, says that when dairy farmers are alerted sooner to changes in their animals’ behavior, they can provide treatment earlier, which translates into shorter recovery times.
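
The article does not describe DeLaval’s algorithms, but the general idea of flagging deviations from an animal’s own baseline is easy to sketch. A minimal illustration in Python; the flag_deviation helper, the one-week window and the z-score threshold are all invented for illustration, not taken from DeLaval:

```python
# Illustrative sketch only -- not DeLaval's actual system. Shows the
# general idea of flagging deviations from an animal's own baseline.
from statistics import mean, stdev

def flag_deviation(hourly_activity, window=168, threshold=2.5):
    """Flag the latest reading if it deviates strongly from the
    animal's trailing baseline (window = hours, 168 = one week)."""
    baseline = hourly_activity[-window - 1:-1]  # exclude latest reading
    if len(baseline) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return False
    z = (hourly_activity[-1] - mu) / sigma
    return abs(z) > threshold  # e.g. a sudden drop may signal illness

# Example: a week of steady day/night activity, then a sharp drop.
history = [50 + 20 * (i % 24 < 12) for i in range(168)] + [5]
print(flag_deviation(history))  # True -> worth alerting the farmer
```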

AI helping in childbirth

There are also advances in human health. The Pelvic Floor AI project, still in development, is an AI-based solution to identify high-risk cases of pelvic floor injury and facilitate timely interventions to prevent and limit harm.

It was developed by a team of gynecologists and women’s health care professionals from Sweden’s Sahlgrenska University Hospital to help the nearly 20% of women who experience injury to their pelvic floor during childbirth.

The exhibition “is a great way to showcase the many ways AI is being adapted and used, in medicine and in many other areas,” said exhibition attendee Jesica Lindgren, general counsel for international consulting firm BlueStar Strategies. “It’s important to know how AI is evolving and affecting our everyday life.”

Green solutions using AI

The exhibition includes examples of how AI can address the effects of climate change, including rising sea levels and declining biodiversity.

AirForestry is developing technology “for precise forestry that will select and harvest trees fully autonomously.” The firm says that “harvesting the right trees in the right place could significantly improve overall carbon sequestration and resilience.”

AI & the defense industry

Outlining the development of artificial intelligence for the defense industry, the exhibit acknowledges that it “can be controversial.”

“There are exciting possibilities to use AI to solve problems that cannot be solved using traditional algorithms due to their complexity and limitations in computational power,” the exhibit states. “But it requires thorough consideration of how AI should and shouldn’t be utilized. Proactively engaging in AI research is necessary to understand the technology’s capabilities and limitations and help shape its ethical standards.”

AI and privacy

Exhibition participant Quentin Black is an engineer with Axis Communications, an industry leader in video surveillance. He said the project grew out of GDPR, the General Data Protection Regulation, an EU policy that protects the privacy of citizens out in public whose images could be picked up by video surveillance cameras.

The regulations surrounding privacy are stricter in Europe than they are in the U.S., Black said.

“In the U.S. the public doesn’t really have an expectation of privacy; there’s cameras everywhere. In Europe, it’s different.” That regulation inspired Axis Communications to develop AI that provides privacy, he explained.

Black pointed to a large monitor divided into four windows, showing how AI can apply four different filters to provide privacy.

The Axis Live Privacy Shield remotely monitors activities both indoors and outdoors while safeguarding privacy in real time. The technology is free to download and uses a variety of filters to provide privacy to people, environments or both.

On the monitor displayed in the exhibition, Black explained the four quadrants. The upper-right window blocks out all humans in solid color, using AI to distinguish the people from the environment.

The upper-left window obscures only the person’s head. The bottom-left corner applies pixelization, or a mosaic, to the person’s entire body and the immediate environment surrounding the person. And the bottom-right corner blocks out the environment instead, “an inverse of the personal privacy,” Black explained.

“So, if it was a top secret facility, or you want to see the people walking up to your door without a view of your neighbor’s house, this is where this can be applied.”
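
Axis has not published the code behind these filters, but the pixelization filter Black described can be sketched in a few lines with OpenCV, assuming a per-pixel person mask from an upstream segmentation model; the pixelate_people helper below is invented for illustration:

```python
# Minimal sketch of the "pixelization" filter idea -- not Axis's code.
# Assumes a person mask (e.g. from a segmentation model) is available.
import cv2
import numpy as np

def pixelate_people(frame, person_mask, block=16):
    """Replace masked (person) pixels with a coarse mosaic."""
    h, w = frame.shape[:2]
    # Downscale, then upscale with nearest-neighbor to create the mosaic.
    small = cv2.resize(frame, (max(1, w // block), max(1, h // block)),
                       interpolation=cv2.INTER_LINEAR)
    mosaic = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
    # Keep the original scene; swap in the mosaic where people appear.
    mask3 = np.repeat(person_mask[:, :, None].astype(bool), 3, axis=2)
    return np.where(mask3, mosaic, frame)

# Inverting the mask instead would mosaic the environment and keep the
# people sharp: the "inverse" filter described above.
```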

Tip of the iceberg

“I think that AI is on everybody’s thoughts, and what I appreciate about the House of Sweden’s approach in this exhibition is highlighting a thoughtful, scientific, business-oriented and human-oriented perspective on AI in society today,” said Molly Steenson, President and CEO of the American Swedish Institute.

Though AI and machine learning have been around since the 1950s, she says it is only now that we are seeing “the contemporary upswing and acceleration of AI, especially generative AI in things like large language models.”

“So, while large companies and tech companies might want us to speed up and believe that it is only scary or it is only good, I think it’s a lot more nuanced than that,” she said.
