
Worker at South Korea Vegetable Packing Plant Crushed to Death by Industrial Robot

An industrial robot grabbed and crushed a worker to death at a vegetable packaging plant in South Korea, police said Thursday, as they investigated whether the machine was defective or improperly designed.

Police said early evidence suggests that human error was more likely to blame than problems with the machine itself. But the incident still triggered public concern about the safety of industrial robots and the false sense of security they may give to humans working nearby in a country that increasingly relies on such machines to automate its industries.

Police in the southern county of Goseong said the man died of head and chest injuries Tuesday evening after he was snatched and pressed against a conveyor belt by the machine’s robotic arms.

Police did not identify the man but said he was an employee of a company that installs industrial robots and was sent to the plant to examine whether the machine was working properly.

South Korea has had other accidents involving industrial robots in recent years. In March, a manufacturing robot crushed and seriously injured a worker who was examining it at an auto parts factory in Gunsan. Last year, a robot installed near a conveyor belt fatally crushed a worker at a milk factory in Pyeongtaek.

The machine that caused the death on Tuesday was one of two pick-and-place robots used at the facility, which packages bell peppers and other vegetables exported to other Asian countries, police said. Such machines are common in South Korea’s agricultural communities, which are struggling with a declining and aging workforce.

“It wasn’t an advanced, artificial intelligence-powered robot, but a machine that simply picks up boxes and puts them on pallets,” said Kang Jin-gi, who heads the investigations department at Goseong Police Station. He said police were working with related agencies to determine whether the machine had technical defects or safety issues.

Another police official, who did not want to be identified because he wasn’t authorized to talk to reporters, said police were also looking into the possibility of human error. The robot’s sensors are designed to identify boxes, and security video indicated the man had moved near the robot with a box in his hands, which likely triggered the machine’s reaction, the official said.

“It’s clearly not a case where a robot confused a human with a box; this wasn’t a very sophisticated machine,” he said.

According to data from the International Federation of Robotics, South Korea had 1,000 industrial robots per 10,000 employees in 2021, the highest density in the world and more than three times the number in China that year. Many of South Korea’s industrial robots are used in major manufacturing plants such as electronics and auto-making.


Star-filled Euclid Images Spur Mission to Probe ‘Dark Universe’

European astronomers on Tuesday released the first images from the newly launched Euclid space telescope, designed to unlock the secrets of dark matter and dark energy — hidden forces thought to make up 95% of the universe.

The European Space Agency, which leads the six-year mission with NASA as a partner, said the images were the sharpest of their kind, showcasing the telescope’s ability to monitor billions of galaxies up to 10 billion light years away.

The images spanned four areas of the relatively nearby universe, including 1,000 galaxies belonging to the massive Perseus cluster just 240 million light years away, and more than 100,000 galaxies spread out in the background, ESA said.

Scientists believe vast, seemingly organized structures such as Perseus could have formed only if dark matter exists.

“We think we understand only 5% of the universe. That’s the matter that we can see,” ESA’s science director Carole Mundell told Reuters.

“The rest of the universe we call dark because it doesn’t produce light in the normal electromagnetic spectrum,” she said. “But we know its effect because we see the effect on visible matter.”

Tell-tale signs of the hidden force exerted by dark matter include galaxies rotating more quickly than scientists would expect from the amount of visible matter that can be detected.

Its influence is also implicated in pulling together some of the most massive structures in the universe, such as clusters of galaxies, Mundell said.

Dark energy is even more enigmatic.

Its existence was first inferred in the 1990s through studies of exploding stars called supernovas, work that resulted in a 2011 Nobel Prize shared among three U.S.-born scientists.

Thanks in part to observations from the earlier Hubble Space Telescope, they concluded that the universe was not only expanding but that the pace of expansion was accelerating — a stunning discovery attributed to the new concept of dark energy.

After initial commissioning and technical teething problems, including stray light and guidance issues, Euclid will now start piecing together a 3D map encompassing about a third of the sky to detect tiny variations attributable to the dark universe.

By gaining new insights into dark energy and matter, scientists hope to better grasp the formation and distribution of galaxies across the so-called cosmic web of the universe.

The release of the images in Darmstadt, Germany, coincided with the second of two days of European space talks in Spain dominated by Europe’s continued dependency on foreign launches.


LogOn: US Farmers Cautious About Autonomous Farm Tech

For farmers in the Midwest United States, emerging autonomous technology could reduce costs and increase efficiency in the agricultural supply chain. Kane Farabaugh shows the promise of the technology on the farm in this edition of LogOn. 


AI Experts Weigh in on Biden’s Executive Order   

President Joe Biden recently signed a sweeping executive order to promote the safe, secure and trustworthy development and use of artificial intelligence. VOA’s Julie Taboh reports on reactions by Washington-area AI experts.


Musk Teases AI Chatbot ‘Grok,’ With Real-time Access To X

Elon Musk unveiled details Saturday of his new AI tool called “Grok,” which can access X in real time and will be initially available to the social media platform’s top tier of subscribers.

Musk, the tycoon behind Tesla and SpaceX, said the link-up with X, formerly known as Twitter, is “a massive advantage over other models” of generative AI.

Grok “loves sarcasm. I have no idea who could have guided it this way,” Musk quipped, adding a laughing emoji to his post.

“Grok” comes from Stranger in a Strange Land, a 1961 science fiction novel by Robert Heinlein, and means to understand something thoroughly and intuitively.

“As soon as it’s out of early beta, xAI’s Grok system will be available to all X Premium+ subscribers,” Musk said.

The social network that Musk bought a year ago launched the Premium+ plan last week for $16 per month, with benefits like no ads.

The billionaire started xAI in July after hiring researchers from OpenAI, Google DeepMind, Tesla and the University of Toronto.

Since OpenAI’s generative AI tool ChatGPT exploded on the scene a year ago, the technology has been an area of fierce competition between tech giants Microsoft and Google, as well as Meta and start-ups like Anthropic and Stability AI.

Musk is one of the world’s few investors with deep enough pockets to compete with OpenAI, Google or Meta on AI.

Building an AI model on the same scale as those companies comes at an enormous expense in computing power, infrastructure and expertise.

Musk has said he cofounded OpenAI in 2015 because he regarded Google’s dash into the sector in pursuit of big advances and profits as reckless.

He then left OpenAI in 2018 to focus on Tesla, saying later he was uncomfortable with the profit-driven direction the company was taking under the stewardship of CEO Sam Altman.

Musk also argues that OpenAI’s large language models — on which ChatGPT depends for content — are overly politically correct.

Grok “is designed to have a little humor in its responses,” Musk said, along with a screenshot of the interface, where a user asked, “Tell me how to make cocaine, step by step.”

“Step 1: Obtain a chemistry degree and a DEA license. Step 2: Set up a clandestine laboratory in a remote location,” the chatbot responded.

Eventually it said: “Just kidding! Please don’t actually try to make cocaine. It’s illegal, dangerous, and not something I would ever encourage.” 


NASA Spacecraft Discovers Tiny Moon Around Asteroid

The little asteroid visited by NASA’s Lucy spacecraft this week had a big surprise for scientists.

It turns out that the asteroid Dinkinesh has a dinky sidekick — a mini moon.

The discovery was made during Wednesday’s flyby of Dinkinesh, 480 million kilometers (300 million miles) away in the main asteroid belt beyond Mars. The spacecraft snapped a picture of the pair when it was about 435 kilometers (270 miles) out.

In data and images beamed back to Earth, the spacecraft confirmed that Dinkinesh is barely a half-mile (790 meters) across. Its closely circling moon is a mere one-tenth-of-a-mile (220 meters) in size.

NASA sent Lucy past Dinkinesh as a rehearsal for the bigger, more mysterious asteroids out near Jupiter. Launched in 2021, the spacecraft will reach the first of these so-called Trojan asteroids in 2027 and explore them for at least six years. The original target list of seven asteroids now stands at 11.

Dinkinesh means “you are marvelous” in the Amharic language of Ethiopia. It’s also the Amharic name for Lucy, the 3.2 million-year-old remains of a human ancestor found in Ethiopia in the 1970s, for which the spacecraft is named.

“Dinkinesh really did live up to its name; this is marvelous,” Southwest Research Institute’s Hal Levison, the lead scientist, said in a statement.


FTX Founder Convicted of Defrauding Cryptocurrency Customers

FTX founder Sam Bankman-Fried’s spectacular rise and fall in the cryptocurrency industry — a journey that included his testimony before Congress, a Super Bowl advertisement and dreams of a future run for president — hit rock bottom Thursday when a New York jury convicted him of fraud in a scheme that cheated customers and investors of at least $10 billion.

After the monthlong trial, jurors rejected Bankman-Fried’s claim during four days on the witness stand in Manhattan federal court that he never committed fraud or meant to cheat customers before FTX, once the world’s second-largest crypto exchange, collapsed into bankruptcy a year ago.

“His crimes caught up to him. His crimes have been exposed,” Assistant U.S. Attorney Danielle Sassoon told the jury of the onetime billionaire just before Judge Lewis A. Kaplan instructed them on the law and they began deliberations. Sassoon said Bankman-Fried turned his customers’ accounts into his “personal piggy bank” as up to $14 billion disappeared.

She urged jurors to reject Bankman-Fried’s insistence when he testified over three days that he never committed fraud or plotted to steal from customers, investors and lenders and didn’t realize his companies were at least $10 billion in debt until October 2022.

Bankman-Fried was required to stand and face the jury as guilty verdicts on all seven counts were read. He kept his hands clasped tightly in front of him. When he sat down after the reading, he kept his head tilted down for several minutes.

After the judge set a sentencing date of March 28, Bankman-Fried’s parents moved to the front row behind him. His father put his arm around his wife. As Bankman-Fried was led out of the courtroom, he looked back and nodded toward his mother, who nodded back and then became emotional, wiping her hand across her face after he left the room.

U.S. Attorney Damian Williams told reporters after the verdict that Bankman-Fried “perpetrated one of the biggest financial frauds in American history, a multibillion-dollar scheme designed to make him the king of crypto.”

“But here’s the thing: The cryptocurrency industry might be new. The players like Sam Bankman-Fried might be new. This kind of fraud, this kind of corruption is as old as time, and we have no patience for it,” he said.

Bankman-Fried’s attorney, Mark Cohen, said in a statement they “respect the jury’s decision. But we are very disappointed with the result.”

“Mr. Bankman-Fried maintains his innocence and will continue to vigorously fight the charges against him,” Cohen said.

The trial attracted intense interest with its focus on fraud on a scale not seen since the 2009 prosecution of Bernard Madoff, whose Ponzi scheme over decades cheated thousands of investors out of about $20 billion. Madoff pleaded guilty and was sentenced to 150 years in prison, where he died in 2021.

The prosecution of Bankman-Fried, 31, put a spotlight on the emerging industry of cryptocurrency and a group of young executives in their 20s who lived together in a $30 million luxury apartment in the Bahamas as they dreamed of becoming the most powerful player in a new financial field.

Prosecutors made sure jurors knew that the defendant they saw in court with short hair and a suit was also the man with big messy hair and shorts that became his trademark appearance after he started his cryptocurrency hedge fund, Alameda Research, in 2017 and FTX, his cryptocurrency exchange, two years later.

They showed the jury pictures of Bankman-Fried sleeping on a private jet, sitting with a deck of cards and mingling at the Super Bowl with celebrities including the singer Katy Perry. Assistant U.S. Attorney Nicolas Roos called Bankman-Fried someone who liked “celebrity chasing.”

In a closing argument, defense lawyer Mark Cohen said prosecutors were trying to turn “Sam into some sort of villain, some sort of monster.”

“It’s both wrong and unfair, and I hope and believe that you have seen that it’s simply not true,” he said. “According to the government, everything Sam ever touched and said was fraudulent.”

The government relied heavily on the testimony of three former members of Bankman-Fried’s inner circle, his top executives including his former girlfriend, Caroline Ellison, to explain how Bankman-Fried used Alameda Research to siphon billions of dollars from customer accounts at FTX.

With that money, prosecutors said, the Massachusetts Institute of Technology graduate gained influence and power through investments, tens of millions of dollars in political contributions, congressional testimony and a publicity campaign that enlisted celebrities like comedian Larry David and football quarterback Tom Brady.

Ellison, 28, testified that Bankman-Fried directed her, while she was chief executive of Alameda Research, to commit fraud as he pursued ambitions to lead huge companies, wield influence through his spending and someday run for U.S. president, an office she said he believed he had a 5% chance of winning.

Becoming tearful as she described the collapse of the cryptocurrency empire last November, Ellison said the revelations that caused customers collectively to demand their money back, exposing the fraud, brought a “relief that I didn’t have to lie anymore.”

FTX cofounder Gary Wang, who was FTX’s chief technology officer, revealed in his testimony that Bankman-Fried directed him to insert code into FTX’s operations so that Alameda Research could make unlimited withdrawals from FTX and have a credit line of up to $65 billion. Wang said the money came from customers.

Nishad Singh, the former head of engineering at FTX, testified that he felt “blindsided and horrified” when he saw the extent of the fraud committed by a man he once admired, saying the collapse last November left him suicidal.

Ellison, Wang and Singh all pleaded guilty to fraud charges and testified against Bankman-Fried in the hopes of leniency at sentencing.

Bankman-Fried was arrested in the Bahamas in December and extradited to the United States, where he was freed on a $250 million personal recognizance bond with electronic monitoring and a requirement that he remain at the home of his parents in Palo Alto, California.

His communications, including hundreds of phone calls with journalists and internet influencers, along with emails and texts, eventually got him into trouble when the judge concluded he was trying to influence prospective trial witnesses and ordered him jailed in August.

During the trial, prosecutors used Bankman-Fried’s public statements, online announcements and his congressional testimony against him, showing how the entrepreneur repeatedly promised customers that their deposits were safe and secure as late as last Nov. 7 when he tweeted, “FTX is fine. Assets are fine” as customers furiously tried to withdraw their money. He deleted the tweet the next day. FTX filed for bankruptcy four days later.

In his closing, Roos mocked Bankman-Fried’s testimony, saying that under questioning from his lawyer, the defendant’s words were “smooth, like it had been rehearsed a bunch of times.”

But under cross examination, “he was a different person,” the prosecutor said. “Suddenly on cross-examination he couldn’t remember a single detail about his company or what he said publicly. It was uncomfortable to hear. He never said he couldn’t recall during his direct examination, but it happened over 140 times during his cross-examination.”

Former federal prosecutors said the quick verdict — after only half a day of deliberation — showed how well the government tried the case.

“The government tried the case as we expected,” said Joshua A. Naftalis, a partner at Pallas Partners LLP and a former Manhattan prosecutor. “It was a massive fraud, but that doesn’t mean it had to be a complicated fraud, and I think the jury understood that argument.”


World Leaders Agree on Artificial Intelligence Risks

World leaders at a safety summit have agreed on the importance of mitigating risks posed by rapid advancements in the emerging technology of artificial intelligence.

The inaugural two-day AI Safety Summit, hosted by British Prime Minister Rishi Sunak in Bletchley Park, England, started Wednesday with leaders from 28 nations, including the United States and China. The leaders agreed to work toward a “shared agreement and responsibility” about AI risks, with plans in place for further meetings to be held in South Korea and France.

Leaders including European Commission President Ursula von der Leyen, U.S. Vice President Kamala Harris, U.N. Secretary-General António Guterres and others discussed their respective approaches to testing AI models to ensure safety as the technology grows.

On Thursday, the summit continued, with focused conversations among what the U.K. called a small group of countries “with shared values.” The leaders in the group came from the EU, the U.N., Italy, Germany, France and Australia.

Some leaders, including Sunak, said immediate, sweeping regulation is not the way forward, and some AI companies have feared that regulation could thwart the technology before it can reach its full potential.

At a Thursday news conference, Sunak announced another landmark agreement by countries pledging to “work together on testing the safety of new AI models before they are released.”

The countries involved in the talks included the U.S., EU, France, Germany, Italy, Japan, South Korea, Singapore, Canada and Australia. China did not participate in the second day of talks.

The summit concluded with a discussion between Sunak and billionaire Elon Musk in front of a group of invited business leaders and journalists.

Musk praised the inclusion of China in the AI safety agreement, a decision that some condemned after many Western governments reduced their tech cooperation with China. Musk went on to stress the importance of the U.S., the U.K. and China working together to promote AI safety.

The discussion between Sunak and Musk was scheduled to air online later on Thursday.

Some information in this report came from The Associated Press and Reuters.


India Probing Phone Hacking Complaints by Opposition Politicians, Minister Says

India’s cybersecurity agency is investigating complaints of mobile phone hacking by senior opposition politicians who reported receiving warning messages from Apple, Information Technology Minister Ashwini Vaishnaw said.

Vaishnaw was quoted in the Indian Express newspaper as saying Thursday that CERT-In, the computer emergency response team based in New Delhi, had started the probe, adding that “Apple confirmed it has received the notice for investigation.”

A political aide to Vaishnaw and two officials in the federal home ministry told Reuters that all the cyber security concerns raised by the politicians were being scrutinized.

There was no immediate comment from Apple about the investigation.

This week, Indian opposition leader Rahul Gandhi accused Prime Minister Narendra Modi’s government of trying to hack into opposition politicians’ mobile phones after some lawmakers shared screenshots on social media of a notification quoting the iPhone manufacturer as saying: “Apple believes you are being targeted by state-sponsored attackers who are trying to remotely compromise the iPhone associated with your Apple ID.”

A senior minister from Modi’s government also said he had received the same notification on his phone.

Apple said it did not attribute the threat notifications to “any specific state-sponsored attacker,” adding that “it’s possible that some Apple threat notifications may be false alarms, or that some attacks are not detected.”

In 2021, India was rocked by reports that the government had used Israeli-made Pegasus spyware to snoop on scores of journalists, activists and politicians, including Gandhi.

The government has declined to reply to questions about whether India or any of its state agencies had purchased Pegasus spyware for surveillance.


US Pushes for Global Protections Against Threats Posed by AI

U.S. Vice President Kamala Harris said Wednesday that leaders have “a moral, ethical and societal duty” to protect people from the dangers posed by artificial intelligence, as she leads the Biden administration’s push for a global AI roadmap.

Analysts, in commending the effort, say human oversight is crucial to preventing the weaponization or misuse of this technology, which has applications in everything from military intelligence to medical diagnosis to making art.

“To provide order and stability in the midst of global technological change, I firmly believe that we must be guided by a common set of understandings among nations,” Harris said. “And that is why the United States will continue to work with our allies and partners to apply existing international rules and norms to AI, and work to create new rules and norms.”

Harris also announced the founding of the government’s AI Safety Institute and released draft policy guidance on the government’s use of AI and a declaration of its responsible military applications.

Just days earlier, President Joe Biden – who described AI as “the most consequential technology of our time” – signed an executive order establishing new standards, including requiring that major AI developers report their safety test results and other critical information to the U.S. government.

AI is increasingly used for a wide range of applications. For example: on Wednesday, the Defense Intelligence Agency announced that its AI-enabled military intelligence database will soon achieve “initial operational capability.”

And perhaps on the opposite end of the spectrum, some programmer decided to “train an AI model on over 1,000 human farts so it would learn to create realistic fart sounds.”

Like any other tool, AI is subject to its users’ intentions and can be used to deceive, misinform or hurt people – something that billionaire tech entrepreneur Elon Musk stressed on the sidelines of the London summit, where he said he sees AI as “one of the biggest threats” to society. He called for a “third-party referee.”

Earlier this year, Musk was among the more than 33,000 people to sign an open letter calling on AI labs “to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”

“Here we are, for the first time, really in human history, with something that’s going to be far more intelligent than us,” said Musk, who is looking at creating his own generative AI program. “So it’s not clear to me we can actually control such a thing. But I think we can aspire to guide it in a direction that’s beneficial to humanity. But I do think it’s one of the existential risks that we face and it’s potentially the most pressing one.”

This is also something industry leaders like OpenAI CEO Sam Altman have told U.S. lawmakers in testimony before congressional committees earlier this year.

“My worst fears are that we cause significant – we, the field, the technology, the industry – cause significant harm to the world. I think that could happen in a lot of different ways,” he told lawmakers at a Senate Judiciary Committee hearing on May 16.

That’s because, said Jessica Brandt, policy director for the AI and Emerging Technology Initiative at the Brookings Institution, while “AI has been used to do pretty remarkable things” – especially in the field of scientific research – it is limited by its creators.

“It’s not necessarily doing something that humans don’t know how to do, but it’s making discoveries that humans would be unlikely to be able to make in any meaningful timeframe, because they can just perform so many calculations so quickly,” she told VOA on Zoom.

And, she said, “AI is not objective, or all-knowing. There’s been plenty of studies showing that AI is really only as good as the data that the model is trained on and that the data can have or reflect human bias. This is one of the major concerns.”

Or, as AI Now Executive Director Amba Kak said earlier this year in a magazine interview about AI systems: “The issue is not that they’re omnipotent. It is that they’re janky now. They’re being gamed. They’re being misused. They’re inaccurate. They’re spreading disinformation.”

Analysts say these government and tech officials don’t need a one-size-fits-all solution, but rather an alignment of values – and critically, human oversight and moral use.

“It’s OK to have multiple different approaches, and then also, where possible, coordinate to ensure that democratic values take root in the systems that govern technology globally,” Brandt said.

Industry leaders tend to agree, with Mira Murati, OpenAI’s chief technology officer, saying: “AI systems are becoming a part of everyday life. The key is to ensure that these machines are aligned with human intentions and values.”

Analysts watching regulation say the U.S. is unlikely to come up with one coherent solution for the problems posed by AI.

“The most likely outcome for the United States is a bottom-up patchwork quilt of executive branch actions,” said Bill Whyman, a senior adviser in the Strategic Technologies Program at the Center for Strategic and International Studies. “Unlike Europe, the United States is not likely to pass a broad national AI law over the next few years. Successful legislation is likely focused on less controversial and targeted measures like funding AI research and AI child safety.”


US Pushes for Global Protections Against Threats Posed by AI

U.S. Vice President Kamala Harris says leaders have “a moral, ethical and societal duty” to protect humans from dangers posed by artificial intelligence, and is pushing for a global road map during an AI summit in London. Analysts agree and say one element needs to be constant: human oversight. VOA’s Anita Powell reports from Washington.


Electric Vehicles Hit the Roads in Malawi

Drivers in Malawi are getting an opportunity to purchase electric vehicles through a local startup company. The handful of buyers so far say they no longer have to struggle daily to get fuel at pump stations. Lameck Masina reports from Blantyre.


UK Summit Aims to Tackle Thorny Issues Around Cutting-Edge AI Risks 

Digital officials, tech company bosses and researchers are converging Wednesday at a former codebreaking spy base near London to discuss and better understand the extreme risks posed by cutting-edge artificial intelligence. 

The two-day summit focuses on so-called frontier AI — the latest and most powerful systems that take the technology right up to its limits, but could come with as-yet-unknown dangers. They’re underpinned by foundation models, which power chatbots like OpenAI’s ChatGPT and Google’s Bard and are trained on vast pools of information scraped from the internet. 

Some 100 people from 28 countries are expected to attend Prime Minister Rishi Sunak’s two-day AI Safety Summit, though the British government has refused to disclose the guest list. 

The event is a labor of love for Sunak, a tech-loving former banker who wants the U.K. to be a hub for computing innovation and has framed the summit as the start of a global conversation about the safe development of AI. But Vice President Kamala Harris is due to steal the focus on Wednesday with a separate speech in London setting out the U.S. administration’s more hands-on approach. 

She’s due to attend the summit on Thursday alongside government officials from more than two dozen countries including Canada, France, Germany, India, Japan, Saudi Arabia — and China, invited over the protests of some members of Sunak’s governing Conservative Party. 

Tesla CEO Elon Musk is also scheduled to discuss AI with Sunak in a livestreamed conversation on Thursday night. The tech billionaire was among those who signed a statement earlier this year raising the alarm about the perils that AI poses to humanity. 

European Commission President Ursula von der Leyen, United Nations Secretary-General Antonio Guterres and executives from U.S. artificial intelligence companies such as Anthropic and influential computer scientists like Yoshua Bengio, one of the “godfathers” of AI, are also expected. 

The meeting is being held at Bletchley Park, a former top secret base for World War II codebreakers that’s seen as a birthplace of modern computing. 

One of Sunak’s major goals is to get delegates to agree on a first-ever communique about the nature of AI risks. He has said the technology brings new opportunities but has warned of frontier AI’s threat to humanity, because it could be used to create biological weapons or be exploited by terrorists to sow fear and destruction.

Only governments, not companies, can keep people safe from AI’s dangers, Sunak said last week. However, in the same speech, he also urged against rushing to regulate AI technology, saying it needs to be fully understood first. 

In contrast, Harris will stress the need to address the here and now, including “societal harms that are already happening such as bias, discrimination and the proliferation of misinformation.” 

Harris plans to stress that the Biden administration is “committed to hold companies accountable, on behalf of the people, in a way that does not stifle innovation,” including through legislation. 

“As history has shown, in the absence of regulation and strong government oversight, some technology companies choose to prioritize profit over the wellbeing of their customers, the security of our communities and the stability of our democracies,” she plans to say.

She’ll point to President Biden’s executive order this week, setting out AI safeguards, as evidence the U.S. is leading by example in developing rules for artificial intelligence that work in the public interest. Among measures she will announce is an AI Safety Institute, run through the Department of Commerce, to help set the rules for “safe and trusted AI.” 

Harris also will encourage other countries to sign up to a U.S.-backed pledge to stick to “responsible and ethical” use of AI for military aims. 

A White House official gave details of Harris’s speech, speaking on condition of anonymity to discuss her remarks in advance. 


UK Kicks Off World’s First AI Safety Summit

The world’s first major summit on artificial intelligence (AI) safety opens in Britain Wednesday, with political and tech leaders set to discuss possible responses to the society-changing technology.

British Prime Minister Rishi Sunak, U.S. Vice President Kamala Harris, EU chief Ursula von der Leyen and U.N. Secretary-General Antonio Guterres will all attend the two-day conference, which will focus on growing fears about the implications of so-called frontier AI.

The release of the latest models has offered a glimpse into the potential of AI, but has also prompted concerns around issues ranging from job losses to cyber-attacks and the control that humans actually have over the systems.

Sunak, whose government initiated the gathering, said in a speech last week that his “ultimate goal” was “to work towards a more international approach to safety where we collaborate with partners to ensure AI systems are safe before they are released.

“We will push hard to agree the first ever international statement about the nature of these risks,” he added, drawing comparisons to the approach taken to climate change.

But London has reportedly had to scale back its ambitions around ideas such as launching a new regulatory body amid a perceived lack of enthusiasm.

Italian Prime Minister Giorgia Meloni is one of the few world leaders, and the only one from the G7, attending the conference.

Elon Musk is due to appear, but it is not clear yet whether he will be physically at the summit in Bletchley Park, north of London, where top British codebreakers cracked Nazi Germany’s “Enigma” code.

‘Talking shop’

While the potential of AI raises many hopes, particularly for medicine, its development is seen as largely unchecked.

In his speech, Sunak stressed the need for countries to develop “a shared understanding of the risks that we face.”

But lawyer and investigator Cori Crider, a campaigner for “fair” technology, warned that the summit could be “a bit of a talking shop.

“If he were serious about safety, Rishi Sunak needed to roll deep and bring all of the U.K. majors and regulators in tow and he hasn’t,” she told a press conference in San Francisco.

“Where is the labor regulator looking at whether jobs are being made unsafe or redundant? Where’s the data protection regulator?” she asked.

Having faced criticism for looking only at the risks of AI, the U.K. on Wednesday pledged $46 million to fund AI projects around the world, starting in Africa.

Ahead of the meeting, the G7 powers agreed on Monday on a non-binding “code of conduct” for companies developing the most advanced AI systems.

The White House announced its own plan to set safety standards for the deployment of AI that will require companies to submit certain systems to government review.

And in Rome, ministers from Italy, Germany and France called for an “innovation-friendly approach” to regulating AI in Europe, as they urged more investment to challenge the U.S. and China.

China will be present, but it is unclear at what level.

News website Politico reported that London invited President Xi Jinping to signal its eagerness for a senior Chinese representative.

Beijing’s invitation has raised eyebrows amid heightened tensions with Western nations and accusations of technological espionage.

Electric Vehicle ‘Fast Charger’ Seen as Game Changer

With White House funding to put more electric cars on the road, some states are using the money to build out their part of a fast-charging EV network. Deana Mitchell has the story.


Biden Signs Sweeping Executive Order on AI Oversight

President Joe Biden on Monday signed a wide-ranging executive order on artificial intelligence, covering topics as varied as national security, consumer privacy, civil rights and commercial competition. The administration heralded the order as taking “vital steps forward in the U.S.’s approach on safe, secure, and trustworthy AI.”

The order directs departments and agencies across the U.S. federal government to develop policies aimed at placing guardrails alongside an industry that is developing newer and more powerful systems at a pace that has many concerned it will outstrip effective regulation.

“To realize the promise of AI and avoid the risk, we need to govern this technology,” Biden said during a signing ceremony at the White House. The order, he added, is “the most significant action any government anywhere in the world has ever taken on AI safety, security and trust.” 

‘Red teaming’ for security 

One of the marquee provisions of the new order requires companies developing advanced artificial intelligence systems to conduct rigorous testing of their products to ensure that bad actors cannot use them for nefarious purposes. The process, known as red teaming, will assess, among other things, “AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear and cybersecurity risks.”

The National Institute of Standards and Technology will set the standards for such testing, and AI companies will be required to report their results to the federal government prior to releasing new products to the public. The Departments of Homeland Security and Energy will be closely involved in the assessment of threats to vital infrastructure. 

To counter the threat that AI will enable the creation and dissemination of false and misleading information, including computer-generated images and “deep fake” videos, the Commerce Department will develop guidance for the creation of standards that will allow computer-generated content to be easily identified, a process commonly called “watermarking.” 

The order directs the White House chief of staff and the National Security Council to develop a set of guidelines for the responsible and ethical use of AI systems by the U.S. national defense and intelligence agencies.

Privacy and civil rights

The order proposes a number of steps meant to increase Americans’ privacy protections when AI systems access information about them. That includes supporting the development of privacy-protecting technologies such as cryptography and creating rules for how government agencies handle data containing citizens’ personally identifiable information.

However, the order also notes that the United States is currently in need of legislation that codifies the kinds of data privacy protections that Americans are entitled to. Currently, the U.S. lags far behind Europe in the development of such rules, and the order calls on Congress to “pass bipartisan data privacy legislation to protect all Americans, especially kids.”

The order recognizes that the algorithms that enable AI to process information and answer users’ questions can themselves be biased in ways that disadvantage members of minority groups and others often subject to discrimination. It therefore calls for the creation of rules and best practices addressing the use of AI in a variety of areas, including the criminal justice system, health care system and housing market.

The order covers several other areas, promising action on protecting Americans whose jobs may be affected by the adoption of AI technology; maintaining the United States’ market leadership in the creation of AI systems; and assuring that the federal government develops and follows rules for its own adoption of AI systems.

Open questions

Experts say that despite the broad sweep of the executive order, much remains unclear about how the Biden administration will approach the regulation of AI in practice.

Benjamin Boudreaux, a policy researcher at the RAND Corporation, told VOA that while it is clear the administration is “trying to really wrap their arms around the full suite of AI challenges and risks,” much work remains to be done.

“The devil is in the details here about what funding and resources go to executive branch agencies to actually enact many of these recommendations, and just what models a lot of the norms and recommendations suggested here will apply to,” Boudreaux said.

International leadership

Looking internationally, the order says the administration will work to take the lead in developing “an effort to establish robust international frameworks for harnessing AI’s benefits and managing its risks and ensuring safety.”

James A. Lewis, senior vice president and director of the strategic technologies program at the Center for Strategic and International Studies, told VOA that the executive order does a good job of laying out where the U.S. stands on many important issues related to the global development of AI.

“It hits all the right issues,” Lewis said. “It’s not groundbreaking in a lot of places, but it puts down the marker for companies and other countries as to how the U.S. is going to approach AI.”

That’s important, Lewis said, because the U.S. is likely to play a leading role in the development of the international rules and norms that grow up around the technology.

“Like it or not — and certainly some countries don’t like it — we are the leaders in AI,” Lewis said. “There’s a benefit to being the place where the technology is made when it comes to making the rules, and the U.S. can take advantage of that.”

‘Fighting the last war’ 

Not all experts are certain the Biden administration’s focus is on the real threats that AI might present to consumers and citizens. 

Louis Rosenberg, a 30-year veteran of AI development and the CEO of American tech firm Unanimous AI, told VOA he is concerned the administration may be “fighting the last war.”

“I think it’s great that they’re making a bold statement that this is a very important issue,” Rosenberg said. “It definitely shows that the administration is taking it seriously and that they want to protect the public from AI.”

However, he said, when it comes to consumer protection, the administration seems focused on how AI might be used to advance existing threats to consumers, like fake images and videos and convincing misinformation — things that already exist today.

“When it comes to regulating technology, the government has a track record of underestimating what’s new about the technology,” he said.

Rosenberg said he is more concerned about the new ways in which AI might be used to influence people. For example, he noted that AI systems are being built to interact with people conversationally.

“Very soon, we’re not going to be typing in requests into Google. We’re going to be talking to an interactive AI bot,” Rosenberg said. “AI systems are going to be really effective at persuading, manipulating, potentially even coercing people conversationally on behalf of whomever is directing that AI. This is the new and different threat that did not exist before AI.” 


Musk Pulls Plug on Paying for X Factchecks

Elon Musk has said that posts on X corrected by the platform’s Community Notes fact-checks will no longer be eligible for payment, as the social network comes under mounting criticism for becoming a conduit for misinformation.

In the year since taking over Twitter, now rebranded as X, Musk has gutted content moderation, restored accounts of previously banned extremists, and allowed users to purchase account verification, helping them profit from viral — but often inaccurate — posts.

Musk has instead promoted Community Notes, in which X users police the platform, as a tool to combat misinformation. 

But on Sunday, Musk tweeted a modification in how Community Notes works.

“Making a slight change to creator monetization: Any posts that are corrected by @CommunityNotes become ineligible for revenue share,” he wrote.  

“The idea is to maximize the incentive for accuracy over sensationalism,” he added. 

X pays content creators whose work generates lots of views a share of advertising revenue. 

Musk warned against using corrections to make X users ineligible for receiving payouts.

“Worth ‘noting’ that any attempts to weaponize @CommunityNotes to demonetize people will be immediately obvious, because all code and data is open source,” he posted.

Musk’s announcement follows the unveiling Friday of a $16-a-month subscription plan under which users who pay more get the biggest boost for their replies. Earlier this year, X unveiled an $8-a-month plan to get a “verified” account.

A recent study by the disinformation monitoring group NewsGuard found that verified, paying subscribers were the big spreaders of misinformation about the Israel-Hamas war. 

“Nearly three-fourths of the most viral posts on X advancing misinformation about the Israel-Hamas War are being pushed by ‘verified’ X accounts,” the group said.

It said the 250 most-engaged posts that promoted one of 10 prominent false or unsubstantiated narratives related to the war were viewed more than 100 million times globally in just one week. 

NewsGuard said 186 of those posts were made from verified accounts and only 79 had been fact-checked by Community Notes. 

Verified accounts “turned out to be a boon for bad actors sharing misinformation,” said NewsGuard.

“For less than the cost of a movie ticket, they have gained the added credibility associated with the once-prestigious blue checkmark, enabling them to reach a larger audience on the platform,” it said.

While the organization said it found misinformation spreading widely on other social media platforms such as Facebook, Instagram, TikTok and Telegram, it added that it found false narratives about the Israel-Hamas war tend to go viral on X before spreading elsewhere. 


Musk Says Starlink to Provide Connectivity in Gaza

Elon Musk said on Saturday that SpaceX’s Starlink will support communication links in Gaza with “internationally recognized aid organizations.”

A telephone and internet blackout isolated people in the Gaza Strip from the world and from each other on Saturday, with calls to loved ones, ambulances or colleagues elsewhere all but impossible as Israel widened its air and ground assault.

International humanitarian organizations said the blackout, which began on Friday evening, was worsening an already desperate situation by impeding lifesaving operations and preventing them from contacting their staff on the ground.

Following Russia’s February 2022 invasion of Ukraine, Starlink satellites were reported to have been critical to maintaining internet connectivity in some areas despite attempted Russian jamming.

Since then, Musk has said he declined to extend coverage over Russian-occupied Crimea, refusing to allow his satellites to be used for Ukrainian attacks on Russian forces there.


UN Announces Advisory Body on Artificial Intelligence 

The United Nations has begun an effort to help the world manage the risks and benefits of artificial intelligence.

U.N. Secretary-General Antonio Guterres on Thursday launched a 39-member advisory body of tech company executives, government officials and academics from countries spanning six continents.

The panel aims to issue preliminary recommendations on AI governance by the end of the year and finalize them before the U.N. Summit of the Future next September.

“The transformative potential of AI for good is difficult even to grasp,” Guterres said. He pointed to possible uses including predicting crises, improving public health and education, and tackling the climate crisis.

However, he cautioned, “it is already clear that the malicious use of AI could undermine trust in institutions, weaken social cohesion and threaten democracy itself.”

Widespread concern about the risks associated with AI has grown since tech company OpenAI launched ChatGPT last year. Its ease of use has raised concern that the tool could replace writing tasks that previously only humans could perform.

With many calling for regulation of AI, researchers and lawmakers have stressed the need for global cooperation on the matter.

The U.N.’s new body on AI will hold its first meeting Friday.

Some information for this report came from Reuters. 


Inside a Drone Factory: How It Helps Ukraine’s Defense Efforts

Brinc Drones is one of the U.S. companies shipping hundreds of drones to Ukraine. These drones are designed to help first responders survey areas hit by Russian shelling and find survivors. Adriy Borys visited the Brinc manufacturing facility. Anna Rice narrates his story. Camera — Dmitriy Savchuk.


Zara Owner Inditex to Buy Recycled Polyester From US Start-Up

Zara-owner Inditex, the world’s biggest clothing retailer, has agreed to buy recycled polyester from a U.S. start-up as it aims for 25% of its fibers to come from “next-generation” materials by 2030.

As fast-fashion retailers face pressure to reduce waste and use recycled fabrics, Inditex is spending more than $74 million to secure supply from Los Angeles-based Ambercycle of its recycled polyester made from textile waste.

Polyester, a product of the petroleum industry, is widely used in sportswear as it is quick-drying and durable.

Under the offtake deal, Inditex will buy 70% of Ambercycle’s production of recycled polyester, which is sold under the brand cycora, over three years, Inditex CEO Oscar Garcia Maceiras said at a business event in Zaragoza, Spain.

Garcia Maceiras said Inditex is also working with other companies and start-ups in its innovation hub, a unit looking for ways to curb the environmental impact of its products.

“The sustainable transformation of Inditex … is not possible without the collaboration of the different stakeholders,” he said.

The Inditex investment will help Ambercycle fund its first commercial-scale textile recycling factory. Production of cycora at the plant is expected to begin around 2025, and the material will be used in Inditex products over the following three years.

Zara Athleticz, a sub-brand of sportswear for men, launched a collection on Wednesday of “technical pieces” containing up to 50% cycora. Inditex said the collection would be available from Zara.com.

Some apparel brands seeking to reduce their reliance on virgin polyester have switched to recycled polyester derived from plastic bottles, but that practice has come under criticism as it has created more demand for used plastic bottles, pushing up prices.

Textile-to-textile polyester recycling is in its infancy, though, and will take time to reach the scale required by global fashion brands.

“We want to drive innovation to scale-up new solutions, processes and materials to achieve textile-to-textile recycling,” Inditex’s chief sustainability officer Javier Losada said in a statement.

The Ambercycle deal marks the latest in a series of investments made by Inditex into textile recycling start-ups.

Last year it signed a $104 million, three-year deal to buy 30% of the recycled fiber produced by Finland’s Infinited Fiber Co., and also invested in Circ, another U.S. firm focused on textile-to-textile recycling.

In Spain, Inditex has joined forces with rivals, including H&M and Mango, in an association to manage clothing waste, as the industry prepares for EU legislation requiring member states to separately collect textile waste beginning January 2025.


33 US States Sue Meta, Accusing Platform of Harming Children

Thirty-three U.S. states are suing Meta Platforms Inc., accusing it of damaging young people’s mental health through the addictive nature of its social media platforms.

The suit filed Tuesday in federal court in Oakland, California, alleges Meta knowingly installed addictive features on its social media platforms, Instagram and Facebook, and has collected data on children younger than 13, without their parents’ consent, violating federal law.

“Research has shown that young people’s use of Meta’s social media platforms is associated with depression, anxiety, insomnia, interference with education and daily life, and many other negative outcomes,” the complaint says.

The filing comes after Meta’s own research in 2021 found that the company was aware of the damage Instagram can do to teenagers, especially girls.

In Meta’s 2021 study, 13.5% of teen girls said Instagram makes thoughts of suicide worse and 17% of teen girls said it makes eating disorders worse.

Meta responded to the lawsuit by saying it has “already introduced over 30 tools to support teens and their families.”

“We’re disappointed that instead of working productively with companies across the industry to create clear, age-appropriate standards for the many apps teens use, the attorneys general have chosen this path,” the company added.

Meta is one of many social media companies facing criticism and legal action, with lawsuits also filed against ByteDance’s TikTok and Google’s YouTube.

Measures to protect children on social media exist, but they are easily circumvented, such as a federal law that bans kids under 13 from setting up accounts.

The dangers of social media for children have been highlighted by U.S. Surgeon General Dr. Vivek Murthy, who said the effects of social media require “immediate action to protect kids now.”

In addition to the 33 states suing, nine more state attorneys general are expected to join and file similar lawsuits.

Some information in this report came from The Associated Press and Reuters. 


Taiwan Computer Chip Workers Adjust to Life in American Desert

Phoenix, Arizona, in America’s Southwest, is the site of a Taiwanese semiconductor chip making facility. One part of President Joe Biden’s cornerstone agenda is to rely less on manufacturing from overseas and boost domestic production of chips that run everything from phones to cars. Many Taiwanese workers who moved to the U.S. to work at the facility face the challenges of living in a new land. VOA’s Stella Hsu, Enming Liu and Elizabeth Lee have the story.


Governments, Firms Should Spend More on AI Safety, Top Researchers Say

Artificial intelligence companies and governments should allocate at least one third of their AI research and development funding to ensuring the safety and ethical use of the systems, top AI researchers said in a paper on Tuesday. 

The paper, issued a week before the international AI Safety Summit in London, lists measures that governments and companies should take to address AI risks. 

“Governments should also mandate that companies are legally liable for harms from their frontier AI systems that can be reasonably foreseen and prevented,” according to the paper written by three Turing Award winners, a Nobel laureate, and more than a dozen top AI academics. 

Currently there are no broad-based regulations focused on AI safety, and the European Union’s first set of legislation has yet to become law, as lawmakers remain divided on several issues.

“Recent state-of-the-art AI models are too powerful, and too significant, to let them develop without democratic oversight,” said Yoshua Bengio, one of the three researchers known as the godfathers of AI.

“It [investments in AI safety] needs to happen fast, because AI is progressing much faster than the precautions taken,” he said.

Authors include Geoffrey Hinton, Andrew Yao, Daniel Kahneman, Dawn Song and Yuval Noah Harari.

Since the launch of OpenAI’s generative AI models, top academics and prominent CEOs such as Elon Musk have warned about the risks of AI, including by calling for a six-month pause in the development of powerful AI systems.

Some companies have countered this, saying they will face high compliance costs and disproportionate liability risks.

“Companies will complain that it’s too hard to satisfy regulations — that ‘regulation stifles innovation’ — that’s ridiculous,” said British computer scientist Stuart Russell.

“There are more regulations on sandwich shops than there are on AI companies.” 


Kenyan Developers Launch App to Prevent Phone Theft

Kenyan developers have designed a mobile phone application that police say is helping to safeguard smartphones from theft, recover stolen cell phones and prevent loss of data. Victoria Amunga reports from Nairobi. Camera: Jimmy Makhulo


India Conducts Space Flight Test Ahead Of 2025 Crewed Mission

India on Saturday successfully carried out the first of a series of key test flights for its planned mission to take astronauts into space by 2025, after overcoming a technical glitch, the space agency said.

The test involved launching a module into space and bringing it back to Earth to check the spacecraft’s crew escape system, Indian Space Research Organization chief S. Somanath said. The module was being recovered after its touchdown in the Bay of Bengal.

The launch was delayed by 45 minutes in the morning because of weather conditions. The attempt was then deferred by more than an hour because of an engine issue, with the ground computer putting the module’s liftoff on hold, Somanath said.

The glitch, caused by a monitoring anomaly in the system, was rectified, and the test was carried out successfully 75 minutes later from the Sriharikota satellite launching station in southern India, Somanath told reporters.

It would pave the way for other unmanned missions, including sending a robot into space next year.

In September, India successfully launched its first space mission to study the sun, less than two weeks after a successful uncrewed landing near the south pole region of the moon.

After a failed attempt to land on the moon in 2019, India in August joined the United States, the Soviet Union and China as only the fourth country to achieve the milestone.

The successful mission showcased India’s rising standing as a technology and space powerhouse and dovetails with Prime Minister Narendra Modi’s desire to project an image of an ascendant country asserting its place among the global elite.

Signaling a roadmap for India’s future space ambitions, Modi earlier this week announced that India’s space agency will set up an Indian-crafted space station by 2035 and land an Indian astronaut on the moon by 2040.

Active since the 1960s, India has launched satellites for itself and other countries, and successfully put one in orbit around Mars in 2014. India is planning its first mission to the International Space Station next year in collaboration with the United States.


US Sounds Alarm on Russian Election Efforts

Russia’s efforts to discredit and undermine democratic elections appear to be expanding rapidly, according to newly declassified intelligence, spurred on by what the Kremlin sees as its success in disrupting the past two U.S. presidential elections.

The U.S. intelligence findings, shared in a diplomatic cable sent to more than 100 countries and obtained by VOA, are based on a review of Russian information operations between January 2020 and December 2022 that found Moscow “engaged in a concerted effort … to undermine public confidence in at least 11 elections across nine democracies.”

The review also found what the cable describes as “a less pronounced level of Russian messaging and social media activity” that targeted another 17 democracies.

“These figures represent a snapshot of Russian activities,” the cable warned. “Russia likely has sought to undermine confidence in democratic elections in additional cases that have gone undetected.

“Our information indicates that senior Russian government officials, including in the Kremlin, see value in this type of influence operation and perceive it to be effective,” the cable added.

VOA reached out to the Russian Embassy for comment on the cable warnings but so far has not received a response.

Russia has routinely denied allegations it interferes in foreign elections. However, last November, Wagner chief Yevgeny Prigozhin appeared to admit culpability for interfering in U.S. elections in a social media post.

“Gentlemen, we interfered, we interfere and we will interfere,” Prigozhin said.

U.S. officials assess that, in addition to Russia’s efforts to sow doubt surrounding the 2016 and 2020 elections in the United States, Russian campaigns have targeted countries in Asia, Europe, the Middle East and South America.

The goal, they say, is specifically to erode public confidence in election results and to paint the newly elected governments as illegitimate — using internet trolls, social media influencers, proxy websites linked to Russian intelligence and even Russian state-run media channels like RT and Sputnik.

And even though Russia’s resources have been strained by its invasion of Ukraine, Moscow’s election interference efforts do not seem to be slowing down.

It is “a fairly low cost, low barrier to entry operation,” said a senior U.S. intelligence official, who spoke on the condition of anonymity in order to discuss the intelligence assessment.

“In many cases they’re amplifying existing domestic narratives that kind of question the integrity of elections,” the official said. “This is a very efficient use of resources. All they’re doing is magnifying claims that it’s unfair or it didn’t work or it’s chaotic.”

U.S. officials said they have started giving more detailed, confidential briefings to select countries that are being targeted by Russia. Some of the countries, they said, have likewise promised to share intelligence gathered from their own investigations.

Additionally, the cable makes a series of recommendations to counter the threat from the Russian disinformation campaigns, including for countries to expose, sanction and even expel any Russian officials involved in spreading misinformation or disinformation.

The cable also encourages democratic countries to engage in information campaigns to share factual information about their elections and to turn to independent election observers to assess and affirm the integrity of any elections.


Philippines Orders Military to Stop Using AI Apps Due to Security Risks

The Philippine defense chief has ordered all defense personnel and the 163,000-member military to refrain from using digital applications that harness artificial intelligence to generate personal portraits, saying they could pose security risks.

Defense Secretary Gilberto Teodoro Jr. issued the order in a Saturday memorandum, as Philippine forces have been working to weaken decades-old communist and Muslim insurgencies and defend territorial interests in the disputed South China Sea.

The Department of National Defense on Friday confirmed the authenticity of the memo, which has been circulating online in recent days, but did not provide other details, including what prompted Teodoro to issue the prohibition.

Teodoro specifically warned against the use of a digital app that requires users to submit at least 10 pictures of themselves and then harnesses AI to create “a digital person that mimics how a real individual speaks and moves.” Such apps pose “significant privacy and security risks,” he said.

“This seemingly harmless and amusing AI-powered application can be maliciously used to create fake profiles that can lead to identity theft, social engineering, phishing attacks and other malicious activities,” Teodoro said. “There has already been a report of such a case.”

Teodoro ordered all defense and military personnel “to refrain from using AI photo generator applications and practice vigilance in sharing information online” and said their actions should adhere to the Philippines Defense Department’s values and policies.
