The Tana River delta on the Kenyan coast includes a vast range of habitats and a remarkably productive ecosystem, says UNESCO. It is also home to many bird species, including some classified as near threatened. Residents are helping local conservation efforts with an app called eBird. Juma Majanga reports.
…
U.S. federal prosecutors are stepping up their pursuit of suspects who use artificial intelligence tools to manipulate or create child sex abuse images, as law enforcement fears the technology could spur a flood of illicit material.
The U.S. Justice Department has brought two criminal cases this year against defendants accused of using generative AI systems, which create text or images in response to user prompts, to produce explicit images of children.
“There’s more to come,” said James Silver, the chief of the Justice Department’s Computer Crime and Intellectual Property Section, predicting further similar cases.
“What we’re concerned about is the normalization of this,” Silver said in an interview. “AI makes it easier to generate these kinds of images, and the more that are out there, the more normalized this becomes. That’s something that we really want to stymie and get in front of.”
The rise of generative AI has sparked concerns at the Justice Department that the rapidly advancing technology will be used to carry out cyberattacks, boost the sophistication of cryptocurrency scammers and undermine election security.
Child sex abuse cases mark some of the first times that prosecutors are trying to apply existing U.S. laws to alleged crimes involving AI, and even successful convictions could face appeals as courts weigh how the new technology may alter the legal landscape around child exploitation.
Prosecutors and child safety advocates say generative AI systems can allow offenders to morph and sexualize ordinary photos of children and warn that a proliferation of AI-produced material will make it harder for law enforcement to identify and locate real victims of abuse.
The National Center for Missing and Exploited Children, a nonprofit group that collects tips about online child exploitation, receives an average of about 450 reports each month related to generative AI, according to Yiota Souras, the group’s chief legal officer.
That’s a fraction of the average of 3 million monthly reports of overall online child exploitation the group received last year.
Untested ground
Cases involving AI-generated sex abuse imagery are likely to tread new legal ground, particularly when an identifiable child is not depicted.
Silver said in those instances, prosecutors can charge obscenity offenses when child pornography laws do not apply.
Prosecutors indicted Steven Anderegg, a software engineer from Wisconsin, in May on charges including transferring obscene material. Anderegg is accused of using Stable Diffusion, a popular text-to-image AI model, to generate images of young children engaged in sexually explicit conduct and sharing some of those images with a 15-year-old boy, according to court documents.
Anderegg has pleaded not guilty and is seeking to dismiss the charges by arguing that they violate his rights under the U.S. Constitution, court documents show.
He has been released from custody while awaiting trial. His attorney was not available for comment.
Stability AI, the maker of Stable Diffusion, said the case involved a version of the AI model that was released before the company took over the development of Stable Diffusion. The company said it has made investments to prevent “the misuse of AI for the production of harmful content.”
Federal prosecutors also charged a U.S. Army soldier with child pornography offenses in part for allegedly using AI chatbots to morph innocent photos of children he knew to generate violent sexual abuse imagery, court documents show.
The defendant, Seth Herrera, pleaded not guilty and has been ordered held in jail to await trial. Herrera’s lawyer did not respond to a request for comment.
Legal experts said that while sexually explicit depictions of actual children are covered under child pornography laws, the landscape around obscenity and purely AI-generated imagery is less clear.
The U.S. Supreme Court in 2002 struck down as unconstitutional a federal law that criminalized any depiction, including computer-generated imagery, appearing to show minors engaged in sexual activity.
“These prosecutions will be hard if the government is relying on the moral repulsiveness alone to carry the day,” said Jane Bambauer, a law professor at the University of Florida who studies AI and its impact on privacy and law enforcement.
Federal prosecutors have secured convictions in recent years against defendants who possessed sexually explicit images of children that also qualified as obscene under the law.
Advocates are also focusing on preventing AI systems from generating abusive material.
Two nonprofit advocacy groups, Thorn and All Tech Is Human, secured commitments in April from some of the largest players in AI including Alphabet’s Google, Amazon.com, Facebook and Instagram parent Meta Platforms, OpenAI and Stability AI to avoid training their models on child sex abuse imagery and to monitor their platforms to prevent its creation and spread.
“I don’t want to paint this as a future problem, because it’s not. It’s happening now,” said Rebecca Portnoff, Thorn’s director of data science.
“As far as whether it’s a future problem that will get completely out of control, I still have hope that we can act in this window of opportunity to prevent that.”
…
BEIJING — Intel products sold in China should be subject to a security review, the Cybersecurity Association of China (CSAC) said on Wednesday, alleging the U.S. chipmaker has “constantly harmed” the country’s national security and interests.
While CSAC is an industry group rather than a government body, it has close ties to the Chinese state, and the raft of accusations against Intel, published in a long post on its official WeChat account, could trigger a security review from China’s powerful cyberspace regulator, the Cyberspace Administration of China (CAC).
“It is recommended that a network security review is initiated on the products Intel sells in China, so as to effectively safeguard China’s national security and the legitimate rights and interests of Chinese consumers,” CSAC said.
Last year, the CAC barred domestic operators of key infrastructure from buying products made by U.S. memory chipmaker Micron Technology Inc after deeming the company’s products had failed its network security review.
Intel did not immediately respond to a request for comment. The company’s shares were down 2.7% in U.S. premarket trading.
BEIJING — China’s state security ministry said that a foreign company had been found to have illegally conducted geographic mapping activities in the country under the guise of autonomous driving research and outsourcing to a licensed Chinese mapping firm.
The ministry did not disclose the names of either company in a statement on its WeChat account on Wednesday.
The foreign company, ineligible for geographic surveying and mapping activities in China, “purchased a number of cars and equipped them with high-precision radar, GPS, optical lenses and other gear,” read the statement.
In addition to directly instructing the Chinese company to conduct surveying and mapping in many Chinese provinces, the foreign company appointed foreign technicians to give “practical guidance” to mapping staffers with the Chinese firm, enabling the latter to transfer its acquired data overseas, the ministry alleged.
Most of the data the foreign company collected has been determined to be state secrets, according to the ministry, which said state security organs, together with relevant departments, had carried out joint law enforcement activities.
The affected companies and relevant responsible personnel have been held legally accountable, the state security ministry said, without elaborating.
China strictly regulates mapping activities and data, which are key to developing autonomous driving, due to national security concerns. No foreign firm is licensed to conduct mapping in China, and data collected in the country by vehicles made by foreign automakers such as Tesla has to be stored locally.
The U.S. Commerce Department has also proposed prohibiting Chinese software and hardware in connected and autonomous vehicles on American roads due to national security concerns.
Also on Wednesday, a Chinese cybersecurity industry group recommended that Intel products sold in China should be subject to a security review, alleging the U.S. chipmaker has “constantly harmed” the country’s national security and interests.
…
LONDON — Some of the most prominent artificial intelligence models are falling short of European regulations in key areas such as cybersecurity resilience and discriminatory output, according to data seen by Reuters.
The EU had long debated new AI regulations before OpenAI released ChatGPT to the public in late 2022. The record-breaking popularity and ensuing public debate over the supposed existential risks of such models spurred lawmakers to draw up specific rules around “general-purpose” AIs.
Now a new tool designed by Swiss startup LatticeFlow and partners, and supported by European Union officials, has tested generative AI models developed by big tech companies like Meta and OpenAI across dozens of categories in line with the bloc’s wide-sweeping AI Act, which is coming into effect in stages over the next two years.
Awarding each model a score between 0 and 1, a leaderboard published by LatticeFlow on Wednesday showed models developed by Alibaba, Anthropic, OpenAI, Meta and Mistral all received average scores of 0.75 or above.
However, the company’s “Large Language Model (LLM) Checker” uncovered some models’ shortcomings in key areas, spotlighting where companies may need to divert resources in order to ensure compliance.
Companies failing to comply with the AI Act will face fines of 35 million euros ($38 million) or 7% of global annual turnover.
Mixed results
At present, the EU is still trying to establish how the AI Act’s rules around generative AI tools like ChatGPT will be enforced, convening experts to craft a code of practice governing the technology by spring 2025.
But LatticeFlow’s test, developed in collaboration with researchers at Swiss university ETH Zurich and Bulgarian research institute INSAIT, offers an early indicator of specific areas where tech companies risk falling short of the law.
For example, discriminatory output has been a persistent issue in the development of generative AI models, reflecting human biases around gender, race and other areas when prompted.
When testing for discriminatory output, LatticeFlow’s LLM Checker gave OpenAI’s “GPT-3.5 Turbo” a relatively low score of 0.46. For the same category, Alibaba Cloud’s “Qwen1.5 72B Chat” model received only a 0.37.
Testing for “prompt hijacking,” a type of cyberattack in which hackers disguise a malicious prompt as legitimate to extract sensitive information, the LLM Checker awarded Meta’s “Llama 2 13B Chat” model a score of 0.42. In the same category, French startup Mistral’s “Mixtral 8x7B Instruct” model received 0.38.
“Claude 3 Opus,” a model developed by Google-backed Anthropic, received the highest average score, 0.89.
The test was designed in line with the text of the AI Act, and will be extended to encompass further enforcement measures as they are introduced. LatticeFlow said the LLM Checker would be freely available for developers to test their models’ compliance online.
Petar Tsankov, the firm’s CEO and cofounder, told Reuters the test results were positive overall and offered companies a roadmap for them to fine-tune their models in line with the AI Act.
“The EU is still working out all the compliance benchmarks, but we can already see some gaps in the models,” he said. “With a greater focus on optimizing for compliance, we believe model providers can be well-prepared to meet regulatory requirements.”
Meta declined to comment. Alibaba, Anthropic, Mistral, and OpenAI did not immediately respond to requests for comment.
While the European Commission cannot verify external tools, the body has been informed throughout the LLM Checker’s development and described it as a “first step” in putting the new laws into action.
A spokesperson for the European Commission said: “The Commission welcomes this study and AI model evaluation platform as a first step in translating the EU AI Act into technical requirements.”
…
LONDON — The world is on the brink of a new age of electricity with fossil fuel demand set to peak by the end of the decade, meaning surplus oil and gas supplies could drive investment into green energy, the International Energy Agency said on Wednesday.
But it also flagged a high level of uncertainty as conflicts embroil the oil and gas-producing Middle East and Russia and as countries representing half of global energy demand have elections in 2024.
“In the second half of this decade, the prospect of more ample – or even surplus – supplies of oil and natural gas, depending on how geopolitical tensions evolve, would move us into a very different energy world,” IEA Executive Director Fatih Birol said in a release alongside its annual report.
Surplus fossil fuel supplies would likely lead to lower prices and could enable countries to dedicate more resources to clean energy, moving the world into an “age of electricity,” Birol said.
In the nearer term, there is also the possibility of reduced supplies should the Middle East conflict disrupt oil flows.
The IEA said such conflicts highlighted the strain on the energy system and the need for investment to speed up the transition to “cleaner and more secure technologies.”
A record-high level of clean energy came online globally last year, the IEA said, including more than 560 gigawatts (GW) of renewable power capacity. Around $2 trillion is expected to be invested in clean energy in 2024, almost double the amount invested in fossil fuels.
In its scenario based on current government policies, global oil demand peaks before 2030 at just less than 102 million barrels/day (mb/d), and then falls back to 2023 levels of 99 mb/d by 2035, largely because of lower demand from the transport sector as electric vehicle use increases.
The report also lays out the likely impact on future oil prices if stricter environmental policies are implemented globally to combat climate change.
In the IEA’s current policies scenario, oil prices decline to $75 per barrel in 2050 from $82 per barrel in 2023.
That compares to $25 per barrel in 2050 should government actions fall in line with the goal of cutting energy sector emissions to net zero by then.
Although the report forecasts an increase in demand for liquefied natural gas (LNG) of 145 billion cubic meters (bcm) between 2023 and 2030, it said this would be outpaced by an increase in export capacity of around 270 bcm over the same period.
“The overhang in LNG capacity looks set to create a very competitive market at least until this is worked off, with prices in key importing regions averaging $6.5-8 per million British thermal units (mmBtu) to 2035,” the report said.
Asian LNG prices, regarded as an international benchmark, are currently around $13 per mmBtu.
…