U.S.: Recent aid packages for Ukraine have been smaller because there is no additional funding
…
Indigenous rangers in northern Australia have started managing herds of feral animals from space. In the largest project of its kind in Australia, the so-called Space Cows project involves tagging and then tracking a thousand wild cattle and buffalo via satellite.
Water buffalo were imported into Australia’s Northern Territory in the 19th century as working animals and meat for remote settlements. When those communities were abandoned, the animals were released into the wild.
Their numbers have grown, and feral buffaloes can cause huge environmental damage. In wetlands, they move along pathways called swim channels, which have allowed salt water to flow into freshwater plains. This has led to the degradation and loss of large areas of paperbark forest and natural waterholes, as well as the spread of weeds.
Under the so-called Space Cows program, feral cattle and buffaloes are being rounded up, often by helicopter, tied to trees, and fitted with solar-powered tags that can be tracked by satellite.
Scientists say the real-time data will be critical to controlling and predicting the movement of the feral herds, which are notorious for trashing the landscape.
Most feral buffalo are found on Aboriginal land, and researchers are working closely with Indigenous rangers. They carry out sporadic buffalo culls, and there are hopes that First Nations communities can benefit economically from well-managed feral herds.
The technology will allow Indigenous rangers to predict where cattle and buffalo are going and cull them or fence off important cultural or environmental sites. The data will help rangers stop the animals trampling sacred ceremonial areas and destroying culturally significant waterways. Scientists say the satellite information will allow them to predict when herds might head to certain waterways in warm weather allowing rangers to intervene.
In recent years, thousands of wild buffalo have been exported from Australia to Southeast Asia.
Andrew Hoskins is a biologist at the CSIRO, the Commonwealth Scientific and Industrial Research Organization, Australia’s national science agency.
He told the Australian Broadcasting Corp’s AM Program this is the first time feral animals have been monitored from space.
“This really, you know, large scale tracking project, (is) probably the largest from a wildlife or a buffalo tracking perspective that has ever been done. The novel part, I suppose, is then that links through to a space-based satellite system,” said Hoskins.
Australia has had an often-disastrous experience with bringing in animals from overseas since European colonization began in the late 1700s. It is not just buffaloes that cause immense environmental damage.
Cane toads — brought to the country in a failed attempt to control pests on sugar cane plantations in the 1930s — are prolific breeders and feeders that can dramatically affect native insects, frogs, reptiles and other small creatures. Their skin contains a toxin that can also kill native predators.
Feral cats kill millions of birds in Australia each year, while foxes, pigs and camels cause widespread ecological damage across Australia.
Yellow crazy ants are one of the world’s worst invasive species. Authorities believe they arrived in Australia accidentally through shipping ports. They have been recorded in Queensland and New South Wales states as well as the Northern Territory. The ants are a highly aggressive species and spray formic acid, which burns the skin of their prey, including small mammals, turtle hatchlings and bird chicks.
…
Country singers, romance novelists, video game artists and voice actors are appealing to the U.S. government for relief — as soon as possible — from the threat that artificial intelligence poses to their livelihoods.
“Please regulate AI. I’m scared,” wrote a podcaster concerned about his voice being replicated by AI in one of thousands of letters recently submitted to the U.S. Copyright Office.
Technology companies, by contrast, are largely happy with the status quo that has enabled them to gobble up published works to make their AI systems better at mimicking what humans do.
The nation’s top copyright official hasn’t yet taken sides. She told The Associated Press she’s listening to everyone as her office weighs whether copyright reforms are needed for a new era of generative AI tools that can spit out compelling imagery, music, video and passages of text.
“We’ve received close to 10,000 comments,” said Shira Perlmutter, the U.S. register of copyrights, in an interview. “Every one of them is being read by a human being, not a computer. And I myself am reading a large part of them.”
What’s at stake?
Perlmutter directs the U.S. Copyright Office, which registered more than 480,000 copyrights last year covering millions of individual works but is increasingly being asked to register works that are AI-generated. So far, copyright claims for fully machine-generated content have been soundly rejected because copyright laws are designed to protect works of human authorship.
But, Perlmutter asks, as humans feed content into AI systems and give instructions to influence what comes out, “is there a point at which there’s enough human involvement in controlling the expressive elements of the output that the human can be considered to have contributed authorship?”
That’s one question the Copyright Office has put to the public.
A bigger one — the question that’s fielded thousands of comments from creative professions — is what to do about copyrighted human works that are being pulled from the internet and other sources and ingested to train AI systems, often without permission or compensation.
More than 9,700 comments were sent to the Copyright Office, part of the Library of Congress, before an initial comment period closed in late October. Another round of comments is due by December 6. After that, Perlmutter’s office will work to advise Congress and others on whether reforms are needed.
What are artists saying?
Addressing the “Ladies and Gentlemen of the US Copyright Office,” the Family Ties actor and filmmaker Justine Bateman said she was disturbed that AI models were “ingesting 100 years of film” and TV in a way that could destroy the structure of the film business and replace large portions of its labor pipeline.
It “appears to many of us to be the largest copyright violation in the history of the United States,” Bateman wrote. “I sincerely hope you can stop this practice of thievery.”
Airing some of the same AI concerns that fueled this year’s Hollywood strikes, television showrunner Lilla Zuckerman (Poker Face) said her industry should declare war on what is “nothing more than a plagiarism machine” before Hollywood is “coopted by greedy and craven companies who want to take human talent out of entertainment.”
The music industry is also threatened, said Nashville-based country songwriter Marc Beeson, who’s written tunes for Carrie Underwood and Garth Brooks. Beeson said AI has potential to do good but “in some ways, it’s like a gun — in the wrong hands, with no parameters in place for its use, it could do irreparable damage to one of the last true American art forms.”
While most commenters were individuals, their concerns were echoed by big music publishers — Universal Music Group called the way AI is trained “ravenous and poorly controlled” — as well as author groups and news organizations including The New York Times and The Associated Press.
Is it fair use?
What leading tech companies like Google, Microsoft and ChatGPT-maker OpenAI are telling the Copyright Office is that their training of AI models fits into the “fair use” doctrine that allows for limited uses of copyrighted materials such as for teaching, research or transforming the copyrighted work into something different.
“The American AI industry is built in part on the understanding that the Copyright Act does not proscribe the use of copyrighted material to train Generative AI models,” says a letter from Meta Platforms, the parent company of Facebook, Instagram and WhatsApp. The purpose of AI training is to identify patterns “across a broad body of content,” not to “extract or reproduce” individual works, it added.
So far, courts have largely sided with tech companies in interpreting how copyright laws should treat AI systems. In a defeat for visual artists, a federal judge in San Francisco last month dismissed much of the first big lawsuit against AI image-generators, though he allowed some of the case to proceed.
Most tech companies cite as precedent Google’s success in beating back legal challenges to its online book library. The U.S. Supreme Court in 2016 let stand lower court rulings that rejected authors’ claim that Google’s digitizing of millions of books and showing snippets of them to the public amounted to copyright infringement.
But that’s a flawed comparison, argued former law professor and bestselling romance author Heidi Bond, who writes under the pen name Courtney Milan. Bond said she agrees that “fair use encompasses the right to learn from books,” but Google Books obtained legitimate copies held by libraries and institutions, whereas many AI developers are scraping works of writing through “outright piracy.”
Perlmutter said this is what the Copyright Office is trying to help sort out.
“Certainly, this differs in some respects from the Google situation,” Perlmutter said. “Whether it differs enough to rule out the fair use defense is the question in hand.”
…