A correspondent addressed the question to Lavrov as the minister and his entourage walked down a corridor during the BRICS summit in Johannesburg
…
New artificial intelligence tools that write human-like prose and create stunning images have taken the world by storm. But these awe-inspiring technologies are not creating something out of nothing; they’re trained on lots and lots of data, some of which come from works under copyright protection.
Now, the writers, artists and others who own the rights to the material used to teach ChatGPT and other generative AI tools want to stop what they see as blatant copyright infringement of mass proportions.
With billions of dollars at stake, U.S. courts will most likely have to sort out who owns what, using the 1976 Copyright Act, the same law that has determined who owns much of the content published on the internet.
U.S. copyright law seeks to strike a balance between protecting the rights of content creators and fostering creativity and innovation. Among other things, the law gives content creators the exclusive right to reproduce their original work and to prepare derivative works.
But it also provides for an exception. Known as “fair use,” it permits the use of copyrighted material without the copyright holder’s permission for content such as criticism, comment, news reporting, teaching and research.
On the one hand, “we want to allow people who have currently invested time, money, creativity to reap the rewards of what they have done,” said Sean O’Connor, a professor of law at George Mason University. “On the other hand, we don’t want to give them such strong rights that we inhibit the next generation of innovation.”
Is AI ‘scraping’ fair use?
The development of generative AI tools is testing the limits of “fair use,” pitting content creators against technology companies, with the outcome of the dispute promising wide-ranging implications for innovation and society at large.
In the 10 months since ChatGPT’s groundbreaking launch, AI companies have faced a rapidly increasing number of lawsuits over content used to train generative AI tools. The plaintiffs are seeking damages and want the courts to end the alleged infringement.
In January, three visual artists filed a proposed class-action lawsuit against Stability AI Ltd. and two others in San Francisco, alleging that Stability “scraped” more than 5 billion images from the internet to train its popular image generator Stable Diffusion, without the consent of copyright holders.
Stable Diffusion is a “21st-century collage tool” that “remixes the copyrighted works of millions of artists whose work was used as training data,” according to the lawsuit.
In February, stock photo company Getty Images filed its own lawsuit against Stability AI in both the United States and Britain, saying the company copied more than 12 million photos from Getty’s collection without permission or compensation.
In June, two U.S.-based authors sued OpenAI, the creator of ChatGPT, claiming the company’s training data included nearly 300,000 books pulled from illegal “shadow library” websites that offer copyrighted books.
“A large language model’s output is entirely and uniquely reliant on the material in its training dataset,” the lawsuit says.
Last month, American comedian and author Sarah Silverman and two other writers sued OpenAI and Meta, the parent company of Facebook, over the same claims, saying their chatbots were trained on books that had been illegally acquired.
The lawsuit against OpenAI includes what it describes as “very accurate summaries” of the authors’ books generated by ChatGPT, suggesting the company illegally “copied” and then used them to train the chatbot.
The artificial intelligence companies have rejected the allegations and asked the courts to dismiss the lawsuits.
In a court filing in April, Stability AI, research lab Midjourney and online art gallery DeviantArt wrote that visual artists who sue “fail to identify a single allegedly infringing output image, let alone one that is substantially similar to any of their copyrighted works.”
For its part, OpenAI has defended its use of copyrighted material as “fair use,” saying it pulled the works from publicly available datasets on the internet.
The cases are slowly making their way through the courts. It is too early to say how judges will decide.
Last month, a federal judge in San Francisco said he was inclined to toss out most of a lawsuit brought by the three artists against Stability AI but indicated that the claim of direct infringement could proceed.
“The big question is fair use,” said Robert Brauneis, a law professor and co-director of the Intellectual Property Program at George Washington University. “I would not be surprised if some of the courts came out in different ways, that some of the cases said, ‘Yes, fair use.’ And others said, ‘No.’”
If the courts are split, the question could eventually go to the Supreme Court, Brauneis said.
Assessing copyright claims
Training generative AI tools to create new works raises two legal questions: Is the data use authorized? And is the new work it creates “derivative” or “transformative”?
The answer is not clear-cut, O’Connor said.
“On the one hand, what the supporters of the generative AI models are saying is that they are acting not much differently than we as humans would do,” he said. “When we read books, watch movies, listen to music, and if we are talented, then we use those to train ourselves as models.
“The counterargument is that … it is categorically different from what humans do when they learn how to become creative themselves.”
While artificial intelligence companies claim their use of the data is fair, O’Connor said they still have to prove that the use was authorized.
“I think that’s a very close call, and I think they may lose on that,” he said.
On the other hand, the AI models can probably avoid liability for generating content that “seems sort of the style of a current author” but is not the same.
“That claim is probably not going to succeed,” O’Connor said. “It will be seen as just a different work.”
But Brauneis said content creators have a strong claim: The AI-generated output will likely compete with the original work.
Imagine you’re a magazine editor who wants an illustration to accompany an article about a particular bird, Brauneis suggested. You could do one of two things: Commission an artist or ask a generative AI tool like Stable Diffusion to create it for you. After a few attempts with the latter, you’ll probably get an image that you can use.
“One of the most important questions to ask about in fair use is, ‘Is this use a substitute, or is it competing with the work of art that is being copied?’” Brauneis said. “And the answer here may be yes. And if it is [competing], that really weighs strongly against fair use.”
This is not the first time that technology companies have been sued over their use of copyrighted material.
In 2005, the Authors Guild filed a class-action lawsuit against Google over its digital books project, alleging “massive copyright infringement.”
In 2015, an appeals court ruled that the project, by then renamed Google Books, was protected under the fair use doctrine.
In 2007, Viacom sued both Google and YouTube for allowing users to upload and view copyrighted material owned by Viacom, including complete episodes of TV shows. The case was later settled out of court.
For Brauneis, the current “Wild West era of creating AI models” recalls YouTube’s freewheeling early days.
“They just wanted to get viewers, and they were willing to take a legal risk to do that,” Brauneis said. “That’s not the way YouTube operates now. YouTube has all sorts of precautions to identify copyrighted content that has not been permitted to be placed on YouTube and then to take it down.”
Artificial intelligence companies may make a similar pivot.
They may have justified using copyrighted material to test out their technology. But now that their models are working, they “may be willing to sit down and think about how to license content,” Brauneis said.
…
The U.S. State Department, in coordination with other agencies from President Joe Biden’s administration, is seeking a six-month extension of the U.S.-China Science and Technology Agreement (STA) that is due to expire on August 27.
The short-term extension comes as several Republican congressional members voiced concerns that China has previously leveraged the agreement to advance its military objectives and may continue to do so.
The State Department said the brief extension will keep the STA in force while the United States negotiates with China to amend and strengthen the agreement. It does not commit the U.S. to a longer-term extension.
“We are clear-eyed to the challenges posed by the PRC’s national strategies on science and technology, Beijing’s actions in this space, and the threat they pose to U.S. national security and intellectual property, and are dedicated to protecting the interests of the American people,” a State Department spokesperson said Wednesday.
But congressional critics worry that research partnerships organized under the STA may have developed technologies that could later be used against the United States.
“In 2018, the National Oceanic and Atmospheric Administration (NOAA) organized a project with China’s Meteorological Administration — under the STA — to launch instrumented balloons to study the atmosphere,” said Republican Representatives Mike Gallagher, Elise Stefanik and others in a June 27 letter to U.S. Secretary of State Antony Blinken.
“As you know, a few years later, the PRC used similar balloon technology to surveil U.S. military sites on U.S. territory — a clear violation of our sovereignty.”
The STA was originally signed in 1979 by then-U.S. President Jimmy Carter and then-PRC leader Deng Xiaoping. Under the agreement, the two countries cooperate in fields including agriculture, energy, space, health, environment, earth sciences and engineering, as well as educational and scholarly exchanges.
The agreement has been renewed roughly every five years since its inception.
The most recent extension was in 2018.
…
A Kenyan court has given Facebook’s parent company, Meta, and the content moderators who are suing it for unfair dismissal 21 days to resolve their dispute out of court, a court order showed on Wednesday.
The 184 content moderators are suing Meta and two subcontractors after they say they lost their jobs with one of the firms, Sama, for organizing a union.
The plaintiffs say they were then blacklisted from applying for the same roles at the second firm, Luxembourg-based Majorel, after Facebook switched contractors.
“The parties shall pursue an out of court settlement of this petition through mediation,” said the order by the Employment and Labour Relations Court, which was signed by lawyers for the plaintiffs, Meta, Sama and Majorel.
Kenya’s former chief justice, Willy Mutunga, and Hellen Apiyo, the acting commissioner for labor, will serve as mediators, the order said. If the parties fail to resolve the case within 21 days, the case will proceed before the court, it said.
Meta, Sama and Majorel did not immediately respond to requests for comment.
A judge ruled in April that Meta could be sued by the moderators in Kenya, even though it has no official presence in the East African country.
The case could have implications for how Meta works with content moderators globally. The U.S. social media giant works with thousands of moderators around the world, who review graphic content posted on its platform.
Meta has also been sued in Kenya by a former moderator over accusations of poor working conditions at Sama, and by two Ethiopian researchers and a rights institute, which accuse it of letting violent and hateful posts from Ethiopia flourish on Facebook.
Those cases are ongoing.
Meta said in May 2022, in response to the first case, that it required partners to provide industry-leading conditions. On the Ethiopia case, it said in December that hate speech and incitement to violence were against the rules of Facebook and Instagram.
…
An Indian spacecraft has landed on the moon, becoming the first craft to touch down in the lunar south pole region, the country’s space agency said.
India’s attempt to land on the moon Wednesday came days after Russia’s Luna-25 lander, also headed for the unexplored south pole, crashed into the moon.
It was India’s second attempt to reach the south pole — four years ago, India’s lander crashed during its final approach.
India has become the fourth country to achieve what is called a “soft landing” on the moon — a feat previously accomplished by the United States, China and the former Soviet Union.
However, none of those lunar missions landed at the south pole.
The south pole region, where the terrain is rough and rugged, had never before been explored.
The current mission, called Chandrayaan-3, blasted into space on July 14.
…
The war in Ukraine has not reached a stalemate, despite growing concern over the pace of territorial gains from Kyiv’s counteroffensive, White House national security adviser Jake Sullivan said in response to a reporter’s question.
“No, we do not believe the conflict is at a stalemate,” Sullivan said.
He called the situation “dynamic,” with both Ukrainian and Russian forces simultaneously defending and attacking depending on the disposition of forces along the front line.
“Russia will attack in places, and they are attacking. But of course Ukraine is also attacking, and Ukraine is also making gains,” Sullivan said, adding that this is especially true in the south.
The White House national security adviser called Ukraine’s counteroffensive methodical and adaptive to challenges, and said the military command also needs to fight “sustainably” to be able to keep pressure on Russian forces over the long term.
Sullivan said the United States remains fully committed to supporting Ukraine’s fight.
He added that he would not venture to predict how events on the battlefield will unfold.
“We need to continue to build up the fundamental elements of both defense and offense, including the artillery ammunition and the mobility Ukraine needs to be able to both hold positions and take them,” he said.
Sullivan said: “I cannot forecast or predict how things will end in the course of this war…, but we continue to support Ukraine in its counteroffensive efforts.”
In its evening update, the General Staff of Ukraine’s Armed Forces said the country’s defense forces were continuing offensive operations in the Melitopol direction, consolidating the positions they have reached and carrying out counterbattery measures.
…
Google, Facebook, TikTok and other Big Tech companies operating in Europe are facing one of the most far-reaching efforts to clean up what people encounter online.
The first phase of the European Union’s groundbreaking new digital rules will take effect this week. The Digital Services Act is part of a suite of tech-focused regulations crafted by the 27-nation bloc — long a global leader in cracking down on tech giants.
The DSA, which the biggest platforms must start following Friday, is designed to keep users safe online and stop the spread of harmful content that’s either illegal or violates a platform’s terms of service, such as promotion of genocide or anorexia. It also looks to protect Europeans’ fundamental rights like privacy and free speech.
Some online platforms, which could face billions in fines if they don’t comply, have already started making changes.
Here’s a look at what’s happening this week:
Which platforms are affected?
So far, 19. They include eight social media platforms: Facebook, TikTok, Twitter, YouTube, Instagram, LinkedIn, Pinterest and Snapchat.
There are five online marketplaces: Amazon, Booking.com, Google Shopping, China’s Alibaba AliExpress and Germany’s Zalando.
Mobile app stores Google Play and Apple’s App Store are also covered, as are Google Search and Microsoft’s Bing search engine.
Google Maps and Wikipedia round out the list.
What about other online companies?
The EU’s list is based on numbers submitted by the platforms. Those with 45 million or more users — or 10% of the EU’s population — will face the DSA’s highest level of regulation.
Brussels insiders, however, have pointed to some notable omissions from the EU’s list, like eBay, Airbnb, Netflix and even PornHub. The list isn’t definitive, and it’s possible other platforms may be added later on.
Any business providing digital services to Europeans will eventually have to comply with the DSA. They will face fewer obligations than the biggest platforms, however, and have another six months before they must fall in line.
Citing uncertainty over the new rules, Meta Platforms has held off launching its Twitter rival, Threads, in the EU.
What’s changing?
Platforms have started rolling out new ways for European users to flag illegal online content and dodgy products, which companies will be obligated to take down quickly and objectively.
Amazon opened a new channel for reporting suspected illegal products and is providing more information about third-party merchants.
TikTok gave users an “additional reporting option” for content, including advertising, that they believe is illegal. Categories such as hate speech and harassment, suicide and self-harm, and misinformation or frauds and scams will help them pinpoint the problem.
Then, a “new dedicated team of moderators and legal specialists” will determine whether flagged content violates its policies or is unlawful and should be taken down, according to the app, which is owned by Chinese parent company ByteDance.
TikTok says the reason for a takedown will be explained to the person who posted the material and the one who flagged it, and decisions can be appealed.
TikTok users can turn off systems that recommend videos based on what a user has previously viewed. Such systems have been blamed for leading social media users to increasingly extreme posts. If personalized recommendations are turned off, TikTok’s feeds will instead suggest videos to European users based on what’s popular in their area and around the world.
The DSA prohibits targeting vulnerable categories of people, including children, with ads.
Snapchat said advertisers won’t be able to use personalization and optimization tools for teens in the EU and U.K. Snapchat users who are 18 and older also would get more transparency and control over ads they see, including “details and insight” on why they’re shown specific ads.
TikTok made similar changes, stopping users 13 to 17 from getting personalized ads “based on their activities on or off TikTok.”
Is there pushback?
Zalando, a German online fashion retailer, has filed a legal challenge over its inclusion on the DSA’s list of the largest online platforms, arguing that it’s being treated unfairly.
Nevertheless, Zalando is launching content flagging systems for its website even though there’s little risk of illegal material showing up among its highly curated collection of clothes, bags and shoes.
The company has supported the DSA, said Aurelie Caulier, Zalando’s head of public affairs for the EU.
“It will bring loads of positive changes” for consumers, she said. But “generally, Zalando doesn’t have systemic risk [that other platforms pose]. So that’s why we don’t think we fit in that category.”
Amazon has filed a similar case with a top EU court.
What happens if companies don’t follow the rules?
Officials have warned tech companies that violations could bring fines worth up to 6% of their global revenue — which could amount to billions — or even a ban from the EU. But don’t expect penalties to come right away for individual breaches, such as failing to take down a specific video promoting hate speech.
Instead, the DSA is more about whether tech companies have the right processes in place to reduce the harm that their algorithm-based recommendation systems can inflict on users. Essentially, they’ll have to let the European Commission, the EU’s executive arm and top digital enforcer, look under the hood to see how their algorithms work.
EU officials “are concerned with user behavior on the one hand, like bullying and spreading illegal content, but they’re also concerned about the way that platforms work and how they contribute to the negative effects,” said Sally Broughton Micova, an associate professor at the University of East Anglia.
That includes looking at how the platforms work with digital advertising systems, which could be used to profile users for harmful material like disinformation, or how their livestreaming systems function, which could be used to instantly spread terrorist content, said Broughton Micova, who’s also academic co-director at the Centre on Regulation in Europe, a Brussels-based think tank.
Under the rules, the biggest platforms will have to identify and assess potential systemic risks and whether they’re doing enough to reduce them. These risk assessments are due by the end of August and then they will be independently audited.
The audits are expected to be the main tool to verify compliance, though the EU’s plan has faced criticism for lacking detail on how the process will work.
What about the rest of the world?
Europe’s changes could have global impact. Wikipedia is tweaking some policies and modifying its terms of service to provide more information on “problematic users and content.” Those alterations won’t be limited to Europe, said the nonprofit Wikimedia Foundation, which hosts the community-powered encyclopedia.
“The rules and processes that govern Wikimedia projects worldwide, including any changes in response to the DSA, are as universal as possible. This means that changes to our Terms of Use and Office Actions Policy will be implemented globally,” it said in a statement.
It’s going to be hard for tech companies to limit DSA-related changes, said Broughton Micova, adding that digital ad networks aren’t isolated to Europe and that social media influencers can have global reach.
The regulations are “dealing with multichannel networks that operate globally. So there is going to be a ripple effect once you have kind of mitigations that get taken into place,” she said.
…
Meta Platforms on Tuesday launched the web version of its new text-first social media platform Threads, in a bid to retain professional users and gain an edge over rival X, formerly Twitter.
Threads users will now be able to access the microblogging platform by logging in to its website from their computers, the Facebook and Instagram owner said.
The widely anticipated rollout could help Threads gain broader acceptance among power users such as brands, company accounts, advertisers and journalists, who can now take advantage of the platform on a bigger screen.
Threads, which crossed 100 million sign-ups for the app within five days of its launch on July 5, saw a decline in its popularity as users returned to the more familiar platform X after the initial rush.
In just over a month, daily active users of the Android version of the Threads app dropped to 10.3 million from a peak of 49.3 million, according to an August 10 report by analytics platform Similarweb.
The company will be adding more functionality to the web experience in the coming weeks, Meta said.
…