Thailand Threatens Facebook Shutdown Over Scam Ads

Thailand said this week it is preparing to sue Facebook in a move that could see the platform shut down nationwide over scammers allegedly exploiting the social networking site to cheat local users out of tens of millions of dollars a year.

The country’s minister of digital economy and society, Chaiwut Thanakamanusorn, announced the planned lawsuit after a ministry meeting on Monday.

Ministry spokesperson Wetang Phuangsup told VOA on Thursday the case would be filed in one to two weeks, possibly by the end of the month.

“We are in the stage of gathering information, gathering evidence, and we will file to the court to issue the final judgment on how to deal with Facebook since they are a part of the scamming,” he said.

Some of the most common scams, Wetang said, involve paid advertisements on the site urging people to invest in fake companies, often using the logo of Thailand’s Securities and Exchange Commission or sham endorsements from local celebrities to lure them in.

Of the roughly 16,000 online scamming complaints filed in Thailand last year, he said, 70% to 80% involved Facebook and cost users upwards of $100 million.

“We believe that Facebook has a responsibility,” Wetang said. “Facebook is taking money from advertising a lot, and basically even taking money from Thai society as a whole. Facebook should be more responsible to society, should screen the advertising. … We believe that by doing so it would definitely decrease the investment scam in Thailand on the Facebook.”

Wetang said the ministry had been urging the company to do more to screen and vet paid ads for the past year and was now turning to the courts to possibly shut the site down as a last resort.

“If you are supporting the crime, especially on the internet, you will be liable [for] the crime, and by the law, it’s possible the court can issue the shutdown of Facebook,” he said. “By law, we can ask the court to suspend or punish all the people who support the crime, of course with evidence.”

Neither Facebook nor its parent company, Meta, replied to VOA’s repeated requests for comment or interviews.

The Asia Internet Coalition, an industry association that counts Meta among its members, acknowledged that online scamming was a growing problem across the region. Other members include Google, Amazon, Apple and X, formerly known as Twitter.

“While it’s getting challenging from the scale perspective, it’s also getting complicated and sophisticated because of the technology that has been used when it comes to application on the platforms but also how this technology can be misused,” the coalition’s secretariat, Sarthak Luthra, told VOA.

Luthra would not speak for Meta or address Thailand’s specific complaints against Facebook but said tech companies were taking steps to thwart scammers, including teaching users how to spot them.

Last year, for example, Meta launched a #StayingSafeOnline campaign in Thailand “to raise awareness about some of the most common kinds of online scams, including helping people understand the different kinds of scamsters, their tricks, and tips to stay safe online,” according to the company’s website.

Luthra said tech companies have been facing a growing number of criminal and civil penalties across the region over content on their platforms. He urged governments to give companies more room to regulate themselves and to apply “safe harbor” rules that shield them from legal liability for content created by users.

Shutting down any platform on a nationwide scale is not the answer, he said, and he warned of the unintended consequences.

“It really, first, impacts the ease of doing business and also the perception around the digital economy development of a country, so shutting down a platform is of course not a solution to a challenge in this case,” Luthra said.

“A government really needs to think of how do we promote online safety while maintaining an open internet environment,” he said. “From the economic perspective, it does impact investment sentiment, business sentiment and the ability to operate in that particular country.”

At a recent company event in Thailand, Meta said there were some 65 million Facebook users in the country, which also has the second-largest economy in Southeast Asia.

Shutting down the platform would have a “huge” impact on the vast majority of people using the site to make money legally and honestly, said Sutawan Chanprasert, executive director of DigitalReach, a digital rights group based in Thailand.

She said a shutdown would cut off a vital channel for free speech in Thailand and an important tool for independent local media outlets.

“Some of them rely predominantly on Facebook because it’s the most popular social media platform in Thailand, so they publish their content on Facebook in order to reach out to audiences because they don’t have a means to set up … a full-fledged media channel,” she said.

Taking all that away to foil scammers would be “too extreme,” Sutawan said, suggesting the government focus instead on strengthening the country’s cybercrime and security laws and enforcing them.

Ministry spokesperson Wetang said the government was aware of the collateral damage a shutdown could cause but felt it had no choice but to pursue a lawsuit that could bring it about.

“Definitely we are really concerned about the people on Facebook,” he said. “But since this is a crime that already happened, the evidence is so clear … it is impossible that we don’t take action.”

Meta Faces Backlash Over Canada News Block as Wildfires Rage

Meta is being accused of endangering lives by blocking news links in Canada at a crucial moment, when thousands have fled their homes and are desperate for wildfire updates that once would have been shared widely on Facebook.

The situation “is dangerous,” said Kelsey Worth, 35, one of nearly 20,000 residents of Yellowknife and thousands more in small towns ordered to evacuate the Northwest Territories as wildfires advanced.

She described to AFP how “insanely difficult” it has been for herself and other evacuees to find verifiable information about the fires blazing across the near-Arctic territory and other parts of Canada.

“Nobody’s able to know what’s true or not,” she said.

“And when you’re in an emergency situation, time is of the essence,” she said, explaining that many Canadians until now have relied on social media for news.

Meta on Aug. 1 started blocking the distribution of news links and articles on its Facebook and Instagram platforms in response to a recent law requiring digital giants to pay publishers for news content.

The company has been in a virtual showdown with Ottawa over the bill, which was passed in June but does not take effect until next year.

Building on similar legislation introduced in Australia, the bill aims to support a struggling Canadian news sector that has seen a flight of advertising dollars and hundreds of publications closed in the last decade.

It requires companies like Meta and Google to make fair commercial deals with Canadian outlets for the news and information — estimated in a report to parliament to be worth US$250 million per year — that is shared on their platforms or face binding arbitration.

But Meta has said the bill is flawed and insisted that news outlets share content on its Facebook and Instagram platforms to attract readers, benefiting them and not the Silicon Valley firm.

Profits over safety

Canadian Prime Minister Justin Trudeau this week assailed Meta, telling reporters it was “inconceivable that a company like Facebook is choosing to put corporate profits ahead of (safety)… and keeping Canadians informed about things like wildfires.”

Almost 80% of all online advertising revenues in Canada go to Meta and Google, which has expressed its own reservations about the new law.

Ollie Williams, director of Cabin Radio in the far north, called Meta’s move to block news sharing “stupid and dangerous.”

He suggested in an interview with AFP that “Meta could lift the ban temporarily in the interests of preservation of life and suffer no financial penalty because the legislation has not taken effect yet.”

Nicolas Servel of Radio Taiga, a French-language station in Yellowknife, noted that some people had found ways of circumventing Meta’s block.

They “found other ways to share” information, he said, such as taking screen shots of news articles and sharing them from personal — rather than corporate — social media accounts.

‘Life and death’

Several large newspapers in Canada such as The Globe and Mail and the Toronto Star have launched campaigns to try to attract readers directly to their sites.

But for many smaller news outlets, workarounds have proven challenging as social media platforms have become entrenched.

Public broadcaster CBC in a letter this week pressed Meta to reverse course.

“Time is of the essence,” wrote CBC president Catherine Tait. “I urge you to consider taking the much-needed humanitarian action and immediately lift your ban on vital Canadian news and information to communities dealing with this wildfire emergency.”

As more than 1,000 wildfires burn across Canada, she said, “The need for reliable, trusted, and up-to-date information can literally be the difference between life and death.”

Meta — which did not respond to AFP requests for comment — rejected CBC’s suggestion. Instead, it urged Canadians to use the “Safety Check” function on Facebook to let others know if they are safe or not.

Patrick White, a professor at the University of Quebec in Montreal, said Meta has shown itself to be a “bad corporate citizen.”

“It’s a matter of public safety,” he said, adding that he remains optimistic Ottawa will eventually reach a deal with Meta and other digital giants that addresses their concerns.

Denmark’s Government Proposes to Outlaw Burning of Religious Texts

A series of public desecrations of the Quran by a group of anti-Islamic activists has sparked angry demonstrations in Muslim countries

In France, Former President Sarkozy’s ‘Gaddafi Money’ Case Is Sent to Trial

The prosecution’s key witness claimed that in late 2006 and early 2007 he handed over 5 million euros to Nicolas Sarkozy, who was then serving as interior minister

Q&A: How Do Europe’s Sweeping Rules for Tech Giants Work?

Google, Facebook, TikTok and other Big Tech companies operating in Europe must comply with one of the most far-reaching efforts to clean up what people see online.

The European Union’s groundbreaking new digital rules took effect Friday for the biggest platforms. The Digital Services Act is part of a suite of tech-focused regulations crafted by the 27-nation bloc, long a global leader in cracking down on tech giants.

The DSA is designed to keep users safe online and stop the spread of harmful content that’s either illegal or violates a platform’s terms of service, such as promotion of genocide or anorexia. It also looks to protect Europeans’ fundamental rights like privacy and free speech.

Some online platforms, which could face billions in fines if they don’t comply, already have made changes.

Here’s a look at what has changed:

Which platforms are affected? 

So far, 19. They include eight social media platforms: Facebook; TikTok; X, formerly known as Twitter; YouTube; Instagram; LinkedIn; Pinterest; and Snapchat.

There are five online marketplaces: Amazon, Booking.com, Google Shopping, China’s AliExpress (owned by Alibaba) and Germany’s Zalando.

Mobile app stores Google Play and Apple’s App Store are subject to the new rules, as are Google’s Search and Microsoft’s Bing search engines.

Google Maps and Wikipedia round out the list. 

What about other online companies?

The EU’s list is based on numbers submitted by the platforms. Those with 45 million or more users — or 10% of the EU’s population — face the DSA’s highest level of regulation. 

Brussels insiders, however, have pointed to some notable omissions, like eBay, Airbnb, Netflix and even PornHub. The list isn’t definitive, and it’s possible other platforms may be added later. 

Any business providing digital services to Europeans will eventually have to comply with the DSA. Those smaller companies will face fewer obligations than the biggest platforms, however, and have another six months before they must fall in line.

What’s changing?

Platforms have rolled out new ways for European users to flag illegal online content and dodgy products, which companies will be obligated to take down quickly. 

The DSA “will have a significant impact on the experiences Europeans have when they open their phones or fire up their laptops,” Nick Clegg, Meta’s president for global affairs, said in a blog post. 

Facebook’s and Instagram’s existing tools to report content will be easier to access. Amazon opened a new channel for reporting suspect goods. 

TikTok gave users an extra option for flagging videos, such as for hate speech and harassment or for frauds and scams, with reports reviewed by an additional team of experts, according to the app, which is owned by Chinese parent company ByteDance.

Google is offering more “visibility” into content moderation decisions and different ways for users to contact the company. It didn’t offer specifics. Under the DSA, Google and other platforms have to provide more information about why posts are taken down.

Facebook, Instagram, TikTok and Snapchat also are giving people the option to turn off automated systems that recommend videos and posts based on their profiles. Such systems have been blamed for leading social media users to increasingly extreme posts. 

The DSA also prohibits targeting vulnerable categories of people, including children, with ads. Platforms like Snapchat and TikTok will stop allowing teen users to be targeted by ads based on their online activities. 

Google will provide more information about targeted ads shown to people in the EU and give researchers more access to data on how its products work. 

Is there pushback?

Zalando, a German online fashion retailer, has filed a legal challenge over its inclusion on the DSA’s list of the largest online platforms, arguing it’s being treated unfairly. 

Nevertheless, Zalando is launching content-flagging systems for its website, even though there’s little risk of illegal material showing up among its highly curated collection of clothes, bags and shoes. 

Amazon has filed a similar case with a top EU court.

What if companies don’t follow the rules?

Officials have warned tech companies that violations could bring fines worth up to 6% of their global revenue — which could amount to billions — or even a ban from the EU. 

“The real test begins now,” said European Commissioner Thierry Breton, who oversees digital policy. He vowed to “thoroughly enforce the DSA and fully use our new powers to investigate and sanction platforms where warranted.” 

But don’t expect penalties to come right away for individual breaches, such as failing to take down a specific video promoting hate speech. 

Instead, the DSA is more about whether tech companies have the right processes in place to reduce the harm that their algorithm-based recommendation systems can inflict on users. Essentially, they’ll have to let the European Commission, the EU’s executive arm and top digital enforcer, look under the hood to see how their algorithms work. 

EU officials “are concerned with user behavior on the one hand, like bullying and spreading illegal content, but they’re also concerned about the way that platforms work and how they contribute to the negative effects,” said Sally Broughton Micova, an associate professor at the University of East Anglia. 

That includes looking at how the platforms work with digital advertising systems, which could be used to profile users for harmful material like disinformation, or how their livestreaming systems function, which could be used to instantly spread terrorist content, said Broughton Micova, who’s also academic co-director at the Centre on Regulation in Europe, a Brussels think tank. 

Big platforms have to identify and assess potential systemic risks and whether they’re doing enough to reduce them. These assessments are due by the end of August and then they will be independently audited. 

The audits are expected to be the main tool to verify compliance — though the EU’s plan has faced criticism for lacking details that leave it unclear how the process will work. 

What about the rest of the world? 

Europe’s changes could have global impact. Wikipedia is tweaking some policies and modifying its terms of use to provide more information on “problematic users and content.” Those alterations won’t be limited to Europe and “will be implemented globally,” said the nonprofit Wikimedia Foundation, which hosts the community-powered encyclopedia. 

Snapchat said its new reporting and appeal process for flagging illegal content or accounts that break its rules will be rolled out first in the EU and then globally in the coming months. 

It’s going to be hard for tech companies to limit DSA-related changes, said Broughton Micova, adding that digital ad networks aren’t isolated to Europe and that social media influencers can have global reach.

Electric Vehicle ‘Fast Chargers’ Seen as Game Changer

With White House funding to help get more electric cars on the road, some states are creating local rules to get top technologies into their charging stations. Deana Mitchell has the story.

Russia Names New ‘Foreign Agents,’ Including Journalists Dymarsky and Eggert

The ministry explained their addition to the registry by claiming that they had allegedly spread “unreliable information” about decisions of the Russian authorities and the actions of the Russian army

Niger’s Junta Orders French Ambassador to Leave the Country, Foreign Ministry in Niamey Says

On July 26–27, Niger’s government was overthrown by a military junta led by Abdourahamane Tchiani

Lukashenko Says He Was Not Obliged to Guarantee Prigozhin’s Safety

“I told him: ‘Zhenya, do you realize that you will destroy people and die yourself?’”

Kremlin: Putin Will Not Travel to India for G20 Summit

According to Peskov, such trips are not currently on Vladimir Putin’s agenda

Finland to Send Ukraine a Military Aid Package Worth 94 Million Euros

The total value of Finland’s assistance to Ukraine will amount to about 1.3 billion euros

Kremlin: There Is ‘an Understanding’ That a Putin-Erdogan Meeting Will Take Place Soon

Peskov did not specify when or where the meeting is planned

British Intelligence Comments on Reports of Prigozhin’s Death

“There is not yet definitive proof that Prigozhin was on board, and he is known for taking exceptional security precautions. However, it is highly likely that he is indeed dead”

Moscow Airports Again Halt Operations After Reports of Explosions in Several Russian Regions

The Telegram channel Baza reported that a missile was shot down near the Russian military airfield Shaykovka

Kadyrov Says He Asked Prigozhin to ‘Set Aside Personal Ambitions’

“Lately he either did not see, or did not want to see, the full picture of what is happening in the country,” Kadyrov said of Prigozhin

US Sues SpaceX for Discriminating Against Refugees, Asylum-Seekers

The U.S. Justice Department is suing Elon Musk’s SpaceX for refusing to hire refugees and asylum-seekers at the rocket company.

In a lawsuit filed on Thursday, the Justice Department said SpaceX routinely discriminated against these job applicants between 2018 and 2022, in violation of U.S. immigration laws.

The lawsuit says that Musk and other SpaceX officials falsely claimed the company was allowed to hire only U.S. citizens and permanent residents due to export control laws that regulate the transfer of sensitive technology.

“U.S. law requires at least a green card to be hired at SpaceX, as rockets are advanced weapons technology,” Musk wrote in a June 16, 2020, tweet cited in the lawsuit.

In fact, U.S. export control laws impose no such restrictions, according to the Justice Department.

Those laws limit the transfer of sensitive technology to foreign entities, but they do not prevent high-tech companies such as SpaceX from hiring job applicants who have been granted refugee or asylum status in the U.S. (Foreign nationals, however, need a special permit.)

“Under these laws, companies like SpaceX can hire asylees and refugees for the same positions they would hire U.S. citizens and lawful permanent residents,” the Department said in a statement. “And once hired, asylees and refugees can access export-controlled information and materials without additional government approval, just like U.S. citizens and lawful permanent residents.”

The company did not respond to a VOA request for comment on the lawsuit and whether it had changed its hiring policy.

Recruiters discouraged refugees, say investigators

The Justice Department’s civil rights division launched an investigation into SpaceX in 2020 after learning about the company’s alleged discriminatory hiring practices.

The inquiry discovered that SpaceX “failed to fairly consider or hire asylees and refugees because of their citizenship status and imposed what amounted to a ban on their hire regardless of their qualification, in violation of federal law,” Assistant Attorney General Kristen Clarke said in a statement.

“Our investigation also found that SpaceX recruiters and high-level officials took actions that actively discouraged asylees and refugees from seeking work opportunities at the company,” Clarke said.

According to data SpaceX provided to the Justice Department, out of more than 10,000 hires between September 2018 and May 2022, SpaceX hired only one person described as an asylee on his application.

The company hired the applicant about four months after the Justice Department notified it about its investigation, according to the lawsuit.

No refugees were hired during this period.

“Put differently, SpaceX’s own hiring records show that SpaceX repeatedly rejected applicants who identified as asylees or refugees because it believed that they were ineligible to be hired due to” export regulations, the lawsuit says.

On one occasion, a recruiter turned down an asylee “who had more than nine years of relevant engineering experience and had graduated from Georgia Tech University,” the lawsuit says.

Suit seeks penalties, change

SpaceX, based in Hawthorne, California, designs, manufactures and launches advanced rockets and spacecraft.

The Justice Department’s lawsuit asks an administrative judge to order SpaceX to “cease and desist” its alleged hiring practices and seeks civil penalties and policy changes.

US Intelligence Still Determining Cause of Prigozhin Plane Crash, CNN Reports

On the evening of August 23, a business jet that media reports said belonged to Yevgeny Prigozhin crashed in Russia’s Tver region. All 10 people on board were killed

At Warsaw Concert, Poles Thanked for Support and Ukrainian Soldiers for Their Defense (Photos)

A large screen onstage displayed photos of those who gave their lives for an independent Ukraine, both Ukrainian soldiers and Poles who volunteered to fight on Ukraine’s side

‘Where Did You Come From, F…?’ Lavrov Responds to Question About Impact of Prigozhin’s Death on Russian Affairs in Africa

The correspondent put the question to Lavrov as the minister and his entourage walked down a corridor during the BRICS summit in Johannesburg

US Calls on Russia to ‘Immediately’ Release Journalist Gershkovich

The FSB alleges that Gershkovich, acting on US orders, was collecting classified information about an enterprise in the military-industrial complex

Six Ukrainians Trying to Leave Russia Held for Nearly a Week at the Russia-Georgia Border

Nearly all of these Ukrainians are former prisoners who were forcibly taken from the Russian-occupied part of the Kherson region

Wagner Camp in Belarus Is Being Actively Dismantled, Satellite Image Shows

As far as the August 23, 2023, image allows one to see, about 101 of the 273 residential tents have been dismantled

AI Firms Under Fire for Allegedly Infringing on Copyrights

New artificial intelligence tools that write human-like prose and create stunning images have taken the world by storm. But these awe-inspiring technologies are not creating something out of nothing; they’re trained on lots and lots of data, some of which come from works under copyright protection.

Now, the writers, artists and others who own the rights to the material used to teach ChatGPT and other generative AI tools want to stop what they see as blatant copyright infringement of mass proportions.

With billions of dollars at stake, U.S. courts will most likely have to sort out who owns what, using the 1976 Copyright Act, the same law that has determined who owns much of the content published on the internet.

U.S. copyright law seeks to strike a balance between protecting the rights of content creators and fostering creativity and innovation. Among other things, the law gives content creators the exclusive right to reproduce their original work and to prepare derivative works.

But it also provides for an exception. Known as “fair use,” it permits the use of copyrighted material without the copyright holder’s permission for content such as criticism, comment, news reporting, teaching and research.

On the one hand, “we want to allow people who have currently invested time, money, creativity to reap the rewards of what they have done,” said Sean O’Connor, a professor of law at George Mason University. “On the other hand, we don’t want to give them such strong rights that we inhibit the next generation of innovation.”

Is AI ‘scraping’ fair use?

The development of generative AI tools is testing the limits of “fair use,” pitting content creators against technology companies, with the outcome of the dispute promising wide-ranging implications for innovation and society at large.

In the 10 months since ChatGPT’s groundbreaking launch, AI companies have faced a rapidly increasing number of lawsuits over content used to train generative AI tools.  The plaintiffs are seeking damages and want the courts to end the alleged infringement.

In January, three visual artists filed a proposed class-action lawsuit against Stability AI Ltd. and two others in San Francisco, alleging that Stability “scraped” more than 5 billion images from the internet to train its popular image generator Stable Diffusion, without the consent of copyright holders.

Stable Diffusion is a “21st-century collage tool” that “remixes the copyrighted works of millions of artists whose work was used as training data,” according to the lawsuit.

In February, stock photo company Getty Images filed its own lawsuit against Stability AI in both the United States and Britain, saying the company copied more than 12 million photos from Getty’s collection without permission or compensation.

In June, two U.S.-based authors sued OpenAI, the creator of ChatGPT, claiming the company’s training data included nearly 300,000 books pulled from illegal “shadow library” websites that offer copyrighted books.

“A large language model’s output is entirely and uniquely reliant on the material in its training dataset,” the lawsuit says.

Last month, American comedian and author Sarah Silverman and two other writers sued OpenAI and Meta, the parent company of Facebook, over the same claims, saying their chatbots were trained on books that had been illegally acquired.

The lawsuit against OpenAI includes what it describes as “very accurate summaries” of the authors’ books generated by ChatGPT, suggesting the company illegally “copied” and then used them to train the chatbot.

The artificial intelligence companies have rejected the allegations and asked the courts to dismiss the lawsuits.

In a court filing in April, Stability AI, research lab Midjourney and online art gallery DeviantArt wrote that visual artists who sue “fail to identify a single allegedly infringing output image, let alone one that is substantially similar to any of their copyrighted works.”

For its part, OpenAI has defended its use of copyrighted material as “fair use,” saying it pulled the works from publicly available datasets on the internet.

The cases are slowly making their way through the courts. It is too early to say how judges will decide.

Last month, a federal judge in San Francisco said he was inclined to toss out most of a lawsuit brought by the three artists against Stability AI but indicated that the claim of direct infringement may continue.

“The big question is fair use,” said Robert Brauneis, a law professor and co-director of the Intellectual Property Program at George Washington University. “I would not be surprised if some of the courts came out in different ways, that some of the cases said, ‘Yes, fair use.’ And others said, ‘No.’”

If the courts are split, the question could eventually go to the Supreme Court, Brauneis said.

Assessing copyright claims

Training generative AI tools to create new works raises two legal questions: Is the data use authorized? And is the new work it creates “derivative” or “transformative”?

The answer is not clear-cut, O’Connor said.

“On the one hand, what the supporters of the generative AI models are saying is that they are acting not much differently than we as humans would do,” he said. “When we read books, watch movies, listen to music, and if we are talented, then we use those to train ourselves as models.

“The counterargument is that … it is categorically different from what humans do when they learn how to become creative themselves.”

While artificial intelligence companies claim their use of the data is fair, O’Connor said they still have to prove that the use was authorized.

“I think that’s a very close call, and I think they may lose on that,” he said.

On the other hand, the AI models can probably avoid liability for generating content that “seems sort of the style of a current author” but is not the same.

“That claim is probably not going to succeed,” O’Connor said. “It will be seen as just a different work.”

But Brauneis said content creators have a strong claim: The AI-generated output will likely compete with the original work.

Imagine you’re a magazine editor who wants an illustration to accompany an article about a particular bird, Brauneis suggested. You could do one of two things: Commission an artist or ask a generative AI tool like Stable Diffusion to create it for you. After a few attempts with the latter, you’ll probably get an image that you can use.

“One of the most important questions to ask about in fair use is, ‘Is this use a substitute, or is it competing with the work of art that is being copied?’” Brauneis said. “And the answer here may be yes. And if it is [competing], that really weighs strongly against fair use.”

This is not the first time that technology companies have been sued over their use of copyrighted material.

In 2005, the Authors Guild filed a class-action lawsuit against Google and three university libraries over Google’s digital books project, alleging “massive copyright infringement.”

In 2015, an appeals court ruled that the project, by then renamed Google Books, was protected under the fair use doctrine.

In 2007, Viacom sued both Google and YouTube for allowing users to upload and view copyrighted material owned by Viacom, including complete episodes of TV shows. The case was later settled out of court.

For Brauneis, the current “Wild West era of creating AI models” recalls YouTube’s freewheeling early days.

“They just wanted to get viewers, and they were willing to take a legal risk to do that,” Brauneis said. “That’s not the way YouTube operates now. YouTube has all sorts of precautions to identify copyrighted content that has not been permitted to be placed on YouTube and then to take it down.”

Artificial intelligence companies may make a similar pivot.

They may have justified using copyrighted material to test out their technology. But now that their models are working, they “may be willing to sit down and think about how to license content,” Brauneis said.

US Seeks to Extend Science, Tech Agreement With China for 6 Months

The U.S. State Department, in coordination with other agencies from President Joe Biden’s administration, is seeking a six-month extension of the U.S.-China Science and Technology Agreement (STA) that is due to expire on August 27.

The short-term extension comes as several Republican congressional members voiced concerns that China has previously leveraged the agreement to advance its military objectives and may continue to do so.

The State Department said the brief extension will keep the STA in force while the United States negotiates with China to amend and strengthen the agreement. It does not commit the U.S. to a longer-term extension.

“We are clear-eyed to the challenges posed by the PRC’s national strategies on science and technology, Beijing’s actions in this space, and the threat they pose to U.S. national security and intellectual property, and are dedicated to protecting the interests of the American people,” a State Department spokesperson said Wednesday.

But congressional critics worry that research partnerships organized under the STA could have developed technologies that could later be used against the United States.

“In 2018, the National Oceanic and Atmospheric Administration (NOAA) organized a project with China’s Meteorological Administration — under the STA — to launch instrumented balloons to study the atmosphere,” said Republican Representatives Mike Gallagher, Elise Stefanik and others in a June 27 letter to U.S. Secretary of State Antony Blinken.

“As you know, a few years later, the PRC used similar balloon technology to surveil U.S. military sites on U.S. territory — a clear violation of our sovereignty.”

The STA was originally signed in 1979 by then-U.S. President Jimmy Carter and then-PRC leader Deng Xiaoping. Under the agreement, the two countries cooperate in fields including agriculture, energy, space, health, environment, earth sciences and engineering, as well as educational and scholarly exchanges.

The agreement has been renewed roughly every five years since its inception. 

The most recent extension was in 2018. 

How AI Can ‘Resurrect’ People

In 2023, a new way to use AI has come online. Some companies are using the tool to make lifelike avatars of people, even those who have died.  Maxim Moskalkov reports. Camera: Andrey Degtyarev.

Ukraine’s Elections Must Take Place Next Year, Even in Wartime, US Senator Says

Graham: “I want this country to have free and fair elections, even while it is under attack”

It Is Official: Prigozhin and His Associate Utkin Are Dead

The deaths of the Wagner PMC leadership were confirmed after the crash of the plane they were flying on

Railway Bridge Collapse in India Kills 22 People

According to the Northeast Frontier Railway, 40 workers were at the site
