Asked whether Shaye considers Crimea to belong to Ukraine, the ambassador said that "it depends on how you perceive the problem," adding that it is supposedly not so simple.
…
The Biden administration announced more than $80 million in funding Thursday in a push to produce more solar panels in the U.S., make solar energy available to more people, and pursue superior alternatives to the ubiquitous sparkly panels made with silicon.
The initiative, spearheaded by the U.S. Department of Energy (DOE) and known as community solar, encompasses a variety of arrangements under which renters and people who don’t control their rooftops can still get their electricity from solar power. Two weeks ago, Vice President Kamala Harris announced what the administration said was the largest community solar effort ever in the United States.
Now the DOE is set to spend $52 million on 19 solar projects across a dozen states, including $10 million from the infrastructure law, as well as $30 million on technologies that will help integrate solar electricity into the grid.
The DOE also selected 25 teams to participate in a $10 million competition designed to fast-track the efforts of solar developers working on community solar projects.
The Inflation Reduction Act already offers incentives, such as renewable energy tax credits, to build large solar generation projects. But Ali Zaidi, White House national climate adviser, said the new money focuses on meeting the nation’s climate goals in a way that benefits more communities.
“It’s lifting up our workers and our communities. And that’s, I think, what really excites us about this work,” Zaidi said. “It’s a chance not just to tackle the climate crisis, but to bring economic opportunity to every zip code of America.”
The investments will help people save on their electricity bills and make the electricity grid more reliable, secure, and resilient in the face of a changing climate, said Becca Jones-Albertus, director of the energy department’s Solar Energy Technologies Office.
Jones-Albertus said she’s particularly excited about the support for community solar projects, since half of Americans don’t live in a situation where they can buy their own solar panels and put them on the roof.
Michael Jung, executive director of the ICF Climate Center, agreed. “Community solar can help address equity concerns, as most current rooftop solar panels benefit owners of single-family homes,” he said.
In typical community solar projects, households can invest in or subscribe to part of a larger solar array offsite. “What we’re doing here is trying to unlock the community solar market,” Jones-Albertus said.
The U.S. has 5.3 gigawatts of installed community solar capacity currently, according to the latest estimates. The goal is that by 2025, five million households will have access to it — about three times as many as today — saving $1 billion on their electricity bills, according to Jones-Albertus.
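A quick back-of-the-envelope sketch, using only the figures quoted above, illustrates what those targets imply; it assumes the "three times" comparison applies to household access, and the article does not specify a time frame for the savings:

```python
# Rough figures implied by the stated 2025 community solar goal.
# All inputs come from the article; the derived numbers are simple estimates.

households_2025_goal = 5_000_000     # households with access to community solar by 2025
growth_factor = 3                    # "about three times as many as today"
total_bill_savings = 1_000_000_000   # $1 billion in electricity bill savings

households_today = households_2025_goal / growth_factor
savings_per_household = total_bill_savings / households_2025_goal

print(f"Implied households with access today: ~{households_today:,.0f}")         # ~1,666,667
print(f"Implied average savings per household: ~${savings_per_household:,.0f}")  # ~$200
```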
The new funding also highlights investment in a next generation of solar technologies, intended to wring more electricity out of the same number of solar panels. Currently only about 20% of the sun’s energy is converted to electricity in the crystalline silicon solar cells that most panels are made of. There has long been hope for higher efficiency, and Thursday’s announcement puts some money toward developing two alternatives: perovskite and cadmium telluride (CdTe) solar cells. Zaidi said this will allow the U.S. to be “the innovation engine that tackles the climate crisis.”
Joshua Rhodes, a scientist at the University of Texas at Austin, said the investment in perovskites is good news. They can be produced more cheaply than silicon and are far more tolerant of defects, he said. They can also be built into textured and curved surfaces, which opens up more applications than traditional rigid panels. Most silicon is produced in China and Russia, Rhodes pointed out.
Cadmium telluride solar cells can be made quickly and at low cost, but further research is needed to improve how efficiently the material converts sunlight into electricity.
Cadmium is also toxic, and people shouldn’t be exposed to it. Jones-Albertus said that in cadmium telluride solar technology the compound is encapsulated in glass and additional protective layers.
The new funds will also help recycle solar panels and reuse rare earth elements and materials. “One of the most important ways we can make sure CdTe remains in a safe compound form is ensuring that all solar panels made in the U.S. can be reused or recycled at the end of their life cycle,” Jones-Albertus explained.
Recycling solar panels also reduces the need for mining, which damages landscapes and uses a lot of energy, in part to operate the heavy machinery. Eight of the projects in Thursday’s announcement focus on improving solar panel recycling, for a total of about $10 million.
Clean energy is a fit for every state in the country, the administration said. One solar project in Shungnak, Alaska, eliminated the need to keep generating electricity by burning diesel fuel, a method sometimes used in remote communities that is unhealthy for people and contributes to climate change.
“Alaska is not a place that folks often think of when they think about solar, but this energy can be an economic and affordable resource in all parts of the country,” said Jones-Albertus.
…
A viral AI-generated song imitating Drake and The Weeknd was pulled from streaming services this week, but did it breach copyright as claimed by record label Universal?
Created by someone called @ghostwriter, Heart On My Sleeve racked up millions of listens before Universal Music Group asked for its removal from Spotify, Apple Music and other platforms.
However, Andres Guadamuz, who teaches intellectual property law at Britain’s University of Sussex, is not convinced that the song breached copyright.
As similar cases look set to multiply — with an uncanny AI replication of Liam Gallagher from Oasis causing buzz — he spoke to AFP about some of the issues being raised.
Did the song breach copyright?
The underlying music on Heart On My Sleeve was new; only the sound of the voice was familiar, “and you can’t copyright the sound of someone’s voice,” Guadamuz said.
The furor around AI impersonators may lead to copyright being expanded to include voice, rather than just melody, lyrics and other created elements, “but that would be problematic,” Guadamuz added.
“What you’re protecting with copyright is the expression of an idea, and voice isn’t really that,” he said.
He said Universal probably claimed copyright infringement because it is the simplest route to removing content, with established procedures in place with streaming platforms.
Were other rights breached?
An AI-generated impersonator may be breaching other laws.
If an artist has a distinctive voice or image, this is potentially protected under “publicity rights” in the United States or similar image rights in other countries.
Bette Midler won a case against Ford in 1988 for using an impersonator of her in an ad. Tom Waits won a similar case in 1993 against the potato chip company Frito-Lay.
The problem, said Guadamuz, is that enforcement of these rights is “very hit and miss” and taken much more seriously in some countries than others.
And streaming platforms currently lack straightforward mechanisms for removing content seen as breaching image rights.
What comes next?
The big upcoming legal fight is over how AI programs are trained.
It may be argued that inputting existing Drake and The Weeknd songs to train an AI program is a breach of copyright, but Guadamuz said the issue is far from settled.
“You need to copy the music in order to train the AI and so that unauthorized copying could potentially be copyright infringement,” he said.
“But defendants will say it’s fair use. They are using it to train a machine, teaching it to listen to music, and then removing the copies,” he said. “Ultimately, we will have to wait and see for the case law to be decided.”
But it is almost certainly too late to stem the flood.
“Bands are going to have to decide whether they want to pursue this in court, and copyright cases are expensive,” said Guadamuz.
“Some artists may lean into the technology and start using it themselves, especially if they start losing their voice.”
…
Competition between the U.S. and China in artificial intelligence has expanded into a race to design and implement comprehensive AI regulations.
The efforts to come up with rules to ensure AI’s trustworthiness, safety and transparency come at a time when governments around the world are exploring the impact of the technology on national security and education.
ChatGPT, a chatbot that mimics human conversation, has received massive attention since its debut in November. Its ability to give sophisticated answers to complex questions with a language fluency comparable to that of humans has caught the world by surprise. Yet its many flaws, including its ostensibly coherent responses laden with misleading information and apparent bias, have prompted tech leaders in the U.S. to sound the alarm.
“What happens when something vastly smarter than the smartest person comes along in silicon form? It’s very difficult to predict what will happen in that circumstance,” said Tesla Chief Executive Officer Elon Musk in an interview with Fox News. He warned that artificial intelligence could lead to “civilization destruction” without regulations in place.
Google CEO Sundar Pichai echoed that sentiment. “Over time there has to be regulation. There have to be consequences for creating deep fake videos which cause harm to society,” Pichai said in an interview with CBS’s “60 Minutes” program.
Jessica Brandt, policy director for the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution, told VOA Mandarin, “Business leaders understand that regulators will be watching this space closely, and they have an interest in shaping the approaches regulators will take.”
US grapples with regulations
AI regulation is still nascent in the U.S. Last year, the White House released voluntary guidance through a Blueprint for an AI Bill of Rights to help ensure users’ rights are protected as technology companies design and develop AI systems.
At a meeting of the President’s Council of Advisors on Science and Technology this month, President Joe Biden expressed concern about the potential dangers associated with AI and underscored that companies had a responsibility to ensure their products were safe before making them public.
On April 11, the National Telecommunications and Information Administration, a Commerce Department agency that advises the White House on telecommunications and information policy, began to seek comment and public input with the aim of crafting a report on AI accountability.
The U.S. government is trying to find the right balance to regulate the industry without stifling innovation “in part because the U.S. having innovative leadership globally is a selling point for the United States’ hard and soft power,” said Johanna Costigan, a junior fellow at the Asia Society Policy Institute’s Center for China Analysis.
Brandt, with Brookings, said, “The challenge for liberal democracies is to ensure that AI is developed and deployed responsibly, while also supporting a vibrant innovation ecosystem that can attract talent and investment.”
Meanwhile, other Western countries have also started to work on regulating the emerging technology.
The U.K. government published its AI regulatory framework in March. Also last month, Italy temporarily blocked ChatGPT in the wake of a data breach, and the German commissioner for data protection said his country could follow suit.
The European Union stated it’s pushing for an AI strategy aimed at making Europe a world-class hub for AI that ensures AI is human-centric and trustworthy, and it hopes to lead the world in AI standards.
Cyber regulations in China
In contrast to the U.S., the Chinese government has already implemented regulations aimed at tech sectors related to AI. In the past few years, Beijing has introduced several major data protection laws to limit the power of tech companies and to protect consumers.
The Cybersecurity Law enacted in 2017 requires that data be stored within China and that operators submit to government-conducted security checks. The Data Security Law enacted in 2021 sets out a framework for classifying and protecting data based on its importance to national security, while the Personal Information Protection Law passed the same year gives Chinese consumers the right to access, correct and delete their personal data gathered by businesses. Costigan, with the Asia Society, said these laws have laid the groundwork for future tech regulations.
In March 2022, China began to implement a regulation that governs the way technology companies can use recommendation algorithms. The Cyberspace Administration of China (CAC) now supervises the process of using big data to analyze user preferences and companies’ ability to push information to users.
On April 11, the CAC unveiled a draft for managing generative artificial intelligence services similar to ChatGPT, in an effort to mitigate the dangers of the new technology.
Costigan said the goal of the proposed generative AI regulation could be seen in Article 4 of the draft, which states that content generated by future AI products must reflect the country’s “core socialist values” and not encourage subversion of state power.
“Maintaining social stability is a key consideration,” she said. “The new draft regulation does some good and is unambiguously in line with [President] Xi Jinping’s desire to ensure that individuals, companies or organizations cannot use emerging AI applications to challenge his rule.”
Michael Caster, the Asia digital program manager at Article 19, a London-based rights organization, told VOA, “The language, especially at Article 4, is clearly about maintaining the state’s power of censorship and surveillance.
“All global policymakers should be clearly aware that while China may be attempting to set standards on emerging technology, their approach to legislation and regulation has always been to preserve the power of the party.”
The future of cyber regulations
As strategies for cyber and AI regulations evolve, how they develop may largely depend on each country’s way of governance and its reasons for creating standards. Analysts say there will also be intrinsic hurdles to reaching consensus.
“Ethical principles can be hard to implement consistently, since context matters and there are countless potential scenarios at play,” Brandt told VOA. “They can be hard to enforce, too. Who would take on that role? How? And of course, before you can implement or enforce a set of principles, you need broad agreement on what they are.”
Observers said the international community would face challenges as it creates standards aimed at making AI technology ethical and safe.
…
U.S. homeland security officials are launching what they describe as two urgent initiatives to combat growing threats from China and expanding dangers from ever more capable, and potentially malicious, artificial intelligence.
Homeland Security Secretary Alejandro Mayorkas announced Friday that his department was starting a “90-day sprint” to confront more frequent and intense efforts by China to hurt the United States, while separately establishing an artificial intelligence task force.
“Beijing has the capability and the intent to undermine our interests at home and abroad and is leveraging every instrument of its national power to do so,” Mayorkas warned, addressing the threat from China during a speech at the Council on Foreign Relations in Washington.
The 90-day sprint will “assess how the threats posed by the PRC [People’s Republic of China] will evolve and how we can be best positioned to guard against future manifestations of this threat,” he said.
“One critical area we will assess, for example, involves the defense of our critical infrastructure against PRC or PRC-sponsored attacks designed to disrupt or degrade provision of national critical functions, sow discord and panic, and prevent mobilization of U.S. military capabilities,” Mayorkas added.
Other areas of focus for the sprint include stopping the Chinese government from exploiting U.S. immigration and travel systems to spy on the U.S. government and private entities and to silence critics, as well as looking at ways to disrupt the global fentanyl supply chain.
AI dangers
Mayorkas also said the magnitude of the threat from artificial intelligence, appearing in a growing number of tools from major tech companies, was no less critical.
“We must address the many ways in which artificial intelligence will drastically alter the threat landscape and augment the arsenal of tools we possess to succeed in the face of these threats,” he said.
Mayorkas promised that the Department of Homeland Security “will lead in the responsible use of AI to secure the homeland and in defending against the malicious use of this transformational technology.”
The new task force is set to seek ways to use AI to protect U.S. supply chains and critical infrastructure, counter the flow of fentanyl, and help find and rescue victims of online child sexual exploitation.
The unveiling of the two initiatives came days after lawmakers grilled Mayorkas about what some described as a lackluster and derelict effort under his leadership to secure the U.S. border with Mexico.
“You have not secured our borders, Mr. Secretary, and I believe you’ve done so intentionally,” the chair of the House Homeland Security Committee, Republican Mark Green, told Mayorkas on Wednesday.
Another lawmaker, Republican Marjorie Taylor Greene, went as far as to accuse Mayorkas of lying, though her words were quickly removed from the record.
Mayorkas on Friday said it might be possible to use AI to help with border security, though how exactly it could be deployed for the task was not yet clear.
“We’re at a nascent stage of really deploying AI,” he said. “I think we’re now at the dawn of a new age.”
But Mayorkas cautioned that technologies like AI would do little to slow the number of migrants willing to embark on dangerous journeys to reach U.S. soil.
“Desperation is the greatest catalyst for the migration we are seeing,” he said.
FBI warning
The announcement of Homeland Security’s 90-day sprint to confront growing threats from Beijing followed a warning earlier this week from the FBI about the willingness of China to target dissidents and critics in the U.S., and the arrests of two New York City residents for their involvement in a secret Chinese police station.
China has denied any wrongdoing.
“The Chinese government strictly abides by international law, and fully respects the law enforcement sovereignty of other countries,” Liu Pengyu, the spokesman for the Chinese Embassy in Washington, told VOA in an email earlier this week, accusing the U.S. of seeking “to smear China’s image.”
Top U.S. officials have said they are opening two investigations daily into Chinese economic espionage in the U.S.
“The Chinese government has stolen more of Americans’ personal and corporate data than that of every nation, big or small, combined,” FBI Director Christopher Wray told an audience late last year.
More recently, Wray warned of China’s advances in AI, saying he was “deeply concerned.”
Mayorkas voiced a similar sentiment, pointing to China’s use of investments and technology to establish footholds around the world.
“We are deeply concerned about PRC-owned and -operated infrastructure, elements of infrastructure, and what that control can mean, given that the operator and owner has adverse interests,” Mayorkas said Friday.
“Whether it’s investment in our ports, whether it is investment in partner nations, telecommunications channels and the like, it’s a myriad of threats,” he said.
…
Twitter has removed labels describing global media organizations as government-funded or state-affiliated, a move that comes after the Elon Musk-owned platform started stripping blue verification checkmarks from accounts that don’t pay a monthly fee.
Among those no longer labeled was National Public Radio in the U.S., which announced last week that it would stop using Twitter after its main account was designated state-affiliated media, a term also used to identify media outlets controlled or heavily influenced by authoritarian governments, such as Russia and China.
Twitter later changed the label to “government-funded media,” but NPR — which relies on the government for a tiny fraction of its funding — said it was still misleading.
Canadian Broadcasting Corp. and Swedish public radio made similar decisions to quit tweeting. CBC’s government-funded label vanished Friday, along with the state-affiliated tags on media accounts including Sputnik and RT in Russia and Xinhua in China.
Many of Twitter’s high-profile users on Thursday lost the blue checks that helped verify their identity and distinguish them from impostors.
Twitter had about 300,000 verified users under the original blue-check system — many of them journalists, athletes and public figures. The checks used to mean the account was verified by Twitter to be who it says it is.
High-profile users who lost their blue checks Thursday included Beyoncé, Pope Francis, Oprah Winfrey and former President Donald Trump.
The costs of keeping the marks range from $8 a month for individual web users to a starting price of $1,000 monthly to verify an organization, plus $50 monthly for each affiliate or employee account. Twitter does not verify the identity behind individual paid accounts, as it did with the blue checks doled out during the platform’s pre-Musk administration.
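As a rough illustration of how that pricing structure scales, here is a minimal sketch using the prices quoted above; the 20-account organization is a hypothetical example, not a figure from the article:

```python
# Illustrative cost comparison for Twitter's paid verification tiers,
# based on the prices quoted in the article. The organization size is hypothetical.

INDIVIDUAL_WEB_MONTHLY = 8         # individual web subscription, per month
ORGANIZATION_BASE_MONTHLY = 1_000  # starting price to verify an organization, per month
AFFILIATE_MONTHLY = 50             # each affiliate or employee account, per month

def organization_monthly_cost(affiliate_accounts: int) -> int:
    """Monthly cost for an organization verifying a given number of affiliate accounts."""
    return ORGANIZATION_BASE_MONTHLY + AFFILIATE_MONTHLY * affiliate_accounts

# Example: an organization verifying 20 staff accounts would pay $2,000 a month,
# versus $8 a month for a single individual subscribing on the web.
print(organization_monthly_cost(20))   # 2000
print(INDIVIDUAL_WEB_MONTHLY)          # 8
```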
Celebrity users, from basketball star LeBron James to author Stephen King and Star Trek’s William Shatner, have balked at joining — although on Thursday, all three had blue checks indicating that the account paid for verification.
King, for one, said he hadn’t paid.
“My Twitter account says I’ve subscribed to Twitter Blue. I haven’t. My Twitter account says I’ve given a phone number. I haven’t,” King tweeted Thursday. “Just so you know.”
In a reply to King’s tweet, Musk said “You’re welcome namaste” and in another tweet he said he’s “paying for a few personally.” He later tweeted he was just paying for King, Shatner and James.
Singer Dionne Warwick tweeted earlier in the week that the site’s verification system “is an absolute mess.”
“The way Twitter is going anyone could be me now,” Warwick said. She had earlier vowed not to pay for Twitter Blue, saying the monthly fee “could (and will) be going toward my extra hot lattes.”
On Thursday, Warwick lost her blue check (which is actually a white check mark on a blue background).
For users who still had a blue check Thursday, a popup message indicated that the account “is verified because they are subscribed to Twitter Blue and verified their phone number.” Verifying a phone number simply means that the person has a phone number and they verified that they have access to it — it does not confirm the person’s identity.
It wasn’t just celebrities and journalists who lost their blue checks Thursday. Many government agencies, nonprofits and public-service accounts around the world found themselves no longer verified, raising concerns that Twitter could lose its status as a platform for getting accurate, up-to-date information from authentic sources, including in emergencies.
While Twitter offers gold checks for “verified organizations” and gray checks for government organizations and their affiliates, it’s not clear how the platform doles these out.
The official Twitter account of the New York City government, which earlier had a blue check, tweeted on Thursday that “This is an authentic Twitter account representing the New York City Government. This is the only account for @NYCGov run by New York City government” in an attempt to clear up confusion.
A newly created spoof account with 36 followers (also without a blue check), disagreed: “No, you’re not. THIS account is the only authentic Twitter account representing and run by the New York City Government.”
Soon, another spoof account — purporting to be Pope Francis — weighed in too: “By the authority vested in me, Pope Francis, I declare @NYC_GOVERNMENT the official New York City Government. Peace be with you.”
Fewer than 5% of legacy verified accounts appear to have paid to join Twitter Blue as of Thursday, according to an analysis by Travis Brown, a Berlin-based developer of software for tracking social media.
Musk’s move has riled up some high-profile users and pleased some right-wing figures and Musk fans who thought the marks were unfair. But it is not an obvious money-maker for the social media platform that has long relied on advertising for most of its revenue.
Digital intelligence platform Similarweb analyzed how many people signed up for Twitter Blue on their desktop computers and only detected 116,000 confirmed sign-ups last month, which at $8 or $11 per month does not represent a major revenue stream. The analysis did not count accounts bought via mobile apps.
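For a sense of scale, a rough estimate built only from the numbers reported here (it excludes mobile sign-ups, so actual revenue is somewhat higher) shows why the analysis treats this as a minor revenue stream:

```python
# Rough monthly revenue estimate from the Similarweb desktop sign-up count quoted above.
# Mobile sign-ups were not counted, so this understates the total, but it illustrates the scale.

confirmed_signups = 116_000   # desktop Twitter Blue sign-ups detected last month
web_price = 8                 # dollars per month via the web
app_price = 11                # dollars per month via the iPhone or Android apps

low_estimate = confirmed_signups * web_price
high_estimate = confirmed_signups * app_price

print(f"~${low_estimate:,} to ~${high_estimate:,} per month")   # ~$928,000 to ~$1,276,000
# Small next to the $44 billion Musk paid for the platform, or to its advertising-driven revenue.
```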
After buying San Francisco-based Twitter for $44 billion in October, Musk has been trying to boost the struggling platform’s revenue by pushing more people to pay for a premium subscription. But his move also reflects his assertion that the blue verification marks have become an undeserved or “corrupt” status symbol for elite personalities, news reporters and others granted verification for free by Twitter’s previous leadership.
Twitter began tagging profiles with a blue check mark about 14 years ago. Along with shielding celebrities from impersonators, one of the main reasons for the marks was to give users an extra tool to curb misinformation coming from accounts impersonating others. Most “legacy blue checks,” including the accounts of politicians, activists and people who suddenly find themselves in the news, as well as little-known journalists at small publications around the globe, are not household names.
One of Musk’s first product moves after taking over Twitter was to launch a service granting blue checks to anyone willing to pay $8 a month. But it was quickly inundated by impostor accounts, including those impersonating Nintendo, pharmaceutical company Eli Lilly and Musk’s businesses Tesla and SpaceX, so Twitter had to temporarily suspend the service days after its launch.
The relaunched service costs $8 a month for web users and $11 a month for users of its iPhone or Android apps. Subscribers are supposed to see fewer ads, be able to post longer videos and have their tweets featured more prominently.
…