Electric Vehicle ‘Fast Chargers’ Seen as Game Changer

With White House funding to help get more electric cars on the road, some states are creating local rules to get top technologies into their charging stations. Deana Mitchell has the story.

US Sues SpaceX for Discriminating Against Refugees, Asylum-Seekers

The U.S. Justice Department is suing Elon Musk’s SpaceX for refusing to hire refugees and asylum-seekers at the rocket company.

In a lawsuit filed on Thursday, the Justice Department said SpaceX routinely discriminated against these job applicants between 2018 and 2022, in violation of U.S. immigration laws.

The lawsuit says that Musk and other SpaceX officials falsely claimed the company was allowed to hire only U.S. citizens and permanent residents due to export control laws that regulate the transfer of sensitive technology.

“U.S. law requires at least a green card to be hired at SpaceX, as rockets are advanced weapons technology,” Musk wrote in a June 16, 2020, tweet cited in the lawsuit.

In fact, U.S. export control laws impose no such restrictions, according to the Justice Department.

Those laws limit the transfer of sensitive technology to foreign entities, but they do not prevent high-tech companies such as SpaceX from hiring job applicants who have been granted refugee or asylum status in the U.S. (Foreign nationals, however, need a special permit.)

“Under these laws, companies like SpaceX can hire asylees and refugees for the same positions they would hire U.S. citizens and lawful permanent residents,” the Department said in a statement. “And once hired, asylees and refugees can access export-controlled information and materials without additional government approval, just like U.S. citizens and lawful permanent residents.”

The company did not respond to a VOA request for comment on the lawsuit and whether it had changed its hiring policy.

Recruiters discouraged refugees, say investigators

The Justice Department’s civil rights division launched an investigation into SpaceX in 2020 after learning about the company’s alleged discriminatory hiring practices.

The inquiry discovered that SpaceX “failed to fairly consider or hire asylees and refugees because of their citizenship status and imposed what amounted to a ban on their hire regardless of their qualification, in violation of federal law,” Assistant Attorney General Kristen Clarke said in a statement.

“Our investigation also found that SpaceX recruiters and high-level officials took actions that actively discouraged asylees and refugees from seeking work opportunities at the company,” Clarke said.

According to data SpaceX provided to the Justice Department, out of more than 10,000 hires between September 2018 and May 2022, SpaceX hired only one person described as an asylee on his application.

The company hired the applicant about four months after the Justice Department notified it about its investigation, according to the lawsuit.

No refugees were hired during this period.

“Put differently, SpaceX’s own hiring records show that SpaceX repeatedly rejected applicants who identified as asylees or refugees because it believed that they were ineligible to be hired due to” export regulations, the lawsuit says.

On one occasion, a recruiter turned down an asylee “who had more than nine years of relevant engineering experience and had graduated from Georgia Tech University,” the lawsuit says.

Suit seeks penalties, change

SpaceX, based in Hawthorne, California, designs, manufactures and launches advanced rockets and spacecraft.

The Justice Department’s lawsuit asks an administrative judge to order SpaceX to “cease and desist” its alleged hiring practices and seeks civil penalties and policy changes.

AI Firms Under Fire for Allegedly Infringing on Copyrights

New artificial intelligence tools that write human-like prose and create stunning images have taken the world by storm. But these awe-inspiring technologies are not creating something out of nothing; they’re trained on lots and lots of data, some of which come from works under copyright protection.

Now, the writers, artists and others who own the rights to the material used to teach ChatGPT and other generative AI tools want to stop what they see as blatant copyright infringement of mass proportions.

With billions of dollars at stake, U.S. courts will most likely have to sort out who owns what, using the 1976 Copyright Act, the same law that has determined who owns much of the content published on the internet.

U.S. copyright law seeks to strike a balance between protecting the rights of content creators and fostering creativity and innovation. Among other things, the law gives content creators the exclusive right to reproduce their original work and to prepare derivative works.

But it also provides for an exception. Known as “fair use,” it permits the use of copyrighted material without the copyright holder’s permission for content such as criticism, comment, news reporting, teaching and research.

On the one hand, “we want to allow people who have currently invested time, money, creativity to reap the rewards of what they have done,” said Sean O’Connor, a professor of law at George Mason University. “On the other hand, we don’t want to give them such strong rights that we inhibit the next generation of innovation.”

Is AI ‘scraping’ fair use?

The development of generative AI tools is testing the limits of “fair use,” pitting content creators against technology companies, with the outcome of the dispute promising wide-ranging implications for innovation and society at large.

In the 10 months since ChatGPT’s groundbreaking launch, AI companies have faced a rapidly increasing number of lawsuits over content used to train generative AI tools.  The plaintiffs are seeking damages and want the courts to end the alleged infringement.

In January, three visual artists filed a proposed class-action lawsuit against Stability AI Ltd. and two others in San Francisco, alleging that Stability “scraped” more than 5 billion images from the internet to train its popular image generator Stable Diffusion, without the consent of copyright holders.

Stable Diffusion is a “21st-century collage tool” that “remixes the copyrighted works of millions of artists whose work was used as training data,” according to the lawsuit.

In February, stock photo company Getty Images filed its own lawsuit against Stability AI in both the United States and Britain, saying the company copied more than 12 million photos from Getty’s collection without permission or compensation.

In June, two U.S.-based authors sued OpenAI, the creator of ChatGPT, claiming the company’s training data included nearly 300,000 books pulled from illegal “shadow library” websites that offer copyrighted books.

“A large language model’s output is entirely and uniquely reliant on the material in its training dataset,” the lawsuit says.

Last month, American comedian and author Sarah Silverman and two other writers sued OpenAI and Meta, the parent company of Facebook, over the same claims, saying their chatbots were trained on books that had been illegally acquired.

The lawsuit against OpenAI includes what it describes as “very accurate summaries” of the authors’ books generated by ChatGPT, suggesting the company illegally “copied” and then used them to train the chatbot.

The artificial intelligence companies have rejected the allegations and asked the courts to dismiss the lawsuits.

In a court filing in April, Stability AI, research lab Midjourney and online art gallery DeviantArt wrote that visual artists who sue “fail to identify a single allegedly infringing output image, let alone one that is substantially similar to any of their copyrighted works.”

For its part, OpenAI has defended its use of copyrighted material as “fair use,” saying it pulled the works from publicly available datasets on the internet.

The cases are slowly making their way through the courts. It is too early to say how judges will decide.

Last month, a federal judge in San Francisco said he was inclined to toss out most of a lawsuit brought by the three artists against Stability AI but indicated that the claim of direct infringement may continue.

“The big question is fair use,” said Robert Brauneis, a law professor and co-director of the Intellectual Property Program at George Washington University. “I would not be surprised if some of the courts came out in different ways, that some of the cases said, ‘Yes, fair use.’ And others said, ‘No.’”

If the courts are split, the question could eventually go to the Supreme Court, Brauneis said.

Assessing copyright claims

Training generative AI tools to create new works raises two legal questions: Is the data use authorized? And is the new work it creates “derivative” or “transformative”?

The answer is not clear-cut, O’Connor said.

“On the one hand, what the supporters of the generative AI models are saying is that they are acting not much differently than we as humans would do,” he said. “When we read books, watch movies, listen to music, and if we are talented, then we use those to train ourselves as models.

“The counterargument is that … it is categorically different from what humans do when they learn how to become creative themselves.”

While artificial intelligence companies claim their use of the data is fair, O’Connor said they still have to prove that the use was authorized.

“I think that’s a very close call, and I think they may lose on that,” he said.

On the other hand, the AI models can probably avoid liability for generating content that “seems sort of the style of a current author” but is not the same.

“That claim is probably not going to succeed,” O’Connor said. “It will be seen as just a different work.”

But Brauneis said content creators have a strong claim: The AI-generated output will likely compete with the original work.

Imagine you’re a magazine editor who wants an illustration to accompany an article about a particular bird, Brauneis suggested. You could do one of two things: Commission an artist or ask a generative AI tool like Stable Diffusion to create it for you. After a few attempts with the latter, you’ll probably get an image that you can use.

“One of the most important questions to ask about in fair use is, ‘Is this use a substitute, or is it competing with the work of art that is being copied?’” Brauneis said. “And the answer here may be yes. And if it is [competing], that really weighs strongly against fair use.”

This is not the first time that technology companies have been sued over their use of copyrighted material.

In 2005, the Authors Guild filed a class-action lawsuit against Google over the company’s digital books project, alleging “massive copyright infringement.”

In 2015, an appeals court ruled that the project, by then renamed Google Books, was protected under the fair use doctrine.

In 2007, Viacom sued both Google and YouTube for allowing users to upload and view copyrighted material owned by Viacom, including complete episodes of TV shows. The case was later settled out of court.

For Brauneis, the current “Wild West era of creating AI models” recalls YouTube’s freewheeling early days.

“They just wanted to get viewers, and they were willing to take a legal risk to do that,” Brauneis said. “That’s not the way YouTube operates now. YouTube has all sorts of precautions to identify copyrighted content that has not been permitted to be placed on YouTube and then to take it down.”

Artificial intelligence companies may make a similar pivot.

They may have justified using copyrighted material to test out their technology. But now that their models are working, they “may be willing to sit down and think about how to license content,” Brauneis said.

US Seeks to Extend Science, Tech Agreement With China for 6 Months

The U.S. State Department, in coordination with other agencies from President Joe Biden’s administration, is seeking a six-month extension of the U.S.-China Science and Technology Agreement (STA) that is due to expire on August 27.

The short-term extension comes as several Republican congressional members voiced concerns that China has previously leveraged the agreement to advance its military objectives and may continue to do so.

The State Department said the brief extension will keep the STA in force while the United States negotiates with China to amend and strengthen the agreement. It does not commit the U.S. to a longer-term extension.

“We are clear-eyed to the challenges posed by the PRC’s national strategies on science and technology, Beijing’s actions in this space, and the threat they pose to U.S. national security and intellectual property, and are dedicated to protecting the interests of the American people,” a State Department spokesperson said Wednesday.

But congressional critics worry that research partnerships organized under the STA could have developed technologies that could later be used against the United States.

“In 2018, the National Oceanic and Atmospheric Administration (NOAA) organized a project with China’s Meteorological Administration — under the STA — to launch instrumented balloons to study the atmosphere,” said Republican Representatives Mike Gallagher, Elise Stefanik and others in a June 27 letter to U.S. Secretary of State Antony Blinken.

“As you know, a few years later, the PRC used similar balloon technology to surveil U.S. military sites on U.S. territory — a clear violation of our sovereignty.”

The STA was originally signed in 1979 by then-U.S. President Jimmy Carter and then-PRC leader Deng Xiaoping. Under the agreement, the two countries cooperate in fields including agriculture, energy, space, health, environment, earth sciences and engineering, as well as educational and scholarly exchanges.

The agreement has been renewed roughly every five years since its inception. 

The most recent extension was in 2018. 

How AI Can ‘Resurrect’ People

In 2023, a new way to use AI has come online. Some companies are using the tool to make lifelike avatars of people, even those who have died.  Maxim Moskalkov reports. Camera: Andrey Degtyarev.

Kenyan Court Gives Meta and Sacked Moderators 21 Days to Pursue Settlement  

A Kenyan court has given Facebook’s parent company, Meta, and the content moderators who are suing it for unfair dismissal 21 days to resolve their dispute out of court, a court order showed on Wednesday.

The 184 content moderators are suing Meta and two subcontractors after they say they lost their jobs with one of the firms, Sama, for organizing a union.

The plaintiffs say they were then blacklisted from applying for the same roles at the second firm, Luxembourg-based Majorel, after Facebook switched contractors.

“The parties shall pursue an out of court settlement of this petition through mediation,” said the order by the Employment and Labour Relations Court, which was signed by lawyers for the plaintiffs, Meta, Sama and Majorel.

Kenya’s former chief justice, Willy Mutunga, and Hellen Apiyo, the acting commissioner for labor, will serve as mediators, the order said. If the parties fail to resolve the case within 21 days, the case will proceed before the court, it said.

Meta, Sama and Majorel did not immediately respond to requests for comment.

A judge ruled in April that Meta could be sued by the moderators in Kenya, even though it has no official presence in the east African country.

The case could have implications for how Meta works with content moderators globally. The U.S. social media giant works with thousands of moderators around the world, who review graphic content posted on its platform.

Meta has also been sued in Kenya by a former moderator over accusations of poor working conditions at Sama, and by two Ethiopian researchers and a rights institute, which accuse it of letting violent and hateful posts from Ethiopia flourish on Facebook.

Those cases are ongoing.

Meta said in May 2022, in response to the first case, that it required partners to provide industry-leading conditions. On the Ethiopia case, it said in December that hate speech and incitement to violence were against the rules of Facebook and Instagram.

India Lands Craft on Moon’s Unexplored South Pole

An Indian spacecraft has landed on the moon, becoming the first craft to touch down on the lunar surface’s south pole, the country’s space agency said.

India’s attempt to land on the moon Wednesday came days after Russia’s Luna-25 lander, also headed for the unexplored south pole, crashed into the moon.  

It was India’s second attempt to reach the south pole — four years ago, India’s lander crashed during its final approach.  

India has become the fourth country to achieve what is called a “soft-landing” on the moon – a feat accomplished by the United States, China and the former Soviet Union.  

However, none of those lunar missions landed at the south pole. 

The south polar region, where the terrain is rough and rugged, has never been explored.

The current mission, called Chandrayaan-3, blasted into space on July 14.

Europe’s Sweeping Rules for Tech Giants Are About to Kick In

Google, Facebook, TikTok and other Big Tech companies operating in Europe are facing one of the most far-reaching efforts to clean up what people encounter online.

The first phase of the European Union’s groundbreaking new digital rules will take effect this week. The Digital Services Act is part of a suite of tech-focused regulations crafted by the 27-nation bloc — long a global leader in cracking down on tech giants.

The DSA, which the biggest platforms must start following Friday, is designed to keep users safe online and stop the spread of harmful content that’s either illegal or violates a platform’s terms of service, such as promotion of genocide or anorexia. It also looks to protect Europeans’ fundamental rights like privacy and free speech.

Some online platforms, which could face billions in fines if they don’t comply, have already started making changes.

Here’s a look at what’s happening this week:

Which platforms are affected?

So far, 19. They include eight social media platforms: Facebook, TikTok, Twitter, YouTube, Instagram, LinkedIn, Pinterest and Snapchat.

There are five online marketplaces: Amazon, Booking.com, Google Shopping, China’s Alibaba AliExpress and Germany’s Zalando.

Mobile app stores Google Play and Apple’s App Store are subject, as are Google’s Search and Microsoft’s Bing search engine.

Google Maps and Wikipedia round out the list.

What about other online companies?

The EU’s list is based on numbers submitted by the platforms. Those with 45 million or more users — or 10% of the EU’s population — will face the DSA’s highest level of regulation.

Brussels insiders, however, have pointed to some notable omissions from the EU’s list, like eBay, Airbnb, Netflix and even PornHub. The list isn’t definitive, and it’s possible other platforms may be added later on.

Any business providing digital services to Europeans will eventually have to comply with the DSA. These smaller companies will face fewer obligations than the biggest platforms, however, and have another six months before they must fall in line.

Citing uncertainty over the new rules, Meta Platforms has held off launching its Twitter rival, Threads, in the EU.

What’s changing?

Platforms have started rolling out new ways for European users to flag illegal online content and dodgy products, which companies will be obligated to take down quickly and objectively.

Amazon opened a new channel for reporting suspected illegal products and is providing more information about third-party merchants.

TikTok gave users an “additional reporting option” for content, including advertising, that they believe is illegal. Categories such as hate speech and harassment, suicide and self-harm, and misinformation or frauds and scams will help them pinpoint the problem.

Then, a “new dedicated team of moderators and legal specialists” will determine whether flagged content either violates its policies or is unlawful and should be taken down, according to the app from Chinese parent company ByteDance.

TikTok says the reason for a takedown will be explained to the person who posted the material and the one who flagged it, and decisions can be appealed.

TikTok users can turn off systems that recommend videos based on what a user has previously viewed. Such systems have been blamed for leading social media users to increasingly extreme posts. If personalized recommendations are turned off, TikTok’s feeds will instead suggest videos to European users based on what’s popular in their area and around the world.

The DSA prohibits targeting vulnerable categories of people, including children, with ads.

Snapchat said advertisers won’t be able to use personalization and optimization tools for teens in the EU and U.K. Snapchat users who are 18 and older also would get more transparency and control over ads they see, including “details and insight” on why they’re shown specific ads.

TikTok made similar changes, stopping users 13 to 17 from getting personalized ads “based on their activities on or off TikTok.”

Is there pushback?

Zalando, a German online fashion retailer, has filed a legal challenge over its inclusion on the DSA’s list of the largest online platforms, arguing that it’s being treated unfairly.

Nevertheless, Zalando is launching content flagging systems for its website even though there’s little risk of illegal material showing up among its highly curated collection of clothes, bags and shoes.

The company has supported the DSA, said Aurelie Caulier, Zalando’s head of public affairs for the EU.

“It will bring loads of positive changes” for consumers, she said. But “generally, Zalando doesn’t have systemic risk [that other platforms pose]. So that’s why we don’t think we fit in that category.”

Amazon has filed a similar case with a top EU court.

What happens if companies don’t follow the rules?

Officials have warned tech companies that violations could bring fines worth up to 6% of their global revenue — which could amount to billions — or even a ban from the EU. But don’t expect penalties to come right away for individual breaches, such as failing to take down a specific video promoting hate speech.

Instead, the DSA is more about whether tech companies have the right processes in place to reduce the harm that their algorithm-based recommendation systems can inflict on users. Essentially, they’ll have to let the European Commission, the EU’s executive arm and top digital enforcer, look under the hood to see how their algorithms work.

EU officials “are concerned with user behavior on the one hand, like bullying and spreading illegal content, but they’re also concerned about the way that platforms work and how they contribute to the negative effects,” said Sally Broughton Micova, an associate professor at the University of East Anglia.

That includes looking at how the platforms work with digital advertising systems, which could be used to profile users for harmful material like disinformation, or how their livestreaming systems function, which could be used to instantly spread terrorist content, said Broughton Micova, who’s also academic co-director at the Centre on Regulation in Europe, a Brussels-based think tank.

Under the rules, the biggest platforms will have to identify and assess potential systemic risks and whether they’re doing enough to reduce them. These risk assessments are due by the end of August and then they will be independently audited.

The audits are expected to be the main tool to verify compliance — though the EU’s plan has faced criticism for lacking details that leave it unclear how the process will work.

What about the rest of the world?

Europe’s changes could have global impact. Wikipedia is tweaking some policies and modifying its terms of service to provide more information on “problematic users and content.” Those alterations won’t be limited to Europe, said the nonprofit Wikimedia Foundation, which hosts the community-powered encyclopedia.

“The rules and processes that govern Wikimedia projects worldwide, including any changes in response to the DSA, are as universal as possible. This means that changes to our Terms of Use and Office Actions Policy will be implemented globally,” it said in a statement.

It’s going to be hard for tech companies to limit DSA-related changes, said Broughton Micova, adding that digital ad networks aren’t isolated to Europe and that social media influencers can have global reach.

The regulations are “dealing with multichannel networks that operate globally. So there is going to be a ripple effect once you have kind of mitigations that get taken into place,” she said.

Meta Rolls Out Web Version of Threads 

Meta Platforms on Tuesday launched the web version of its new text-first social media platform Threads, in a bid to retain professional users and gain an edge over rival X, formerly Twitter.

Threads users will now be able to access the microblogging platform by logging in to its website from their computers, the Facebook and Instagram owner said.

The widely anticipated rollout could help Threads gain broader acceptance among power users like brands, company accounts, advertisers and journalists, who can now take advantage of the platform on a bigger screen.

Threads, which crossed 100 million sign-ups for the app within five days of its launch on July 5, saw a decline in its popularity as users returned to the more familiar platform X after the initial rush.

In just over a month, daily active users of the Android version of the Threads app dropped to 10.3 million from a peak of 49.3 million, according to a report, dated August 10, by analytics platform Similarweb.

The company will be adding more functionality to the web experience in the coming weeks, Meta said.

Biden Administration Announces More New Funding for Rural Broadband Infrastructure

The Biden administration on Monday continued its push toward internet-for-all by 2030, announcing about $667 million in new grants and loans to build more broadband infrastructure in the rural U.S.

“With this investment, we’re getting funding to communities in every corner of the country because we believe that no kid should have to sit in the back of a mama’s car in a McDonald’s parking lot in order to do homework,” said Mitch Landrieu, the White House’s infrastructure coordinator, in a call with reporters.

The 37 new recipients represent the fourth round of funding under the program, dubbed ReConnect by the U.S. Department of Agriculture. Another 37 projects received $771.4 million in grants and loans announced in April and June.

The money flowing through federal broadband programs, including what was announced Monday and the $42.5 billion infrastructure program detailed earlier this summer, will lead to a new variation on “the electrification of rural America,” Landrieu said, repeating a common Biden administration refrain.

The largest award went to the Ponderosa Telephone Co. in California, which received more than $42 million to deploy fiber networks in Fresno County. In total, more than 1,200 people, 12 farms and 26 other businesses will benefit from that effort alone, according to USDA.

The telephone cooperatives, counties and telecommunications companies that won the new awards are based in 22 states and the Marshall Islands.

At least half of the households in areas receiving the new funding lack access to internet speeds of 100 megabits per second download and 20 Mbps upload — what the federal government considers “underserved” in broadband terminology. The recipients’ mandate is to build networks that raise those levels to at least 100 Mbps upload and 100 Mbps download speeds for every household, business and farm in their service areas.
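The two speed thresholds described above can be expressed as a short check. This is an illustrative sketch only; the function names are ours, not part of any government program, and speeds are in megabits per second:

```python
def is_underserved(download_mbps: float, upload_mbps: float) -> bool:
    """A location counts as "underserved" if it lacks service of at
    least 100 Mbps download and 20 Mbps upload."""
    return download_mbps < 100 or upload_mbps < 20


def meets_build_target(download_mbps: float, upload_mbps: float) -> bool:
    """Funded networks must deliver at least symmetrical 100/100 Mbps
    to every household, business and farm in the service area."""
    return download_mbps >= 100 and upload_mbps >= 100


# A 50/10 Mbps connection is underserved; a 100/100 build meets the target.
print(is_underserved(50, 10))        # True
print(meets_build_target(100, 100))  # True
```

Note the asymmetry: a 100/20 connection clears the “underserved” bar but falls short of the 100/100 build requirement the grants impose.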

Agriculture Secretary Tom Vilsack said the investments could bring new economic opportunities to farmers, allow people without close access to medical care to see specialist doctors through telemedicine and increase academic offerings, including Advanced Placement courses in high schools.

“The fact that this administration understands and appreciates the need for continued investment in rural America to create more opportunity is something that I’m really excited about,” Vilsack said on the media call.  

Meta to Soon Launch Web Version of Threads in Race with X for Users

Meta Platforms is set to roll out the web version of its new text-first social media platform Threads, hoping to gain an edge over X, formerly Twitter, after the initial surge in users waned.

The widely anticipated web version will make Threads more useful for power users like brands, company accounts, advertisers and journalists.

Meta did not give a date for the launch, but Instagram head Adam Mosseri said it could happen soon.

“We are close on web…,” Mosseri said in a post on Threads on Friday. The launch could happen as early as this week, according to a report in the Wall Street Journal.

Threads, which launched as an Android and iOS app on July 5 and gained 100 million users in just five days, saw its popularity drop as users returned to the more familiar platform X after the initial rush to try Meta’s new offering. 

In just over a month, its daily active users on the Android app dropped to 10.3 million from a peak of 49.3 million, according to a report by analytics platform Similarweb dated Aug. 10.

Meanwhile, management is moving quickly to launch new features. Threads now offers the ability to set post notifications for accounts and to view those posts in a chronological feed.

It will soon roll out an improved search that could allow users to search for specific posts and not just accounts. 

Russia’s Luna-25 Crashes Into Moon 

Russia’s Luna-25 spacecraft has crashed into the moon.

“The apparatus moved into an unpredictable orbit and ceased to exist as a result of a collision with the surface of the moon,” Roscosmos, the Russian space agency, said Sunday.

On Saturday, the agency said it had a problem with the craft and lost contact with it.

The unmanned robot lander was set to land on the moon’s south pole Monday, ahead of an Indian craft scheduled to land on the south pole later this week.

Scientists are eager to explore the south pole because they believe water may be there and that the water could be transformed by future astronauts into air and rocket fuel.

Russia’s last moon launch was in 1976, during the Soviet era.

Some information in this report came from The Associated Press and Reuters.

Russia Fines Google $32,000 for Videos About Ukraine Conflict

A Russian court on Thursday imposed a $32,000 fine on Google for failing to delete allegedly false information about the conflict in Ukraine.

The move by a magistrate’s court follows similar actions in early August against Apple and the Wikimedia Foundation that hosts Wikipedia.

According to Russian news reports, the court found that the YouTube video service, which is owned by Google, was guilty of not deleting videos with incorrect information about the conflict — which Russia characterizes as a “special military operation.”

Google was also found guilty of not removing videos that suggested ways of gaining entry to facilities which are not open to minors, news agencies said, without specifying what kind of facilities were involved.

In Russia, a magistrate court typically handles administrative violations and low-level criminal cases.

Since sending troops into Ukraine in February 2022, Russia has enacted an array of measures to punish any criticism or questioning of the military campaign.

Some critics have received severe punishments. Opposition figure Vladimir Kara-Murza was sentenced this year to 25 years in prison for treason stemming from speeches he made against Russia’s actions in Ukraine.

Read More

Texas OKs Plan to Mandate Tesla Tech for EV Chargers in State

Texas on Wednesday approved its plan to require companies to include Tesla’s technology in electric vehicle charging stations to be eligible for federal funds, despite calls for more time to re-engineer and test the connectors.

The decision by Texas, the biggest recipient of a $5 billion program meant to electrify U.S. highways, is being closely watched by other states and is a step forward for Tesla CEO Elon Musk’s plans to make the company’s technology the U.S. charging standard.

Tesla’s efforts are facing early tests as some states start rolling out the funds. The company won a slew of projects in Pennsylvania’s first round of funding announced on Monday but none in Ohio last month.

Federal rules require companies to offer the rival Combined Charging System, or CCS, a U.S. standard preferred by the Biden administration, as a minimum to be eligible for the funds.

But individual states can add their own requirements on top of CCS before distributing the federal funds at a local level.

Ford Motor and General Motors’ announcement about two months ago that they planned to adopt Tesla’s North American Charging Standard, or NACS, sent shockwaves through the industry and prompted a number of automakers and charging companies to embrace the technology.

In June, Reuters reported that Texas, which will receive and deploy $407.8 million over five years, planned to require companies to include Tesla’s plugs. Washington state has discussed similar plans, and Kentucky has mandated the technology.

Florida, another major recipient of funds, recently revised its plans, saying it would mandate NACS one year after standards body SAE International, which is reviewing the technology, formally recognizes it. 

Some charging companies wrote to the Texas Transportation Commission opposing the requirement in the first round of funds. They said supply chain constraints and the certification of Tesla’s connectors could put the successful deployment of EV chargers at risk.

That forced Texas to defer a vote on the plan twice as it sought to understand NACS and its implications, before the commission voted unanimously to approve the plan on Wednesday.

“The two-connector approach being proposed will help assure coverage of a minimum of 97% of the current, over 168,000 electric vehicles with fast charge ports in the state,” Humberto Gonzalez, a director at Texas’ department of transportation, said while presenting the state’s plan to the commissioners.

Read More

In Seattle, VP Harris Touts Administration Efforts to Boost Clean Energy

Vice President Kamala Harris marked the one-year anniversary of the Inflation Reduction Act by touting the Biden administration’s commitment to mitigating the climate crisis. Natasha Mozgovaya reports from Seattle.

Read More

Musk’s X Delays Access to Content on Reuters, NY Times, Social Media Rivals

Social media company X, formerly known as Twitter, delayed access to links to content on the Reuters and New York Times websites as well as rivals like Bluesky, Facebook and Instagram, according to a Washington Post report on Tuesday.

Clicking a link on X to one of the affected websites resulted in a delay of about five seconds before the webpage loaded, The Washington Post reported, citing tests it conducted on Tuesday. Reuters also saw a similar delay in tests it ran.

By late Tuesday afternoon, X appeared to have eliminated the delay. When contacted for comment, X confirmed the delay was removed but did not elaborate.

Billionaire Elon Musk, who bought Twitter in October, has previously lashed out at news organizations and journalists who have reported critically on his companies, which include Tesla and SpaceX. Twitter has previously prevented users from posting links to competing social media platforms.

Reuters could not establish the precise time when X began delaying links to some websites.

A user on Hacker News, a tech forum, posted about the delay earlier on Tuesday and wrote that X began delaying links to the New York Times on Aug. 4. On that day, Musk criticized the publication’s coverage of South Africa and accused it of supporting calls for genocide. Reuters has no evidence that the two events are related.

A spokesperson for the New York Times said it has not received an explanation from X about the link delay.

“While we don’t know the rationale behind the application of this time delay, we would be concerned by targeted pressure applied to any news organization for unclear reasons,” the spokesperson said on Tuesday.

A Reuters spokesperson said: “We are aware of the report in the Washington Post of a delay in opening links to Reuters stories on X. We are looking into the matter.”

Bluesky, an X rival that has Twitter co-founder Jack Dorsey on its board, did not reply to a request for comment.

Meta, which owns Facebook and Instagram, did not immediately respond to a request for comment.

Read More

Google to Train 20,000 Nigerians in Digital Skills

Google plans to train 20,000 Nigerian women and youth in digital skills and provide a grant of $1.6 million to help the government create 1 million digital jobs in the country, its Africa executives said on Tuesday. 

Nigeria plans to create digital jobs for its teeming youth population, Vice President Kashim Shettima told Google Africa executives during a meeting in Abuja. Shettima did not provide a timeline for creating the jobs. 

Google Africa executives said a grant from its philanthropic arm in partnership with Data Science Nigeria and the Creative Industry Initiative for Africa will facilitate the program. 

Shettima said Google’s initiative aligned with the government’s commitment to increase youth participation in the digital economy. The government is also working with the country’s banks on the project, Shettima added. 

Google director for West Africa Olumide Balogun said the company would commit funds and provide digital skills to women and young people in Nigeria and also enable startups to grow, which will create jobs. 

Google is committed to investing in digital infrastructure across Africa, Charles Murito, Google Africa’s director of government relations and public policy, said during the meeting, adding that digital transformation can be a job enabler. 

Read More

Fiction Writers Fear Rise of AI, Yet See It as a Story

For a vast number of book writers, artificial intelligence is a threat to their livelihood and the very idea of creativity. More than 10,000 of them endorsed an open letter from the Authors Guild this summer, urging AI companies not to use copyrighted work without permission or compensation.

At the same time, AI is a story to tell, and no longer just science fiction.

As present in the imagination as politics, the pandemic, or climate change, AI has become part of the narrative for a growing number of novelists and short story writers who only need to follow the news to imagine a world upended.

“I’m frightened by artificial intelligence, but also fascinated by it. There’s a hope for divine understanding, for the accumulation of all knowledge, but at the same time there’s an inherent terror in being replaced by non-human intelligence,” said Helen Phillips, whose upcoming novel “Hum” tells of a wife and mother who loses her job to AI.

“We’ve been seeing more and more about AI in book proposals,” said Ryan Doherty, vice president and editorial director at Celadon Books, which recently signed Fred Lunzker’s novel “Sike,” featuring an AI psychiatrist.

“It’s the zeitgeist right now. And whatever is in the cultural zeitgeist seeps into fiction,” Doherty said. 

Other AI-themed novels expected in the next two years include Sean Michaels’ “Do You Remember Being Born?” — in which a poet agrees to collaborate with an AI poetry company; Bryan Van Dyke’s “In Our Likeness,” about a bureaucrat and a fact-checking program with the power to change facts; and A.E. Osworth’s “Awakened,” about a gay witch and her titanic clash with AI.

Crime writer Jeffrey Diger, known for his thrillers set in contemporary Greece, is working on a novel touching upon AI and the metaverse, the outgrowth of being “continually on the lookout for what’s percolating on the edge of societal change,” he said.

Authors are invoking AI to address the most human questions.

In Sierra Greer’s “Annie Bot,” the title character is an AI mate designed for a human male. Greer said the novel was a way to explore her character’s “urgent desire to please,” adding that a robot girlfriend enabled her “to explore desire, respect, and longing in ways that felt very new and strange to me.”

Amy Shearn’s “Animal Instinct” has its origins in the pandemic and in her personal life; she was recently divorced and had begun using dating apps.

“It’s so weird how, with apps, you start to feel as if you’re going person-shopping,” she said. “And I thought, wouldn’t it be great if you could really pick and choose the best parts of all these people you encounter and sort of cobble them together to make your ideal person?”

“Of course,” she added, “I don’t think anyone actually knows what their ideal person is, because so much of what draws us to mates is the unexpected, the ways in which people surprise us. That said, it seemed like an interesting premise for a novel.”

Some authors aren’t just writing about AI, but openly working with it.

Earlier this year, journalist Stephen Marche used AI to write the novella “Death of An Author,” for which he drew upon everyone from Raymond Chandler to Haruki Murakami. Screenwriter and humorist Simon Rich collaborated with Brent Katz and Josh Morgenthau for “I Am Code,” a thriller in verse that came out this month and was generated by the AI program “code-davinci-002.” (Filmmaker Werner Herzog reads the audiobook edition). 

Osworth, who is trans, wanted to address comments by “Harry Potter” author J.K. Rowling that have offended many in the trans community, and to wrest from her the power of magic. At the same time, they worried the fictional AI in their book sounded too human, and decided AI should speak for AI.

Osworth devised a crude program, based on the writings of Machiavelli among others, that would turn out a more mechanical kind of voice.

“I like to say that ChatGPT is a Ferrari, while what I came up with is a skateboard with one square wheel. But I was much more interested in the skateboard with one square wheel,” they said.

Michaels centers his new novel on a poet named Marian, in homage to poet Marianne Moore, and an AI program called Charlotte. He said the novel is about parenthood, labor, community, and “this technology’s implications for art, language and our sense of identity.”

Believing the spirit of “Do You Remember Being Born?” called for the presence of actual AI text, he devised a program that would generate prose and poetry, and uses an alternate format in the novel so readers know when he’s using AI.

In one passage, Marian is reviewing some of her collaboration with Charlotte.

“The preceding day’s work was a collection of glass cathedrals. I reread it with alarm. Turns of phrase I had mistaken for beautiful, which I now found unintelligible,” Michaels writes. “Charlotte had simply surprised me: I would propose a line, a portion of a line, and what the system spat back upended my expectations. I had been seduced by this surprise.”

And now AI speaks: “I had mistaken a fit of algorithmic exuberance for the truth.”

Read More

Chinese Surveillance Firm Selling Cameras With ‘Skin Color Analytics’

IPVM, a U.S.-based security and surveillance industry research group, says the Chinese surveillance equipment maker Dahua is selling cameras with what it calls a “skin color analytics” feature in Europe, raising human rights concerns. 

In a report released on July 31, IPVM said “the company defended the analytics as being a ‘basic feature of a smart security solution.'” The report is behind a paywall, but IPVM provided a copy to VOA Mandarin. 

Dahua’s ICC Open Platform guide for “human body characteristics” includes “skin color/complexion,” according to the report. In what Dahua calls a “data dictionary,” the company says that the “skin color types” its analytic tools would target are “yellow,” “black,” and “white.” VOA Mandarin verified this on Dahua’s Chinese website.

The IPVM report also says that skin color detection is mentioned in the “Personnel Control” category, a feature Dahua touts as part of its Smart Office Park solution intended to provide security for large corporate campuses in China.  

Charles Rollet, co-author of the IPVM report, told VOA Mandarin by phone on August 1, “Basically what these video analytics do is that, if you turn them on, then the camera will automatically try and determine the skin color of whoever passes, whoever it captures in the video footage. 

“So that means the camera is going to be guessing or attempting to determine whether the person in front of it … has black, white or yellow — in their words — skin color,” he added.  

VOA Mandarin contacted Dahua for comment but did not receive a response. 

The IPVM report said that Dahua is selling cameras with the skin color analytics feature in three European nations, each with a recent history of racial tension: Germany, France and the Netherlands.

‘Skin color is a basic feature’

Dahua said its skin tone analysis capability was an essential function in surveillance technology.  

 In a statement to IPVM, Dahua said, “The platform in question is entirely consistent with our commitments to not build solutions that target any single racial, ethnic, or national group. The ability to generally identify observable characteristics such as height, weight, hair and eye color, and general categories of skin color is a basic feature of a smart security solution.”  

IPVM said the company had previously denied offering the feature, and that skin color detection is uncommon in mainstream surveillance products.

In many Western nations, facial recognition surveillance technologies have long been controversial because their error rates vary with skin color. Identifying skin color in surveillance applications raises human rights and civil rights concerns.

“So it’s unusual to see it for skin color because it’s such a controversial and ethically fraught field,” Rollet said.  

Anna Bacciarelli, technology manager at Human Rights Watch (HRW), told VOA Mandarin that Dahua technology should not contain skin tone analytics.   

“All companies have a responsibility to respect human rights, and take steps to prevent or mitigate any human rights risks that may arise as a result of their actions,” she said in an email.

“Surveillance software with skin tone analytics poses a significant risk to the right to equality and non-discrimination, by allowing camera owners and operators to racially profile people at scale — likely without their knowledge, infringing privacy rights — and should simply not be created or sold in the first place.”  

Dahua denied that its surveillance products are designed to enable racial identification. On the website of its U.S. company, Dahua says, “contrary to allegations that have been made by certain media outlets, Dahua Technology has not and never will develop solutions targeting any specific ethnic group.” 

However, in February 2021, IPVM and the Los Angeles Times reported that Dahua provided a video surveillance system with “real-time Uyghur warnings” to the Chinese police that included eyebrow size, skin color and ethnicity.  

IPVM’s 2018 statistical report shows that since 2016, Dahua and another Chinese video surveillance company, Hikvision, have won contracts worth $1 billion from the government of China’s Xinjiang region, a center of Uyghur life.

The U.S. Federal Communications Commission determined in 2022 that the products of Chinese technology companies such as Dahua and Hikvision, which have close ties to Beijing, posed a threat to U.S. national security.

The FCC banned sales of these companies’ products in the U.S. “for the purpose of public safety, security of government facilities, physical security surveillance of critical infrastructure, and other national security purposes,” but not for other purposes.  

Before the U.S. sales bans, Hikvision and Dahua ranked first and second among global surveillance and access control firms, according to The China Project.  

‘No place in a liberal democracy’

On June 14, the European Union passed a revision proposal to its draft Artificial Intelligence Law, a precursor to completely banning the use of facial recognition systems in public places.  

“We know facial recognition for mass surveillance from China; this technology has no place in a liberal democracy,” Svenja Hahn, a German member of the European Parliament and Renew Europe Group, told Politico.  

Bacciarelli of HRW said in an email she “would seriously doubt such racial profiling technology is legal under EU data protection and other laws. The General Data Protection Regulation, a European Union regulation on Information privacy, limits the collection and processing of sensitive personal data, including personal data revealing racial or ethnic origin and biometric data, under Article 9. Companies need to make a valid, lawful case to process sensitive personal data before deployment.” 

“The current text of the draft EU AI Act bans intrusive and discriminatory biometric surveillance tech, including real-time biometric surveillance systems; biometric systems that use sensitive characteristics, including race and ethnicity data; and indiscriminate scraping of CCTV data to create facial recognition databases,” she said.  

In Western countries, companies are developing AI software for identifying race primarily as a marketing tool for selling to diverse consumer populations. 

The Wall Street Journal reported in 2020 that American cosmetics company Revlon had used recognition software from AI start-up Kairos to analyze how consumers of different ethnic groups use cosmetics, raising concerns among researchers that racial recognition could lead to discrimination.  

The U.S. government has long prohibited sectors such as healthcare and banking from discriminating against customers based on race. IBM, Google and Microsoft have restricted the provision of facial recognition services to law enforcement.  

Twenty-four states, counties and municipal governments in the U.S. have prohibited government agencies from using facial recognition surveillance technology. New York City, Baltimore, and Portland, Oregon, have even restricted the use of facial recognition in the private sector.  

Some civil rights activists have argued that racial identification technology is error-prone and could have adverse consequences for those being monitored. 

Rollet said, “If the camera is filming at night or if there are shadows, it can misclassify people.”  

Caitlin Chin is a fellow at the Center for Strategic and International Studies, a Washington think tank where she researches technology regulation in the United States and abroad. She emphasized that while Western technology companies mainly use facial recognition for business, Chinese technology companies are often happy to assist government agencies in monitoring the public.  

She told VOA Mandarin in an August 1 video call, “So this is something that’s both very dehumanizing but also very concerning from a human rights perspective, in part because if there are any errors in this technology that could lead to false arrests, it could lead to discrimination, but also because the ability to sort people by skin color on its own almost inevitably leads to people being discriminated against.”  

She also said that in general, especially when it comes to law enforcement and surveillance, people with darker skin have been disproportionately tracked and disproportionately surveilled, “so these Dahua cameras make it easier for people to do that by sorting people by skin color.”  

Read More

Elon Musk Names NBCUniversal’s Yaccarino as New Twitter CEO

Billionaire tech entrepreneur Elon Musk on Friday named NBCUniversal executive Linda Yaccarino as the chief executive officer of social media giant Twitter.

From his own Twitter account Friday, Musk wrote, “I am excited to welcome Linda Yaccarino as the new CEO of Twitter! (She) will focus primarily on business operations, while I focus on product design and new technology.” 

He said Yaccarino would transform Twitter, which is now called X Corp., into “an everything app” called X. 

On Thursday, Musk teased Yaccarino’s hiring, saying only “she” will start in six to eight weeks.  

Yaccarino had worked in advertising and media sales at NBCUniversal since 2011, serving as chairperson of global advertising since October 2020. The company announced her departure earlier Friday.

Analysts say Yaccarino’s background could be key to Twitter’s future. Since Musk acquired Twitter last October, he has taken some controversial steps, such as loosening controls on the spread of false information and laying off nearly 80% of its staff, which prompted advertisers to flee.

No comment from Yaccarino on her hiring was immediately available.

Some information for this report was provided by The Associated Press and Reuters. 

Read More

Apple to Launch First Online Store in Vietnam

Apple will launch its first online store in Vietnam next week, the company said Friday, hoping to cash in on the country’s young and tech-savvy population.

The iPhone maker is among a host of global tech giants including Intel, Samsung and LG, that have chosen Vietnam for assembly of their products.

But up to now, the Silicon Valley giant has sold its products in Vietnam’s market of 100 million people via authorized resellers.

“We’re honored to be expanding in Vietnam,” said Deirdre O’Brien, Apple’s senior vice president of retail, in an online statement in Vietnamese.

The country’s communist government says it wants 85 percent of its adult population to have access to a smartphone by 2025, up from the current 73 percent.

Less than a third of the country’s mobile users have an iPhone, according to market research platform Statista.

Through online stores, “clients in Vietnam can discover products and connect with our experienced experts,” O’Brien said in the statement.

The production of accessories and assembly of mobile phones account for up to 70 percent of electronics manufacturing in Vietnam. Products are mainly for export.

Official figures show Vietnam’s mobile phone production industry reported an import-export turnover of $114 billion last year, a third of the country’s total import-export revenue.

Read More

Stunning Mosaic of Baby Star Clusters Created From 1 Million Telescope Shots

Astronomers have created a stunning mosaic of baby star clusters hiding in our galactic backyard.

The montage, published Thursday, reveals five vast stellar nurseries less than 1,500 light-years away. A light-year is about 9.5 trillion kilometers.
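That distance figure follows from multiplying the speed of light by the number of seconds in a year, as this quick back-of-the-envelope check (using the standard defined value of the speed of light and a Julian year) shows:

```python
# Back-of-the-envelope check of the light-year figure cited above.
c_km_per_s = 299_792.458               # speed of light in km/s (defined value)
seconds_per_year = 365.25 * 24 * 3600  # Julian year in seconds
light_year_km = c_km_per_s * seconds_per_year
print(f"One light-year is about {light_year_km:.2e} km")  # → about 9.46e+12 km
```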

To come up with their atlas, scientists pieced together more than 1 million images taken over five years by the European Southern Observatory in Chile. The observatory’s infrared survey telescope was able to peer through clouds of dust and discern infant stars.

“We can detect even the faintest sources of light, like stars far less massive than the sun, revealing objects that no one has ever seen before,” University of Vienna’s Stefan Meingast, the lead author, said in a statement.

The observations, conducted from 2017 to 2022, will help researchers better understand how stars evolve from dust, Meingast said.

The findings, appearing in the journal Astronomy and Astrophysics, complement observations by the European Space Agency’s star-mapping Gaia spacecraft, orbiting nearly 1.5 million kilometers away.

Gaia focuses on optical light, missing most of the objects obscured by cosmic dust, the researchers said.

Read More

Will Artificial Intelligence Take Away Jobs? Not Many for Now, Says Expert

The growing abilities of artificial intelligence have left many observers wondering how AI will impact people’s jobs and livelihoods. One expert in the field predicts it won’t have much effect, at least in the short term.  

The topic was a point of discussion at the annual TED conference held recently in Vancouver.   

In a world where students’ term papers can now be written by artificial intelligence, paintings can be drawn by merely uttering words and an AI-generated version of your favorite celebrity can appear on screen, the impact of this new technology is starting to be felt in societies and sparking both wonderment and concern.  

While artificial intelligence has yet to become pervasive in everyday life, the rumblings of what could be a looming economic earthquake are growing stronger.  

Gary Marcus is a professor emeritus of psychology and neural science at New York University who helped ride-sharing company Uber adopt the rapidly developing technology.

An author and host of the podcast “Humans versus Machines,” Marcus says AI’s economic impact is limited for now, although some jobs have already been threatened by the technology, such as commercial animators for electronic gaming. 

Speaking with VOA after a recent conference for TED, the nonprofit devoted to spreading ideas, Marcus said jobs that require manual labor will be safe, for now.

“We’re not going to see blue-collar jobs replaced, I think, as quickly as some people had talked about,” Marcus predicted. “So we still don’t have driverless cars, even though people have talked about that for years. Anybody that does something with their hands is probably safe right now. Because we don’t really know how to make robots that sophisticated when it comes to dealing with the real world.”

Another TED speaker, Sal Khan, is the founder of Khan Academy, which developed Khanmigo, artificial intelligence-powered software designed to help educate children. He is optimistic about AI’s potential economic impact as a driver of wealth creation.

“Will it cause mass dislocations in the job market? I actually don’t know the answer to that,” Khan said, adding that “It will create more wealth, more productivity.” 

The legal profession could be boosted by AI if the technology prompts litigation. Copyright attorneys could especially benefit. 

Tom Graham and his company, Metaphysic.ai, artificially recreate famous actors and athletes so they do not need to physically be in front of a camera or microphone in order to appear in films, TV shows or commercials.    

His company is behind the popular fake videos of actor Tom Cruise that have gone viral on social media. 

He says the legal system will play a role in protecting people from being recreated without their permission.  

Graham, who has a law degree from Harvard University, has applied to the U.S. Copyright Office to register the real-life version of himself.            

“We did that because you’re looking for legal institutions that exist today, that could give you some kind of protection or remedy,” Graham explained. “It’s just, if there’s no way to enforce it, then it’s not really a thing.”

Gary Marcus is urging the formation of an international organization to oversee and monitor artificial intelligence.   

He emphasized the need to “get a lot of smart people together, from the companies, from the government, but also scientists, philosophers, ethicists…” 

“I think it’s really important that we as a globe think all these things through,” Marcus concluded. “And don’t just leave it to like 190 governments doing whatever random thing they do without really understanding the science.”

The popular AI chatbot ChatGPT has gained widespread attention in recent months but is not yet a moneymaker. Its parent company, OpenAI, lost more than $540 million in 2022.

Read More

Elon Musk and Tesla Break Ground on Massive Texas Lithium Refinery

Tesla Inc on Monday broke ground on a Texas lithium refinery that CEO Elon Musk said should produce enough of the battery metal to build about 1 million electric vehicles (EVs) by 2025, making it the largest North American processor of the material. 

The facility will push Tesla outside its core focus of building automobiles and into the complex area of lithium refining and processing, a step Musk said was necessary if the auto giant was to meet its ambitious EV sales targets. 

“As we look ahead a few years, a fundamental choke point in the advancement of electric vehicles is the availability of battery grade lithium,” Musk said at the ground-breaking ceremony on Monday, with dozers and other earth-moving equipment operating in the background. 

Musk said Tesla aimed to finish construction of the factory next year and then reach full production about a year later. 

The move will make Tesla the only major automaker in North America that will refine its own lithium. Currently, China dominates the processing of many critical minerals, including lithium. 

“Texas wants to be able to be self-reliant, not dependent upon any foreign hostile nation for what we need. We need lithium,” Texas Governor Greg Abbott said at the ceremony. 

Musk did not specify the volume of lithium the facility would process each year, although he said the automaker would continue to buy the metal from its vendors, which include Albemarle Corp and Livent Corp. 

“We intend to continue to use suppliers of lithium, so it’s not that Tesla will do all of it,” Musk said. 

Albemarle plans to build a lithium processing facility in South Carolina that will refine 100,000 tons of the metal each year, with construction slated to begin next year and the facility coming online sometime later this decade. 

Musk did not say where Tesla will source the rough form of lithium known as spodumene concentrate that will be processed at the facility, although Tesla has supply deals with Piedmont Lithium Inc and others. 

‘Clean operations’

Tesla said it would eschew the lithium industry’s conventional refining process, which relies on sulfuric acid and other strong chemicals, in favor of materials that were less harsh on the environment, such as soda ash. 

“You could live right in the middle of the refinery and not suffer any ill effect. So they’re very clean operations,” Musk said, although local media reports said some environmental advocates had raised concerns over the facility. 

Monday’s announcement was not the first time that Tesla has attempted to venture into lithium production. Musk in 2020 told shareholders that Tesla had secured rights to 10,000 acres in Nevada where it aimed to produce lithium from clay deposits, which had never been done before on a commercial scale. 

While Musk boasted that the company had developed a proprietary process to sustainably produce lithium from those Nevada clay deposits, Tesla has not yet deployed the process. 

Musk has urged entrepreneurs to enter the lithium refining business, saying it is like “minting money.” 

“We’re begging you. We don’t want to do it. Can someone please?” he said during a conference call last month. 

Tesla said last month that a recent plunge in prices of lithium and other commodities would aid its bruised margins in the second half of the year.

The refinery is the latest expansion by Tesla into Texas after the company moved its headquarters there from California in 2021. Musk’s other companies, including SpaceX and The Boring Company, also have operations in Texas. 

“We are proud that he calls Texas home,” Abbott said, calling Tesla and Musk “Texas’s economic juggernauts.”

Read More

Congress Eyes New Rules for Tech

Most Democrats and Republicans agree that the federal government should better regulate the biggest technology companies, particularly social media platforms. But there is little consensus on how it should be done. 

Concerns have skyrocketed about China’s ownership of TikTok, and parents have grown increasingly worried about what their children are seeing online. Lawmakers have introduced a slew of bipartisan bills, boosting hopes of compromise. But any effort to regulate the mammoth industry would face major obstacles, as technology companies have long fought government interference. 

Noting that many young people are struggling, President Joe Biden said in his February State of the Union address that “it’s time” to pass bipartisan legislation to impose stricter limits on the collection of personal data and ban targeted advertising to children. 

“We must finally hold social media companies accountable for the experiment they are running on our children for profit,” Biden said.

A look at some of the areas of potential regulation: 

Children’s safety

Several House and Senate bills would try to make social media, and the internet in general, safer for children who will inevitably be online. Lawmakers cite numerous examples of teenagers who have taken their own lives after cyberbullying or have died engaging in dangerous behavior encouraged on social media. 

In the Senate, at least two bills are focused on children’s online safety. Legislation by Senators Richard Blumenthal, a Connecticut Democrat, and Marsha Blackburn, a Tennessee Republican, approved by the chamber’s Commerce Committee last year would require social media companies to be more transparent about their operations and enable child safety settings by default. Minors would have the option to disable addictive product features and algorithms that push certain content. 

The idea, the senators say, is that platforms should be “safe by design.” The legislation, which Blumenthal and Blackburn reintroduced last week, would also obligate social media companies to prevent certain dangers to minors — including promotion of suicide, disordered eating, substance abuse, sexual exploitation and other illegal behaviors. 

A second bill introduced last month by four senators — Democratic Senators Brian Schatz of Hawaii and Chris Murphy of Connecticut and Republican Senators Tom Cotton of Arkansas and Katie Britt of Alabama — would take a more aggressive approach, prohibiting children under 13 from using social media platforms and requiring parental consent for teenagers. It would also prohibit companies from recommending content through algorithms for users under 18.

Critics of the bills, including some civil rights groups and advocacy groups aligned with tech companies, say the proposals could threaten teens’ online privacy and prevent them from accessing content that could help them, such as resources for those considering suicide or grappling with their sexual and gender identity. 

“Lawmakers should focus on educating and empowering families to control their online experience,” said Carl Szabo of NetChoice, a group aligned with Meta, TikTok, Google and Amazon, among other companies. 

Data privacy 

Biden’s State of the Union remarks appeared to be a nod toward legislation by Senators Ed Markey, a Massachusetts Democrat, and Bill Cassidy, a Louisiana Republican, that would expand child privacy protections online, prohibiting companies from collecting personal data from younger teenagers and banning targeted advertising to children and teens. The bill, also reintroduced last week, would create an “eraser button” allowing parents and kids to eliminate personal data, when possible. 

A broader House effort would attempt to give adults as well as children more control over their data with what lawmakers call a “national privacy standard.” Legislation that passed the House Energy and Commerce Committee last year would try to minimize data collected and make it illegal to target ads to children, preempting state laws that have tried to put privacy restrictions in place. But the bill, which would have also given consumers more rights to sue over privacy violations, never reached the House floor. 

Prospects for the House legislation are unclear now that Republicans have the majority.

TikTok, China 

Lawmakers introduced a raft of bills to either ban TikTok or make it easier to ban it after a combative March House hearing in which lawmakers from both parties grilled TikTok CEO Shou Zi Chew over his company’s ties to China’s communist government, data security and harmful content on the app. 

Chew attempted to assure lawmakers that the hugely popular video-sharing app prioritizes user safety and should not be banned because of its Chinese connections. But the testimony gave new momentum to the efforts. 

Soon after the hearing, Missouri Senator Josh Hawley, a Republican, tried to force a Senate vote on legislation that would ban TikTok from operating in the United States. But he was blocked by a fellow Republican, Kentucky Senator Rand Paul, who said that a ban would violate the Constitution and anger the millions of voters who use the app. 

Another bill sponsored by Republican Senator Marco Rubio of Florida would, like Hawley’s bill, ban U.S. economic transactions with TikTok, but it would also create a new framework for the executive branch to block any foreign apps deemed hostile. His bill is co-sponsored by Representatives Raja Krishnamoorthi, an Illinois Democrat, and Mike Gallagher, a Wisconsin Republican. 

There is broad Senate support for bipartisan legislation sponsored by Senate Intelligence Committee Chairman Mark Warner, a Virginia Democrat, and South Dakota Senator John Thune, the No. 2 Senate Republican, that does not specifically call out TikTok but would give the Commerce Department power to review and potentially restrict foreign threats to technology platforms. 

The White House has signaled it would back that bill, but its prospects are uncertain. 

Artificial intelligence 

A newer question for Congress is whether lawmakers should move to regulate artificial intelligence, as rapidly developing and potentially revolutionary products such as the AI chatbot ChatGPT enter the marketplace and mimic human behavior in many ways. 

Senate Democratic leader Chuck Schumer of New York has made the emerging technology a priority, arguing that the United States needs to stay ahead of China and other countries that are eyeing regulations on AI products. He has been working with AI experts and has released a general framework of what regulation could look like, including increased disclosure of the people and data involved in developing the technology, and more transparency and explanation for how the bots arrive at their responses.

The White House has been focused on the issue as well, with a recent announcement of a $140 million investment to establish seven new AI research institutes. Vice President Kamala Harris met Thursday with the heads of Google, Microsoft and other companies developing AI products.

New Twitter Rules Expose Election Offices to Spoof Accounts

Tracking down accurate information about Philadelphia’s elections on Twitter used to be easy. The account for the city commissioners who run elections, @phillyvotes, was the only one carrying a blue check mark, a sign of authenticity.

But ever since the social media platform overhauled its verification service last month, the check mark has disappeared. That’s made it harder to distinguish @phillyvotes from a list of random accounts not run by the elections office but with very similar names.

The election commission applied weeks ago for a gray check mark — Twitter’s new symbol to help users identify official government accounts — but has yet to hear back from Twitter, commission spokesman Nick Custodio said. It’s unclear whether @phillyvotes is an eligible government account under Twitter’s new rules.

That’s troubling, Custodio said, because Pennsylvania has a primary election May 16 and the commission uses its account to share important information with voters in real time. If the account remains unverified, it will be easier to impersonate – and harder for voters to trust – heading into Election Day.

Impostor accounts on social media are among many concerns election security experts have heading into next year’s presidential election. Experts have warned that foreign adversaries or others may try to influence the election, either through online disinformation campaigns or by hacking into election infrastructure.

Election administrators across the country have struggled to figure out the best way to respond after Twitter owner Elon Musk threw the platform’s verification service into disarray, given that Twitter has been among their most effective tools for communicating with the public.

Some are taking other steps allowed by Twitter, such as buying check marks for their profiles or applying for a special label reserved for government entities, but success has been mixed. Election and security experts say the inconsistency of Twitter’s new verification system is a misinformation disaster waiting to happen.

“The lack of clear, at-a-glance verification on Twitter is a ticking time bomb for disinformation,” said Rachel Tobac, CEO of the cybersecurity company SocialProof Security. “That will confuse users – especially on important days like election days.”

The blue check marks that Twitter once doled out to notable celebrities, public figures, government entities and journalists began disappearing from the platform in April. To replace them, Musk told users that anyone could pay $8 a month for an individual blue check mark or $1,000 a month for a gold check mark as a “verified organization.”

The policy change quickly opened the door for pranksters to pose convincingly as celebrities, politicians and government entities, which could no longer be identified as authentic. While some impostor accounts were clear jokes, others created confusion.

Fake accounts posing as Chicago Mayor Lori Lightfoot, the city’s Department of Transportation and the Illinois Department of Transportation falsely claimed the city was closing one of its main thoroughfares to private traffic. The fake accounts used the same photos, biographical text and home page links as the real ones. Their posts amassed hundreds of thousands of views before being taken down.

Twitter’s new policy invites government agencies and certain affiliated organizations to apply to be labeled as official with a gray check. But at the state and local level, qualifying agencies are limited to “main executive office accounts and main agency accounts overseeing crisis response, public safety, law enforcement, and regulatory issues,” the policy says.

The rules do not mention agencies that run elections. So while the main Philadelphia city government account quickly received its gray check mark last month, the local election commission has not heard back.

Election offices in four of the country’s five most populous counties — Cook County in Illinois, Harris County in Texas, Maricopa County in Arizona and San Diego County — remain unverified, a Twitter search shows. Maricopa, which includes Phoenix, has been targeted repeatedly by election conspiracy theorists as the most populous and consequential county in one of the most closely divided political battleground states.

Some counties contacted by The Associated Press said they have minimal concerns about impersonation or plan to apply for a gray check later, but others said they already have applied and have not heard back from Twitter.

Even some state election offices are waiting for government labels. Among them is the office of Maine Secretary of State Shenna Bellows.

In an April 24 email to Bellows’ communications director reviewed by The Associated Press, a Twitter representative wrote that there was “nothing to do as we continue to manually process applications from around the world.” The representative added in a later email that Twitter stands “ready to swiftly enforce any impersonation, so please don’t hesitate to flag any problematic accounts.”

An email sent to Twitter’s press office and a company safety officer requesting comment was answered only with an autoreply of a poop emoji.

“Our job is to reinforce public confidence,” Bellows told the AP. “Even a minor setback, like no longer being able to ensure that our information on Twitter is verified, contributes to an environment that is less predictable and less safe.”

Some government accounts, including the one representing Pennsylvania’s second-largest county, have purchased blue checks because they were told it was required to continue advertising on the platform.

Allegheny County posts ads for elections and jobs on Twitter, so the blue check mark “was necessary,” said Amie Downs, the county’s communications director.

When anyone can buy verification and when government accounts are not consistently labeled, the check mark loses its meaning, Colorado Secretary of State Jena Griswold said.

Griswold’s office received a gray check mark to maintain trust with voters, but she told the AP she would not buy verification for her personal Twitter account because “it doesn’t carry the same weight” it once did.

Custodio, at the Philadelphia elections commission, said his office would not buy verification either, even if it gets denied a gray check.

“The blue or gold check mark just verifies you as a paid subscriber and does not verify identity,” he said.

Experts and advocates tracking election discourse on social media say Twitter’s changes do not just incentivize bad actors to run disinformation campaigns — they also make it harder for well-meaning users to know what’s safe to share.

“Because Twitter is dropping the ball on verification, the burden will fall on voters to double check that the information they are consuming and sharing is legitimate,” said Jill Greene, voting and elections manager for Common Cause Pennsylvania.

That dampens an aspect of Twitter that until now had been seen as one of its strengths – allowing community members to rally together to elevate authoritative information, said Mike Caulfield, a research scientist at the University of Washington’s Center for an Informed Public.

“The first rule of a good online community user interface is to ‘help the helpers.’ This is the opposite of that,” Caulfield said. “It takes a community of people who want to help boost good information, and robs them of the tools to make fast, accurate decisions.”

Buffett Shares Good News on Profits, AI Thoughts at Meeting

Billionaire Warren Buffett said artificial intelligence may change the world in all sorts of ways, but new technology won’t take away opportunities for investors, and he’s confident America will continue to prosper over time.

Buffett and his partner Charlie Munger are spending all day Saturday answering questions at Berkshire Hathaway’s annual meeting inside a packed Omaha arena.

“New things coming along doesn’t take away the opportunities. What gives you the opportunities is other people doing dumb things,” said Buffett, who had a chance to try out ChatGPT when his friend Bill Gates showed it to him a few months back.

Buffett reiterated his long-term optimism about the prospects for America even with the bitter political divisions today.

“The problem now is that partisanship has moved more towards tribalism, and in tribalism you don’t even hear the other side,” he said.

Both Buffett and Munger said the United States will benefit from having an open trading relationship with China, so both countries should be careful not to exacerbate the tensions between them because the stakes are too high for the world.

“Everything that increases the tension between these two countries is stupid, stupid, stupid,” Munger said. And whenever either country does something stupid, he said the other country should respond with incredible kindness.

The chance to listen to the two men answer all sorts of questions about business and life attracts people from all over the world to Omaha, Nebraska. Some of the shareholders feel a particular urgency to attend now because Buffett and Munger are both in their 90s.

“Charlie Munger is 99. I just wanted to see him in person. It’s on my bucket list,” said 40-year-old Sheraton Wu from Vancouver. “I have to attend while I can.”

“It’s a once in a lifetime opportunity,” said Chloe Lin, who traveled from Singapore to attend the meeting for the first time and learn from the two legendary investors.

One of the few concessions Buffett makes to his age is that he no longer tours the exhibit hall before the meeting. In years past, he would be mobbed by shareholders trying to snap a picture with him while a team of security officers worked to manage the crowd. Munger has used a wheelchair for several years, but both men are still sharp mentally.

But in a nod to the concerns about their age, Berkshire showed a series of clips of questions about succession from past meetings dating back to the first one they filmed in 1994. Two years ago, Buffett finally said that Greg Abel will eventually replace him as CEO although he has no plans to retire. Abel already oversees all of Berkshire’s noninsurance businesses.

Buffett assured shareholders that he has total confidence in Abel to lead Berkshire in the future, and he doesn’t have a second choice for the job because Abel is remarkable in his own right. But he said much of what Abel will have to do is just maintain Berkshire’s culture and keep making similar decisions.

“Greg understands capital allocation as well as I do. He will make these decisions on the same framework that I use,” Buffett said.

Abel followed that up by assuring the crowd that he knows how Buffett and Munger have handled things for nearly six decades and “I don’t really see that framework changing.”

Not everyone at the meeting is a fan, though. Outside the arena, pilots from Berkshire’s NetJets protested over the lack of a new contract, and pro-life groups carried signs declaring “Buffett’s billions kill millions” to object to his many charitable donations to abortion rights groups.

Berkshire Hathaway said Saturday morning that it made $35.5 billion, or $24,377 per Class A share, in the first quarter. That’s more than 6 times last year’s $5.58 billion, or $3,784 per share.

But Buffett has long cautioned that those bottom line figures can be misleading for Berkshire because the wide swings in the value of its investments — most of which it rarely sells — distort the profits. In this quarter, Berkshire sold only $1.7 billion of stocks while recording a $27.4 billion paper investment gain. Part of this year’s investment gains included a $2.4 billion boost related to Berkshire’s planned acquisition of the majority of the Pilot Travel Centers truck stop company’s shares in January.

Buffett says Berkshire’s operating earnings that exclude investments are a better measure of the company’s performance. By that measure, Berkshire’s operating earnings grew nearly 13% to $8.065 billion, up from $7.16 billion a year ago.

The three analysts surveyed by FactSet expected Berkshire to report operating earnings of $5,370.91 per Class A share.

Buffett came close to giving a formal outlook Saturday when he told shareholders that he expects Berkshire’s operating profits to grow this year even though the economy is slowing down and many of its businesses will sell less in 2023. He said Berkshire will profit from rising interest rates on its holdings, and the insurance market looks good this year.

This year’s first quarter was relatively quiet compared to a year ago when Buffett revealed that he had gone on a $51 billion spending spree at the start of last year, snapping up stocks like Occidental Petroleum, Chevron and HP. Buffett’s buying slowed through the rest of last year with the exception of a number of additional Occidental purchases.

At the end of this year’s first quarter, Berkshire held $130.6 billion cash, up from about $128.59 billion at the end of last year. But Berkshire did spend $4.4 billion during the quarter to repurchase its own shares.

Berkshire’s insurance unit, which includes Geico and a number of large reinsurers, recorded a $911 million operating profit, up from $167 million last year, driven by a rebound in Geico’s results. Geico benefitted from charging higher premiums and a reduction in advertising spending and claims.

But Berkshire’s BNSF railroad and its large utility unit did report lower profits. BNSF earned $1.25 billion, down from $1.37 billion, as the number of shipments it handled dropped 10% after it lost a big customer and imports slowed at the West Coast ports. The utility division added $416 million, down from last year’s $775 million.

Besides those major businesses, Berkshire owns an eclectic assortment of dozens of other businesses, including a number of retail and manufacturing firms such as See’s Candy and Precision Castparts.

Berkshire again faces pressure from activist investors urging the company to do more to catalog its climate change risks in a companywide report. Shareholders were expected to brush that measure and all the other shareholder proposals aside Saturday afternoon because Buffett and the board oppose them, and Buffett controls more than 30% of the vote.

But even as they resist detailing climate risks, a number of Berkshire’s subsidiaries are working to reduce their carbon emissions, including its railroad and utilities. The company’s Clayton Homes unit is showing off a new home design this year that will meet strict energy efficiency standards from the Department of Energy and come pre-equipped for solar power to be added later.

Google Plans to Make Search More ‘Human,’ Says Wall Street Journal

Google is planning to make its search engine more “visual, snackable, personal and human,” with a focus on serving young people globally, The Wall Street Journal reported on Saturday, citing documents.

The move comes as artificial intelligence (AI) applications such as ChatGPT are rapidly gaining in popularity, highlighting a technology that could upend the way businesses and society operate.

The tech giant will nudge its service further away from “10 blue links,” the traditional format for presenting search results, and plans to incorporate more human voices as part of the shift, the report said. 

At its annual I/O developer conference in the coming week, Google is expected to debut new features that allow users to carry out conversations with an AI program, a project code-named “Magi,” The Wall Street Journal added, citing people familiar with the matter.

Generative AI has become a buzzword this year, with applications capturing the public’s fancy and sparking a rush among companies to launch similar products they believe will change the nature of work.

Google, part of Alphabet Inc., did not immediately respond to Reuters’ request for comment.
