…
For a vast number of book writers, artificial intelligence is a threat to their livelihood and the very idea of creativity. More than 10,000 of them endorsed an open letter from the Authors Guild this summer, urging AI companies not to use copyrighted work without permission or compensation.
At the same time, AI is a story to tell, and no longer just science fiction.
As present in the imagination as politics, the pandemic, or climate change, AI has become part of the narrative for a growing number of novelists and short story writers who only need to follow the news to imagine a world upended.
“I’m frightened by artificial intelligence, but also fascinated by it. There’s a hope for divine understanding, for the accumulation of all knowledge, but at the same time there’s an inherent terror in being replaced by non-human intelligence,” said Helen Phillips, whose upcoming novel “Hum” tells of a wife and mother who loses her job to AI.
“We’ve been seeing more and more about AI in book proposals,” said Ryan Doherty, vice president and editorial director at Celadon Books, which recently signed Fred Lunzker’s novel “Sike,” featuring an AI psychiatrist.
“It’s the zeitgeist right now. And whatever is in the cultural zeitgeist seeps into fiction,” Doherty said.
Other AI-themed novels expected in the next two years include Sean Michaels’ “Do You Remember Being Born?” — in which a poet agrees to collaborate with an AI poetry company; Bryan Van Dyke’s “In Our Likeness,” about a bureaucrat and a fact-checking program with the power to change facts; and A.E. Osworth’s “Awakened,” about a gay witch and her titanic clash with AI.
Crime writer Jeffrey Siger, known for his thrillers set in contemporary Greece, is working on a novel touching upon AI and the metaverse, the outgrowth of being “continually on the lookout for what’s percolating on the edge of societal change,” he said.
Authors are invoking AI to address the most human questions.
In Sierra Greer’s “Annie Bot,” the title character is an AI mate designed for a human male. For Greer, the novel was a way to explore her character’s “urgent desire to please”; a robot girlfriend, she said, enabled her “to explore desire, respect, and longing in ways that felt very new and strange to me.”
Amy Shearn’s “Animal Instinct” has its origins in the pandemic and in her personal life; she was recently divorced and had begun using dating apps.
“It’s so weird how, with apps, you start to feel as if you’re going person-shopping,” she said. “And I thought, wouldn’t it be great if you could really pick and choose the best parts of all these people you encounter and sort of cobble them together to make your ideal person?”
“Of course,” she added, “I don’t think anyone actually knows what their ideal person is, because so much of what draws us to mates is the unexpected, the ways in which people surprise us. That said, it seemed like an interesting premise for a novel.”
Some authors aren’t just writing about AI, but openly working with it.
Earlier this year, journalist Stephen Marche used AI to write the novella “Death of an Author,” for which he drew upon everyone from Raymond Chandler to Haruki Murakami. Screenwriter and humorist Simon Rich collaborated with Brent Katz and Josh Morgenthau on “I Am Code,” a thriller in verse that came out this month and was generated by the AI program “code-davinci-002.” (Filmmaker Werner Herzog reads the audiobook edition.)
Osworth, who is trans, wanted to address comments by “Harry Potter” author J.K. Rowling that have offended many in the trans community, and to wrest from her the power of magic. At the same time, they worried the fictional AI in their book sounded too human, and decided AI should speak for AI.
Osworth devised a crude program, based on the writings of Machiavelli among others, that would turn out a more mechanical kind of voice.
“I like to say that ChatGPT is a Ferrari, while what I came up with is a skateboard with one square wheel. But I was much more interested in the skateboard with one square wheel,” they said.
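The article doesn’t describe how Osworth’s program actually worked. One common way to build a deliberately crude, mechanical-sounding text generator of the kind described is a word-level Markov chain over a source corpus. Here is a minimal sketch in Python, assuming a plain-text file of Machiavelli’s writings; the filename, model structure, and seed word are illustrative stand-ins, not Osworth’s actual code:

```python
import random
from collections import defaultdict

def build_bigram_model(text: str) -> dict[str, list[str]]:
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model: dict[str, list[str]], seed: str, length: int = 40) -> str:
    """Walk the chain from a seed word, picking each next word at random."""
    word, output = seed, [seed]
    for _ in range(length - 1):
        followers = model.get(word)
        if not followers:  # dead end: the current word never appears mid-corpus
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    # "machiavelli.txt" is a hypothetical stand-in for whatever corpus is used.
    with open("machiavelli.txt", encoding="utf-8") as f:
        model = build_bigram_model(f.read())
    print(generate(model, seed="The"))
```

Because a bigram chain remembers nothing beyond the previous word, its output drifts and stutters rather than reading as fluent prose, which is roughly the “square wheel” quality the quote describes.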
Michaels centers his new novel on a poet named Marian, in homage to poet Marianne Moore, and an AI program called Charlotte. He said the novel is about parenthood, labor, community, and “this technology’s implications for art, language and our sense of identity.”
Believing the spirit of “Do You Remember Being Born?” called for the presence of actual AI text, he devised a program that would generate prose and poetry, and uses an alternate format in the novel so readers know when he’s using AI.
In one passage, Marian is reviewing some of her collaboration with Charlotte.
“The preceding day’s work was a collection of glass cathedrals. I reread it with alarm. Turns of phrase I had mistaken for beautiful, which I now found unintelligible,” Michaels writes. “Charlotte had simply surprised me: I would propose a line, a portion of a line, and what the system spat back upended my expectations. I had been seduced by this surprise.”
And now AI speaks: “I had mistaken a fit of algorithmic exuberance for the truth.”
…
IPVM, a U.S.-based security and surveillance industry research group, says the Chinese surveillance equipment maker Dahua is selling cameras with what it calls a “skin color analytics” feature in Europe, raising human rights concerns.
In a report released on July 31, IPVM said “the company defended the analytics as being a ‘basic feature of a smart security solution.'” The report is behind a paywall, but IPVM provided a copy to VOA Mandarin.
Dahua’s ICC Open Platform guide for “human body characteristics” includes “skin color/complexion,” according to the report. In what Dahua calls a “data dictionary,” the company says that the “skin color types” that Dahua analytic tools would target are “yellow,” “black,” and “white.” VOA Mandarin verified this on Dahua’s Chinese website.
The IPVM report also says that skin color detection is mentioned in the “Personnel Control” category, a feature Dahua touts as part of its Smart Office Park solution intended to provide security for large corporate campuses in China.
Charles Rollet, co-author of the IPVM report, told VOA Mandarin by phone on August 1, “Basically what these video analytics do is that, if you turn them on, then the camera will automatically try and determine the skin color of whoever passes, whoever it captures in the video footage.
“So that means the camera is going to be guessing or attempting to determine whether the person in front of it … has black, white or yellow — in their words — skin color,” he added.
VOA Mandarin contacted Dahua for comment but did not receive a response.
The IPVM report said that Dahua is selling cameras with the skin color analytics feature in three European nations, each with a recent history of racial tension: Germany, France and the Netherlands.
‘Skin color is a basic feature’
Dahua said its skin tone analysis capability was an essential function in surveillance technology.
In a statement to IPVM, Dahua said, “The platform in question is entirely consistent with our commitments to not build solutions that target any single racial, ethnic, or national group. The ability to generally identify observable characteristics such as height, weight, hair and eye color, and general categories of skin color is a basic feature of a smart security solution.”
IPVM said the company had previously denied offering such a feature, and that skin color detection is uncommon in mainstream surveillance products.
In many Western nations, there has long been controversy over skin color-related errors in facial recognition surveillance technology. Identifying skin color in surveillance applications raises human rights and civil rights concerns.
“So it’s unusual to see it for skin color because it’s such a controversial and ethically fraught field,” Rollet said.
Anna Bacciarelli, technology manager at Human Rights Watch (HRW), told VOA Mandarin that Dahua technology should not contain skin tone analytics.
“All companies have a responsibility to respect human rights, and take steps to prevent or mitigate any human rights risks that may arise as a result of their actions,” she said in an email.
“Surveillance software with skin tone analytics poses a significant risk to the right to equality and non-discrimination, by allowing camera owners and operators to racially profile people at scale — likely without their knowledge, infringing privacy rights — and should simply not be created or sold in the first place.”
Dahua denied that its surveillance products are designed to enable racial identification. On the website of its U.S. subsidiary, Dahua says, “contrary to allegations that have been made by certain media outlets, Dahua Technology has not and never will develop solutions targeting any specific ethnic group.”
However, in February 2021, IPVM and the Los Angeles Times reported that Dahua provided a video surveillance system with “real-time Uyghur warnings” to the Chinese police that included eyebrow size, skin color and ethnicity.
IPVM’s 2018 statistical report shows that since 2016, Dahua and another Chinese video surveillance company, Hikvision, have won contracts worth $1 billion from the government of China’s Xinjiang region, a center of Uyghur life.
The U.S. Federal Communications Commission determined in 2022 that the products of Chinese technology companies with close ties to Beijing, such as Dahua and Hikvision, posed a threat to U.S. national security.
The FCC banned sales of these companies’ products in the U.S. “for the purpose of public safety, security of government facilities, physical security surveillance of critical infrastructure, and other national security purposes,” but not for other purposes.
Before the U.S. sales bans, Hikvision and Dahua ranked first and second among global surveillance and access control firms, according to The China Project.
‘No place in a liberal democracy’
On June 14, the European Parliament adopted amendments to the draft EU Artificial Intelligence Act, a step toward banning the use of facial recognition systems in public places.
“We know facial recognition for mass surveillance from China; this technology has no place in a liberal democracy,” Svenja Hahn, a German member of the European Parliament and Renew Europe Group, told Politico.
Bacciarelli of HRW said in an email she “would seriously doubt such racial profiling technology is legal under EU data protection and other laws. The General Data Protection Regulation, a European Union regulation on information privacy, limits the collection and processing of sensitive personal data, including personal data revealing racial or ethnic origin and biometric data, under Article 9. Companies need to make a valid, lawful case to process sensitive personal data before deployment.”
“The current text of the draft EU AI Act bans intrusive and discriminatory biometric surveillance tech, including real-time biometric surveillance systems; biometric systems that use sensitive characteristics, including race and ethnicity data; and indiscriminate scraping of CCTV data to create facial recognition databases,” she said.
In Western countries, companies are developing AI software for identifying race primarily as a marketing tool for selling to diverse consumer populations.
The Wall Street Journal reported in 2020 that American cosmetics company Revlon had used recognition software from AI start-up Kairos to analyze how consumers of different ethnic groups use cosmetics, raising concerns among researchers that racial recognition could lead to discrimination.
The U.S. government has long prohibited sectors such as healthcare and banking from discriminating against customers based on race. IBM, Google and Microsoft have restricted the provision of facial recognition services to law enforcement.
Twenty-four state, county and municipal governments in the U.S. have prohibited government agencies from using facial recognition surveillance technology. New York City, Baltimore, and Portland, Oregon, have even restricted the use of facial recognition in the private sector.
Some civil rights activists have argued that racial identification technology is error-prone and could have adverse consequences for those being monitored.
Rollet said, “If the camera is filming at night or if there are shadows, it can misclassify people.”
Caitlin Chin is a fellow at the Center for Strategic and International Studies, a Washington think tank where she researches technology regulation in the United States and abroad. She emphasized that while Western technology companies mainly use facial recognition for business, Chinese technology companies are often happy to assist government agencies in monitoring the public.
She told VOA Mandarin in an August 1 video call, “So this is something that’s both very dehumanizing but also very concerning from a human rights perspective, in part because if there are any errors in this technology that could lead to false arrests, it could lead to discrimination, but also because the ability to sort people by skin color on its own almost inevitably leads to people being discriminated against.”
She also said that in general, especially when it comes to law enforcement and surveillance, people with darker skin have been disproportionately tracked and disproportionately surveilled, “so these Dahua cameras make it easier for people to do that by sorting people by skin color.”
…