
Sudan Declares State of Emergency After Apparent Coup

The pro-democracy Sudanese Professionals Association called on citizens to take to the streets to “resist” the military coup

Ukraine Remains on the EU’s “Green” Travel List This Week – Radio Svoboda Journalist

On October 21, Malta banned entry for travelers from Ukraine due to a sharp spike in COVID-19 cases

Whistleblower Haugen to Testify as UK Scrutinizes Facebook

Former Facebook data scientist turned whistleblower Frances Haugen plans to answer questions Monday from lawmakers in the United Kingdom who are working on legislation to rein in the power of social media companies. 

Haugen is set to appear before a parliamentary committee scrutinizing the British government’s draft legislation to crack down on harmful online content, and her comments could help lawmakers beef up the new rules. She’s testifying the same day that Facebook is set to release its latest earnings and that The Associated Press and other news organizations started publishing stories based on thousands of pages of internal company documents she obtained. 

It will be her second appearance before lawmakers after she testified in the U.S. Senate earlier this month about the danger she says the company poses, from harming children to inciting political violence and fueling misinformation. Haugen cited internal research documents she secretly copied before leaving her job in Facebook’s civic integrity unit. 

The documents, which Haugen provided to the U.S. Securities and Exchange Commission, allege Facebook prioritized profits over safety and hid its own research from investors and the public. Some stories based on the files have already been published, exposing internal turmoil after Facebook was blindsided by the Jan. 6 U.S. Capitol riot and how it dithered over curbing divisive content in India, and more is to come. 

Facebook CEO Mark Zuckerberg has disputed Haugen’s portrayal of the company as one that puts profit over the well-being of its users or that pushes divisive content, saying a false picture is being painted. But he does agree on the need for updated internet regulations, saying lawmakers are best able to assess the tradeoffs.

Haugen told U.S. lawmakers that she thinks a federal regulator is needed to oversee digital giants like Facebook, something that officials in Britain and the European Union are already working on. 

The U.K. government’s online safety bill calls for setting up a regulator that would hold companies to account when it comes to removing harmful or illegal content from their platforms, such as terrorist material or child sex abuse images. 

“This is quite a big moment,” Damian Collins, the lawmaker who chairs the committee, said ahead of the hearing. “This is a moment, sort of like Cambridge Analytica, but possibly bigger in that I think it provides a real window into the soul of these companies.” 

Collins was referring to the 2018 debacle involving data-mining firm Cambridge Analytica, which gathered details on as many as 87 million Facebook users without their permission.

Representatives from Facebook and other social media companies plan to speak to the committee Thursday. 

Ahead of the hearing, Haugen met the father of Molly Russell, a 14-year-old girl who killed herself in 2017 after viewing disturbing content on Facebook-owned Instagram. In a chat filmed by the BBC, Ian Russell told Haugen that after Molly’s death, her family found notes she wrote about being addicted to Instagram.

Haugen also is scheduled to meet next month with European Union officials in Brussels, where the bloc’s executive commission is updating its digital rulebook to better protect internet users by holding online companies more responsible for illegal or dangerous content. 

Under the U.K. rules, expected to take effect next year, Silicon Valley giants face an ultimate penalty of up to 10% of their global revenue for any violations. The EU is proposing a similar penalty. 

The U.K. committee will be hoping to hear more from Haugen about the data that tech companies have gathered. Collins said the internal files that Haugen has turned over to U.S. authorities are important because they show the kind of information that Facebook holds — and what regulators should be asking when they investigate these companies.

The committee has already heard from another Facebook whistleblower, Sophie Zhang, who raised the alarm after finding evidence of online political manipulation in countries such as Honduras and Azerbaijan before she was fired.

Facebook’s Language Gaps Weaken Screening of Hate, Terrorism

In Gaza and Syria, journalists and activists feel Facebook censors their speech, flagging inoffensive Arabic posts as terrorist content. In India and Myanmar, political groups use Facebook to incite violence. All of it frequently slips through the company’s efforts to police its social media platforms because of a shortage of moderators who speak local languages and understand cultural contexts.

Internal company documents from the former Facebook product manager-turned-whistleblower Frances Haugen show the problems plaguing the company’s content moderation are systemic, and that Facebook has understood the depth of these failings for years while doing little about it.

The company has failed to develop artificial-intelligence solutions that can catch harmful content in the different languages used on its platforms. As a result, terrorist content and hate speech proliferate in some of the world’s most volatile regions. Elsewhere, the company’s language gaps lead to overzealous policing of everyday expression.

This story, along with others published Monday, is based on former Facebook product manager-turned-whistleblower Frances Haugen’s disclosures to the Securities and Exchange Commission, which were also provided to Congress in redacted form by her legal team. The redacted versions received by Congress were obtained by a consortium of news organizations, including The Associated Press.

In a statement to the AP, a Facebook spokesperson said that over the last two years the company has invested in recruiting more staff with local dialect and topic expertise to bolster its review capacity globally.

When it comes to Arabic content moderation, in particular, the company said, “We still have more work to do.”

But the documents show the problems are not limited to Arabic. In Myanmar, where Facebook-based misinformation has been linked repeatedly to ethnic violence, the company’s internal reports show it failed to stop the spread of hate speech targeting the minority Rohingya Muslim population.

In India, the documents show moderators never flagged anti-Muslim hate speech broadcast by Prime Minister Narendra Modi’s far-right Hindu nationalist group because Facebook lacked moderators and automated filters with knowledge of Hindi and Bengali.

Arabic, Facebook’s third-most common language, does pose particular challenges to the company’s automated systems and human moderators, each of which struggles to understand spoken dialects unique to each country and region, their vocabularies salted with different historical influences and cultural contexts. The platform won a vast following across the region amid the 2011 Arab Spring, but its reputation as a forum for free expression in a region full of autocratic governments has since changed.

Scores of Palestinian journalists have had their accounts deleted. Archives of the Syrian civil war have disappeared. During the 11-day Gaza war last May, Facebook’s Instagram app briefly banned the hashtag #AlAqsa, a reference to the Al-Aqsa Mosque in Jerusalem’s Old City, a flashpoint of the conflict. The company later apologized, saying it had mistaken Islam’s third-holiest site for a terrorist group.

Criticism, satire and even simple mentions of groups on the company’s Dangerous Individuals and Organizations list — a docket modeled on the U.S. government equivalent — are grounds for a takedown.

“We were incorrectly enforcing counterterrorism content in Arabic,” one document reads, noting the system “limits users from participating in political speech, impeding their right to freedom of expression.”

The Facebook blacklist includes Gaza’s ruling Hamas party, as well as Hezbollah, the militant group that holds seats in Lebanon’s Parliament, along with many other groups representing wide swaths of people and territory across the Middle East.

The company’s language gaps and biases have led to the widespread perception that its reviewers skew in favor of governments and against minority groups. 

Israeli security agencies and watchdogs also monitor Facebook and bombard it with thousands of orders to take down Palestinian accounts and posts as they try to crack down on incitement.

“They flood our system, completely overpowering it,” said Ashraf Zeitoon, Facebook’s former head of policy for the Middle East and North Africa region, who left in 2017.

Syrian journalists and activists reporting on the country’s opposition also have complained of censorship, with electronic armies supporting embattled President Bashar Assad aggressively flagging dissident posts for removal. 

Meanwhile in Afghanistan, Facebook does not translate the site’s hate speech and misinformation pages into Dari and Pashto, the country’s two main languages. The site also doesn’t have a bank of hate speech terms and slurs in Afghanistan, so it can’t build automated filters that catch the worst violations.

In the Philippines, homeland of many domestic workers in the Middle East, Facebook documents show that engineers struggled to detect reports of abuse by employers because the company couldn’t flag words in Tagalog, the major Philippine language.

In the Middle East, the company over-relies on artificial-intelligence filters that make mistakes, leading to “a lot of false positives and a media backlash,” one document reads. Largely unskilled moderators, in over their heads and at times relying on Google Translate, tend to passively field takedown requests instead of screening proactively. Most are Moroccans and get lost in the translation of Arabic’s 30-odd dialects.

The moderators flag inoffensive Arabic posts as terrorist content 77% of the time, one report said.

Although the documents from Haugen predate this year’s Gaza war, episodes from that bloody conflict show how little has been done to address the problems flagged in Facebook’s own internal reports.

Activists in Gaza and the West Bank lost their ability to livestream. Whole archives of the conflict vanished from newsfeeds, a primary portal of information. Influencers accustomed to tens of thousands of likes on their posts saw their outreach plummet when they posted about Palestinians.

“This has restrained me and prevented me from feeling free to publish what I want,” said Soliman Hijjy, a Gaza-based journalist.

Palestinian advocates submitted hundreds of complaints to Facebook during the war, often leading the company to concede error. In the internal documents, Facebook reported it had erred in nearly half of all Arabic language takedown requests submitted for appeal.

Facebook’s internal documents also stressed the need to enlist more Arab moderators from less-represented countries and restrict them to where they have appropriate dialect expertise.

“It is surely of the highest importance to put more resources to the task of improving Arabic systems,” said the report.

Meanwhile, many across the Middle East worry the stakes of Facebook’s failings are exceptionally high, with potential to widen long-standing inequality, chill civic activism and stoke violence in the region.

“We told Facebook: Do you want people to convey their experiences on social platforms, or do you want to shut them down?” said Husam Zomlot, the Palestinian envoy to the United Kingdom. “If you take away people’s voices, the alternatives will be uglier.” 

Occupied Crimea: 21 People Detained Outside Court Building in Simferopol

According to the Crimean Solidarity association, the detainees were taken to the police station of the city’s Central district. No other details are available for now

Crimea: Russian Security Forces Detain Crimean Tatars Outside Court Building in Simferopol

Russian police demanded that those present disperse, after which OMON riot police officers began putting activists, including women, onto a bus

IS Attacks on Afghan Shiites Are Crimes Against Humanity – HRW

Members of the Shiite Hazara minority have faced long-standing discrimination and persecution in the predominantly Sunni country

More Than 6,000 Migrants Have Crossed From Belarus Into Germany Since August – DW

In the past two days alone, about 500 migrants crossed the Polish-German border in search of asylum

Morning Headlines: The Investigation Into Halyna Hutchins’ Death and “Homeless Nationalities”

Ukraine’s Foreign Ministry responded to Moscow’s claim of a “provocation” over plans to open communication points near occupied Crimea

Most Ministers and Civilian Officials in Sudan’s Government Arrested – Ministry

Among the government members detained by the military are Prime Minister Abdalla Hamdok and at least four other ministers

Zerkal: Nord Stream 2 Certification Requirements Are a “Very Sensitive” Issue for Putin

“If Russia has a market, the Russian Federation will no longer be able to dictate prices,” says the minister’s adviser

Ukraine’s Gas Transmission System Carried 20% Less Than Planned in October Because of Russia’s Actions – Zerkal

“I think the contract that Gazprom concluded with Hungary will be challenged”

Patriarch Bartholomew “Feels Well” and Is Expected to Be Discharged in the Morning – Archdiocese

The Ecumenical Patriarch is expected to be discharged from the hospital in the morning

Husband of Aid Worker Imprisoned in Iran Declares Second Hunger Strike to Demand Her Release

As Nazanin Zaghari-Ratcliffe’s five-year prison sentence was nearing its end, the Iranian authorities sentenced her to another year in jail

Italian Lab Creates Extreme Weather; Could Predict Climate Change Effects

Researchers at a specialized lab in Italy say understanding climate change effects requires recreating them in a controlled environment. So, they built one. VOA’s Arash Arabasadi has more.

High Gas Prices Will Affect Next Year’s Harvest – Zerkal

“To begin all the agricultural work, you also need fertilizers. And fertilizers are made from gas”

Blocked Access to Key Nuclear Facility in Iran Threatens Monitoring Program – IAEA

Iran has given the IAEA access to most of its cameras, except for those located at a key facility where centrifuge components are made

US Diplomat Urges North Korea to Halt Missile Tests and Return to Talks

The meeting came days after North Korea carried out a test of a submarine-launched ballistic missile

Foreign Ministry: Ukraine Expects the UN to Join the “Crimean Platform”

Ukraine is counting on a constructive approach from the UN leadership to countering Russia’s military aggression against Ukraine – Foreign Ministry

“They Must Be Up to Something” – Zerkal on Putin’s Remark That Ukraine’s Gas Transmission System Could “Burst”

Lana Zerkal stressed that the number of accidents on Ukraine’s gas transmission system cannot even be compared with those on the Russian route

US State Department Classifies Russians Seeking Visas as “Homeless Nationalities”

Russians are advised to apply for visas at US consular facilities in Warsaw

Serbia Bars Boxers From Kosovo From Competing in the World Championship It Is Hosting

The Kosovo Olympic Committee reported the day before that the athletes had twice been denied permission to cross the border

Zoom Gets More Popular Despite Worries About Links to China

Very few companies can boast of having their name also used as a verb. Zoom is one of them. The popularity of the videoconferencing platform continues to grow around the world despite continued questions about whether Chinese authorities are monitoring the calls.

Since Zoom became a household word last year during the pandemic, internet users including companies and government agencies have asked whether the app’s data centers and staff in China are passing call logs to Chinese authorities.

“Some of the more informed know about that, but the vast majority, they don’t know about that, or even if they do, they really don’t give much thought about it,” said Jack Nguyen, partner at the business advisory firm Mazars in Ho Chi Minh City.

He said in Vietnam, for example, many people resent China over territorial spats, but Vietnamese tend to Zoom as willingly as they sign on to rivals such as Microsoft Teams. They like Zoom’s free 40 minutes per call, said Nguyen.

Whether to use the Silicon Valley-headquartered Zoom, now as before, comes down to a user-by-user calculation of the service’s benefits versus the possibility that call logs are being viewed in China, analysts say. China hopes to identify and stop internet content that flouts Communist Party interests.

The 10-year-old listed company, officially named Zoom Video Communications, reported over $1 billion in revenue in the April-June quarter this year, up 54% over the same quarter of 2020, when the COVID-19 pandemic drove face-to-face meetings online. In the same quarter, the most recent one detailed by the company, Zoom had 504,900 customers with more than 10 employees, up about 36% year on year.

Zoom commanded a 42.8% U.S. market share, leading competitors, as of May 2020, the news website LearnBonds reported. Its U.S. share was up to 55% by March this year, according to ToolTester Network data.

Tech media cite Zoom’s free 40 minutes and capacity for up to 100 call participants as major reasons for its popularity.

Links to China?

Keys that Zoom uses to encrypt and decrypt meetings may be sent to servers in China, Wired Business Media’s website Security Week has reported. Some encryption keys were issued by servers in China, news website WCCF Tech said.

Zoom did not answer VOA’s requests this month for comment.

Zoom has acknowledged keeping at least one data center and a staff employee in China, where the communist government requires resident tech firms to provide user data on request. In September 2019, the Chinese government turned off Zoom in China, and in April last year Zoom said international calls were routed in error through a China-based data center.

“Odds are high” of China getting records of Zoom calls, said Jacob Helberg, a senior adviser at the Stanford University Center on Geopolitics and Technology.

“If you have Zoom engineers in China who have access to the actual servers, from an engineering standpoint those engineers can absolutely have access to content of potential communications in China,” he said.

Zoom said in a statement in early April 2020 that certain meetings held by its non-Chinese users might have been “allowed to connect to systems in China, where they should not have been able to connect,” SmarterAnalyst.com reported.

Excitement and caution

Zoom said in 2019 it had put in place “strict geo-fencing procedures around our mainland China data center.”

“No meeting content will ever be routed through our mainland China data center unless the meeting includes a participant from China,” it said in a blog post.

Among the bigger users of Zoom is the University of California, a 10-campus system that switched to online learning in early 2020. Zoom was selected following a request for proposals “years” before the pandemic, a UC-Berkeley spokesperson told VOA on Thursday.

Elsewhere in the United States, NASA has banned employees from using Zoom, and the Senate has urged its members to avoid it because of security concerns. The German Foreign Ministry and Australian Defense Force restrict use as well, while Taiwan barred Zoom for government business last year. China claims sovereignty over self-ruled Taiwan, which has caused decades of political hostility.

“For Taiwan, there’s still some doubt,” said Brady Wang, a Taipei analyst with the market intelligence firm Counterpoint Research, referring particularly to Zoom’s encryption software. “And in the final analysis, these kinds of choices are numerous, so it’s not like you must rely on Zoom.”

LinkedIn’s withdrawal from China announced this month may spark new scrutiny over Zoom, said Zennon Kapron, founder and director of Kapronasia, a Shanghai financial industry research firm.

“I think when you look at the other technology players that are currently in China or that have relations to China such as Zoom, there will be a renewed push probably by consumers, businesses and even regulators in some jurisdictions to really try to understand and pry apart what the roles of Chinese suppliers or development houses are in developing some of these platforms and the potential security risks that go with them,” Kapron said.

Facebook Dithered in Curbing Divisive User Content in India

Facebook in India has been selective in curbing hate speech, misinformation and inflammatory posts, particularly anti-Muslim content, according to leaked documents obtained by The Associated Press, even as its own employees cast doubt over the company’s motivations and interests.

From research as recent as March of this year to company memos that date back to 2019, the internal company documents on India highlight Facebook’s constant struggles in quashing abusive content on its platforms in the world’s biggest democracy and the company’s largest growth market. Communal and religious tensions in India have a history of boiling over on social media and stoking violence.

The files show that Facebook has been aware of the problems for years, raising questions over whether it has done enough to address these issues. Many critics and digital experts say it has failed to do so, especially in cases where members of Prime Minister Narendra Modi’s ruling Bharatiya Janata Party, the BJP, are involved.

Modi has been credited for leveraging the platform to his party’s advantage during elections, and reporting from The Wall Street Journal last year cast doubt over whether Facebook was selectively enforcing its policies on hate speech to avoid blowback from the BJP. Both Modi and Facebook chairman and CEO Mark Zuckerberg have exuded bonhomie, memorialized by a 2015 image of the two hugging at Facebook headquarters.

According to the documents, Facebook saw India as one of the most “at risk countries” in the world and identified both Hindi and Bengali languages as priorities for “automation on violating hostile speech.” Yet, Facebook didn’t have enough local language moderators or content-flagging in place to stop misinformation that at times led to real-world violence.

In a statement to the AP, Facebook said it has “invested significantly in technology to find hate speech in various languages, including Hindi and Bengali” which has “reduced the amount of hate speech that people see by half” in 2021. 

“Hate speech against marginalized groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online,” a company spokesperson said. 

This AP story, along with others being published, is based on disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by former Facebook employee-turned-whistleblower Frances Haugen’s legal counsel. The redacted versions were obtained by a consortium of news organizations, including the AP.

In February 2019 and ahead of a general election when concerns about misinformation were running high, a Facebook employee wanted to understand what a new user in the country saw on their news feed if all they did was follow pages and groups solely recommended by the platform.

The employee created a test user account and kept it live for three weeks, during which an extraordinary event shook India — a militant attack in disputed Kashmir killed more than 40 Indian soldiers, bringing the country to near war with rival Pakistan.

In a report, titled “An Indian Test User’s Descent into a Sea of Polarizing, Nationalistic Messages,” the employee, whose name is redacted, said they were shocked by the content flooding the news feed, which “has become a near constant barrage of polarizing nationalist content, misinformation, and violence and gore.”

Seemingly benign and innocuous groups recommended by Facebook quickly morphed into something else altogether, where hate speech, unverified rumors and viral content ran rampant.

The recommended groups were inundated with fake news, anti-Pakistan rhetoric and Islamophobic content. Much of the content was extremely graphic.

“Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the researcher wrote.

The Facebook spokesperson said the test study “inspired deeper, more rigorous analysis” of its recommendation systems and “contributed to product changes to improve them.”

“Separately, our work on curbing hate speech continues and we have further strengthened our hate classifiers, to include four Indian languages,” the spokesperson said.

Other research files on misinformation in India highlight just how massive a problem it is for the platform.

In January 2019, a month before the test user experiment, another assessment raised similar alarms about misleading content. 

In a presentation circulated to employees, the findings concluded that Facebook’s misinformation tags weren’t clear enough for users, underscoring that it needed to do more to stem hate speech and fake news. Users told researchers that “clearly labeling information would make their lives easier.”

Alongside misinformation, the leaked documents reveal another problem dogging Facebook in India: anti-Muslim propaganda, especially by Hindu-hardline groups.

India is Facebook’s largest market with over 340 million users — nearly 400 million Indians also use the company’s messaging service WhatsApp. But both have been accused of being vehicles to spread hate speech and fake news against minorities.

In February 2020, these tensions came to life on Facebook when a politician from Modi’s party uploaded a video on the platform in which he called on his supporters to remove mostly Muslim protesters from a road in New Delhi if the police didn’t. Violent riots erupted within hours, killing 53 people. Most of them were Muslims. Only after thousands of views and shares did Facebook remove the video.

In April, misinformation targeting Muslims again went viral on its platform as the hashtag “Coronajihad” flooded news feeds, blaming the community for a surge in COVID-19 cases. The hashtag was popular on Facebook for days but was later removed by the company.

The misinformation triggered a wave of violence, business boycotts and hate speech toward Muslims.

Criticisms of Facebook’s handling of such content were amplified in August of last year when The Wall Street Journal published a series of stories detailing how the company had internally debated whether to classify a Hindu hard-line lawmaker close to Modi’s party as a “dangerous individual” — a classification that would ban him from the platform — after a series of anti-Muslim posts from his account.

The documents also show how the company’s South Asia policy head herself had shared what many felt were Islamophobic posts on her personal Facebook profile. 

Months later the India Facebook official quit the company. Facebook also removed the politician from the platform, but documents show many company employees felt the platform had mishandled the situation, accusing it of selective bias to avoid being in the crosshairs of the Indian government.

As recently as March this year, the company was internally debating whether it could control, on its platform, the “fear mongering, anti-Muslim narratives” pushed by Rashtriya Swayamsevak Sangh, a far-right Hindu nationalist group to which Modi also belongs.

In one document titled “Lotus Mahal,” the company noted that members with links to the BJP had created multiple Facebook accounts to amplify anti-Muslim content.

The research found that much of this content was “never flagged or actioned” since Facebook lacked “classifiers” and “moderators” in Hindi and Bengali languages. 

Facebook said it added hate speech classifiers in Hindi starting in 2018 and introduced Bengali in 2020.

Russian Hybrid Forces Stop Blocking the OSCE SMM in Donetsk – Social Media

There have been no statements from the OSCE itself about any interaction with the Russian hybrid administration

Russian Billionaire Who Demanded Removal From List of Putin Associates Dies in US

Russian physicist and businessman Valentin Gapontsev has died in the United States in his 83rd year. Forbes estimated his fortune at $2.8 billion.

Gapontsev made his business in producing and selling industrial lasers. IPG Photonics, the company he founded, is one of the world’s largest manufacturers of industrial lasers, with a market capitalization of $8.9 billion.

In January 2018, Gapontsev, who by then also held US citizenship, was included in the US Treasury Department’s report on people close to Putin. After he appeared on the list, the company he founded lost $3 billion in value, and he himself lost nearly half of his $3 billion fortune.

In December of that year, the businessman went to court to demand his removal from the list. His lawyers argued that he could not be called an oligarch and that most of his business had never been tied to Russia. He remains the only Russian to have secured removal from the list.

Six months later, the lawsuit was withdrawn and the businessman reached a settlement with the Treasury Department. According to Forbes, he had been battling cancer in recent years.

“Answer: I Didn’t Go. Even If You Did” – Russian Poet Denied Entry to Ukraine Over Visiting Crimea

Earlier on Saturday, Ukraine’s State Border Guard Service said that a 25-year-old Russian citizen who arrived in Kyiv from Spain was barred from entering Ukraine over an illegal visit to annexed Crimea

Shoigu Responds to German Defense Minister’s Statement on “Deterrence”

Germany’s Defense Ministry says NATO will adapt to Russia’s current behavior
