
Fighting extremist content online: Feds spend $1.9 million on terrorism analysis tool – National


The federal government is providing new funding to continue developing an automated tool to find and flag terrorist content online.

In a press release published Tuesday night, the public safety department detailed a three-year, $1.9-million investment in funding "to combat terrorist and violent extremist content online."

"We need to confront the rise of hatred and violent extremism," Prime Minister Justin Trudeau said in a tweet on Tuesday.

"At the Christchurch Call Summit, I announced that Canada will fund a new tool to help small and medium-sized online platforms better identify and counter content related to terrorism and violent extremism."

Read more:

How close is too close to the far right? Why some experts are worried about Canada's MPs

The tool Trudeau refers to is the Terrorism Content Analysis Platform.


Created in 2020 by Tech Against Terrorism, a United Nations-backed counter-terrorism initiative, the tool scours different corners of the internet for terrorist content and flags it for tech companies worldwide to review and, if they choose, remove.

The tool's creation was funded by Public Safety Canada through the Community Resilience Fund. However, although its funding supports the platform, the government has limited involvement in the work that TCAP does, according to the tool's website.

How does the Terrorism Content Analysis Platform (TCAP) work?

Usually, terrorists share their content on “smaller platforms” first, according to Adam Hadley, executive director of Tech Against Terrorism.

"Unfortunately, smaller platforms also tend to have limited capacity to tackle terrorist use of their services," he explained to Global News in an emailed statement.

"With TCAP, we can quickly alert smaller platforms to this content and thus stop it from spreading across the internet before it goes viral."


TCAP starts with a group of open-source intelligence (OSINT) analysts, who determine which platforms terrorist entities favour. The group then identifies links to smaller terrorist-run websites and social media platforms where their content is hosted, and uploads those links to TCAP.

An automated crawler also extracts data from the platforms identified by the OSINT analysts, uploading relevant links to TCAP.

Click to play video: 'Terrorism survivor looks to a future of helping others' – September 12, 2022

Once links are uploaded to TCAP, they are verified and sorted by the terrorist organization responsible. If a verified link is on a platform registered with TCAP, the tech company receives an automatic alert and can decide whether to moderate the content. TCAP also monitors what the platforms decide to do with the flagged content.

As a final step, TCAP caches the content it collects for what its website describes as “academic and human rights purposes”. While it’s not yet available, the tool’s repository will eventually be open to researchers and scholars.
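The workflow described above, where analysts and a crawler collect links, TCAP verifies and attributes them, registered platforms receive automatic alerts, and everything is archived, can be sketched in broad strokes. TCAP's actual implementation is not public, so every name and data structure below is an illustrative assumption, not the real system:

```python
from dataclasses import dataclass

# Hypothetical sketch of a TCAP-style alert pipeline; all names and
# structures here are assumptions made for illustration only.

@dataclass
class Submission:
    url: str
    entity: str          # terrorist entity the content is attributed to
    verified: bool = False

class AlertPipeline:
    def __init__(self, registered_platforms):
        self.registered = set(registered_platforms)
        self.archive = []   # content cached for research/accountability
        self.alerts = []    # alerts sent to registered platforms

    def verify(self, sub: Submission) -> bool:
        # Stand-in for verification against designated terrorist
        # organizations; here we only require a known entity name.
        sub.verified = bool(sub.entity)
        return sub.verified

    def platform_of(self, url: str) -> str:
        # Crude host extraction, sufficient for this sketch.
        return url.split("/")[2]

    def process(self, sub: Submission):
        if not self.verify(sub):
            return
        self.archive.append(sub)          # archived regardless of platform
        platform = self.platform_of(sub.url)
        if platform in self.registered:   # alert only registered platforms
            self.alerts.append((platform, sub.url, sub.entity))

pipeline = AlertPipeline(registered_platforms={"smallforum.example"})
pipeline.process(Submission("https://smallforum.example/post/1", "Daesh"))
pipeline.process(Submission("https://other.example/post/2", "Daesh"))
print(len(pipeline.alerts), len(pipeline.archive))  # prints: 1 2
```

The key design point mirrored from the article: archiving happens for all verified content, while alerts go only to platforms that have registered, which is why unregistered platforms can host flagged content without ever being notified.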


So far, the TCAP tool has sent about 20,000 alerts to 72 different platforms, according to its website.

The alerts dealt with a total of 34 different terrorist entities.

In its latest transparency report, which covers the period from December 2020 to November 2021, Tech Against Terrorism says 94% of the content its TCAP tool flags to tech platforms is ultimately taken down.

However, takedowns did not occur equally across different terrorist groups. On average, tech companies that received an alert about Islamist terrorist content took down 94% of the content flagged to them.

For far-right terrorist content, however, the removal rate after an alert was only 50%.

On top of that, far-right material was submitted to TCAP at a much lower rate. While 18,787 submissions were made for Islamist terrorist content, resulting in 10,959 alerts being sent out, only 170 submissions were made for far-right terrorist content, resulting in 115 alerts.
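A quick back-of-the-envelope calculation, using only the figures quoted above from the transparency report (the "resulted in an alert" framing is mine, not the report's), makes the disparity concrete:

```python
# Figures as reported for Tech Against Terrorism's Dec. 2020 - Nov. 2021
# transparency report.
islamist = {"submissions": 18_787, "alerts": 10_959}
far_right = {"submissions": 170, "alerts": 115}

for name, d in (("Islamist", islamist), ("Far-right", far_right)):
    share = d["alerts"] / d["submissions"] * 100
    print(f"{name}: {d['alerts']:,} alerts from {d['submissions']:,} "
          f"submissions ({share:.0f}% resulted in an alert)")
```

The share of submissions that cleared verification and produced an alert is broadly similar for both categories (roughly 58% versus 68%); the gap lies in the sheer volume of far-right submissions and in the 50% post-alert removal rate.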


Read more:

Some of the truck convoy organizers have a history of white nationalism, racism

Part of the reason for the much lower submission rates could be the rigorous verification procedures the TCAP tool applies. To be considered for an alert, content must be tied to a designated terrorist organization, an official classification made in Canada under the Criminal Code.

Canada only started adding right-wing extremist groups to its list of outlawed terrorist organizations in 2019, when it added Blood & Honour and Combat 18.

"We follow the designation of further violent far-right organizations closely and will include any newly designated organizations in TCAP as soon as they are listed by national and democratic institutions, since the platform is based on legal designation," Hadley said.

"We believe that the major democracies need to do more to designate far-right violent extremist organizations, groups and individuals."

Debating the effectiveness of automatic flagging

According to Hadley, part of the goal of the latest round of funding is to help TCAP ramp up efforts to archive the content it flags.


Part of the funding from Canada, he said, will “ensure that content referrals are auditable and accountable by providing access to original content after the referral is taken down”.

According to JM Berger, a writer and researcher focused on extremism and the author of four critically acclaimed books, archiving flagged content is one of the key steps in this process.

"Some organized effort is needed to archive extremist content that is taken down, which is one of the functions of TCAP," he told Global News.

"This archive is not only important for prosecutions and research, but it is a necessary component in any effort to examine how tech companies approach takedown requests."

As things stand, the current takedown regime is "quite lackluster," Berger said.

“The Archives may allow for some of the first steps towards accountability, but more needs to be done.”

Click to play video: 'Canada adds 13 entities, including Proud Boys, to terror list' – February 3, 2021

However, not everyone believes that automation is the best way to manage terrorist content online, including Stephanie Carvin, a former CSIS analyst who now teaches at Carleton University.


“I’m not necessarily against it,” Carvin said of the TCAP tool.

However, she thinks tech companies should take greater initiative in tackling the content on their own platforms rather than relying on automated tools.

"The reality is you have a far-right problem that's going to have to be resolved with the (tech) companies themselves."

Several major tech companies have taken steps recently to crack down on material from white supremacists and far-right militias.

According to Reuters, in 2021 a number of U.S. technology companies, including Twitter, Alphabet, Microsoft and Meta (still known as Facebook at the time of the announcement), began contributing to the Global Internet Forum to Counter Terrorism (GIFCT) database.

This allows them to share their data to better identify and remove extremist content across different platforms.

However, despite efforts from tech companies and TCAP, there's still a risk that some far-right content will slip through the cracks, given how quickly far-right icons and memes change compared with the content of Islamist terrorist groups such as Daesh.

"When you have groups like Daesh using their flags and things like that … they're using some kind of visual," Carvin explained.


“But the issue on the right, for example, I think the primary concern of the Canadian government is that memes and content change very quickly.”

Read more:

Canada adds neo-Nazi groups Blood & Honor, Combat 18 to list of terrorist organizations

Meanwhile, the Canadian government said providing protections online is a "central part" of its efforts to keep Canadians "safe," according to a statement to Global News from Audrey Champoux, a spokesperson for Canada's public safety minister.

"We must confront the proliferation of hate, misinformation and disinformation, as well as violent extremism, which are often amplified and spread online and can have consequences in the real world."

– with files from Reuters

© 2022 Global News, a division of Corus Entertainment Inc.




