WASHINGTON – With tech giants cracking down on terrorist content, lawmakers and experts
fear extremists may simply move to smaller platforms to continue spreading their propaganda
and recruiting new members.

“We have been successful in taking [terrorist content] off Twitter, but that doesn’t eliminate
terrorists moving to other platforms,” Twitter’s Public Policy Director, Carlos Monje, said at a
Senate committee hearing on social media and terrorist content on Wednesday.
“If I were a terrorist, I might move to smaller platforms in the future,” Monje said.

Syracuse University Professor Nina Brown, who has conducted extensive research on terrorists’
use of social media networks, said that because tech companies have no legal obligation to detect
and deter terrorist content, their actions are driven by a corporate and social responsibility to
protect their users.

“There is tremendous pressure [on these companies] placed by the public, especially in response
to the attacks that have come from people who have been radicalized online,” Brown said.
Not only are smaller platforms unlikely to face that pressure, they also lack the resources that
the larger companies can invest in anti-terrorism research. “Smaller platforms are going to have
fewer resources to develop those algorithms than Facebook, Twitter, and Google do,” Brown
said.

Looking to future prevention, Sen. Jerry Moran, R-Kan., asked the Twitter, Facebook, and
YouTube representatives at the hearing if they had any plans to prevent terrorists from moving to
different, smaller platforms.

Monika Bickert, Head of Product Policy and Counterterrorism at Facebook, said that Facebook,
along with the other tech giants, plans to ensure the crackdown extends industry-wide.

Bickert, Monje and YouTube’s Public Policy Director, Juniper Downs, described the Global
Internet Forum to Counter Terrorism (GIFCT), a multi-company effort to combat extremist
content and terrorist recruitment through social media. The initiative, created in December 2016
and launched in June 2017, currently comprises 12 companies, among them Facebook,
Microsoft, Twitter, and YouTube.

GIFCT maintains a database through which companies share “hashes,” or digital fingerprints, of
terrorist content. A company feeds a piece of terrorist content, e.g., a recruitment video or
propaganda message, into a cryptographic hash function, which generates a “hash” that is
typically displayed as a hexadecimal number. This makes it easier for companies to detect
terrorist content before it is even published, if another company has already added the same
content’s hash to the database.
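As a rough sketch of that hashing step, assuming an off-the-shelf cryptographic hash such as
SHA-256 (GIFCT’s actual tooling is not public, and the file name below is hypothetical):

```python
import hashlib

def fingerprint(path: str) -> str:
    """Hash a file's bytes to a hex string, as a stand-in for whatever
    hash function a GIFCT member company actually uses."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large videos don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical example: fingerprint a flagged video before sharing it.
print(fingerprint("flagged_recruitment_video.mp4"))
```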

So if YouTube inputs an ISIS recruitment video into the database, a hash is produced. If an ISIS
recruiter then tries to upload the same video to his Facebook page, Facebook’s systems will flag
it as possible terrorist content. Those systems hash the upload, and when the output matches the
hash YouTube already created, Facebook knows the possible terrorist content is in fact
dangerous and can prevent it from being published.
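Under the same simplifying assumptions, and reusing the fingerprint helper above, the
cross-platform check amounts to a set lookup; the hash value and file name here are invented:

```python
# Hashes contributed by GIFCT member companies (placeholder value).
shared_hashes = {
    "3f2a9c0d...",  # hash YouTube generated for the recruitment video
}

def screen_upload(path: str) -> bool:
    """Return True if an incoming upload matches known terrorist content."""
    return fingerprint(path) in shared_hashes

if screen_upload("incoming_upload.mp4"):
    print("Match: block publication and queue for human review.")
```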

However, while the database has been successful, with more than 40,000 hashes created, it has a
blind spot: if a terrorist slightly alters the content he posts, the input will no longer produce the
same hash.
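
That brittleness is easy to demonstrate with the same SHA-256 stand-in: changing a single byte
of input yields a completely different digest.

```python
import hashlib

original = b"recruitment video bytes, version 1"
altered  = b"recruitment video bytes, version 2"  # one-byte change

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(altered).hexdigest())
# The two digests share essentially nothing, so an exact-match
# lookup misses the slightly altered re-upload.
```
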
Monje also said that GIFCT works with the UN’s counterterrorism committee to host workshops
for other tech companies. “To date, we’ve hosted 68 small companies at workshops through the
Tech Against Terrorism Initiative,” Monje said.

Through GIFCT, companies also use Google’s Jigsaw Redirect Method, which places anti-
terrorism ads alongside content containing keywords popular with potential terrorist recruits.
The tactic matters because terrorist groups have been notably successful in recent years at
recruiting followers through social media, among them the perpetrators of the 2015 San
Bernardino and 2016 Orlando Pulse nightclub attacks.
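A minimal sketch of that keyword-trigger idea, with an invented keyword list and a placeholder
URL, since Jigsaw’s production system is far more sophisticated and not public:

```python
from typing import Optional

# Invented keywords; the real Redirect Method curates them from
# research into the search behavior of potential recruits.
EXTREMIST_KEYWORDS = {"caliphate recruitment", "join isis"}

def counter_ad_for(query: str) -> Optional[str]:
    """Return a counter-narrative ad URL if the query looks risky."""
    text = query.lower()
    if any(kw in text for kw in EXTREMIST_KEYWORDS):
        return "https://example.org/counter-narrative-playlist"  # placeholder
    return None

print(counter_ad_for("how to join ISIS"))
```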

Aside from the shared database, most of the large social media companies are also making
strides on their own platforms to combat terrorist content, through a combination of technology
and human review.

Machine learning technologies are able to detect potential re-uploads of violent content before it
is even published. For example, YouTube heavily invests in video-matching techniques that can
“prevent the dissemination of violent content by catching re-uploads of known bad content
before it is public,” Brown said.
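Video matching of this kind is often built on perceptual hashing, which scores similarity instead
of demanding an exact byte match. The sketch below, a toy average-hash over an 8x8 grayscale
frame with a Hamming-distance threshold, is entirely illustrative; YouTube’s actual technique is
proprietary:

```python
from typing import List

def average_hash(pixels: List[int]) -> int:
    """Toy perceptual hash: 64 grayscale values -> a 64-bit fingerprint.
    Each bit records whether a pixel is brighter than the frame's mean,
    so small visual tweaks flip only a few bits, not the whole hash."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical 8x8 frames: a known video and a re-encoded copy
# with a mild brightness shift.
known = [(i * 7) % 256 for i in range(64)]
reupload = [min(255, p + 3) for p in known]

distance = hamming(average_hash(known), average_hash(reupload))
print("likely re-upload" if distance <= 10 else "no match", distance)
```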

To catch new videos and other content the hash database has not seen before, YouTube and
Facebook have employees working around the clock to review flagged content.

Facebook is now removing 99 percent of ISIS- and al-Qaeda-related terror content before it is
reported; YouTube eliminates 70 percent of violent extremism content within eight hours of its
uploading; and Twitter has suspended more than 1.1 million terrorist accounts since 2015.

However, the companies said that despite their success in taking down terrorist content, they
alone cannot stop terrorists from recruiting individuals to carry out attacks on their behalf.

“Nonetheless, given the evolving nature of the threat, it is necessary for us to continue enhancing
our systems,” Downs said. “We know that no enforcement regime will ever be 100 percent
perfect.”

Brown echoed the sentiment that terrorists may still be able to get their message across, even
with the progress shown by tech giants.

“Regardless of how much effort big social puts into this, the issue [of terrorism] won’t go away,”
Brown said. “Tech companies could probably drastically reduce the content, but they still can’t
get rid of the content.”