How Facebook Protects Users from Terrorism Using Artificial Intelligence

Since early this year, Facebook has been using an artificial intelligence tool that automatically scans user content for signals that might point toward terrorism and other violence, including self-harm and suicide. The AI doesn’t just read text; it also checks photos and video.

The obvious benefit of the AI is that the social media giant can now deal with illegal content or imminent threats immediately rather than waiting for a vigilant user to report it. While the technology is relatively new and still being developed, Mark Zuckerberg said in a statement that it already generates one-third of the content reports reviewed by the Facebook team. A team of humans also works behind the scenes training the artificial intelligence to distinguish genuine threats from news reports about terrorism.

Social platforms serve as a stage for terrorists to recruit new members, and illegal content hosted on these sites has been a persistent thorn in the side of Facebook, Twitter and YouTube. This technology, according to Mark Zuckerberg, is the latest step toward giving users a space where they can feel safe posting whatever their local laws allow.

Why Does Facebook Need Artificial Intelligence to Fight Terrorism?

Facebook has come under intense fire in the past year for hosting broadcasts of high-profile violent acts on its Live Video platform, including the gang rape of a young woman and the live torture of a young man with special needs. In addition to violent broadcasts, the platform has struggled to find the right balance between human-led and algorithmic vetting of news stories, ushering in an era of “fake news.” On the world stage, social platforms face growing pressure to ban the terrorist recruitment accounts they inadvertently give a voice to.

Lawmakers have begun holding prominent social media communities like Facebook responsible for failing to delete illegal content they host, adding fuel to the fire. These controversies prompted Facebook to hire over 3,000 new moderators in an effort to protect users from graphic and illegal content on the site. But even with thousands of human moderators, it’s tough to keep up with the volume of posts generated on Facebook at every moment of the day. For communities of this scale, AI is essential.

When announcing the expanded moderation team, Mark Zuckerberg pointed to advances in artificial intelligence as necessary to protect Facebook users proactively. Artificial intelligence is essential to Facebook and other social platforms because it can shield users from harm immediately. And when users can set their own terms for blocking content they don’t want to see, it becomes an even more powerful tool for establishing a safe space.

How You Can Use Artificial Intelligence on Facebook & Other Social Media Profiles

Whether you manage social media profiles or run a social media platform of your own, you can use AI to protect your community just like Facebook does. Smart Moderation proactively identifies potentially harmful, abusive and inappropriate comments, including terrorist content as well as other risks specific to your community.

Our tool is fully trainable and customizable, so you can set the terms and topics you want to shield your community from. The result is a custom artificial intelligence tailored to your audience. Whether you want to protect against terrorism, violence, suicide/self-harm or other risks, simply train the tool as you would a human moderator. Once it identifies a harmful post, you can respond by replying, hiding, deleting or reporting the comment from the dashboard. Smart Moderation connects to all your favorite social media platforms, so you can manage these comments in one place.
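To make the train-then-review workflow above concrete, here is a minimal Python sketch of the idea. All names here (`CommentModerator`, `train`, `review`) are illustrative assumptions for this example, not the actual Smart Moderation API: you teach the moderator terms to block, just as you would brief a human moderator, and it then returns an action for each incoming comment.

```python
# Hypothetical illustration of a trainable comment moderator.
# Class and method names are invented for this sketch and are
# NOT the real Smart Moderation API.

class CommentModerator:
    """A toy trainable filter that flags comments containing blocked terms."""

    def __init__(self):
        self.blocked_terms = set()

    def train(self, term):
        """Teach the moderator a term to shield the community from."""
        self.blocked_terms.add(term.lower())

    def review(self, comment):
        """Return an action for a comment: 'hide' if flagged, else 'allow'."""
        text = comment.lower()
        if any(term in text for term in self.blocked_terms):
            return "hide"
        return "allow"


moderator = CommentModerator()
moderator.train("violence")

print(moderator.review("Join us to spread violence"))  # hide
print(moderator.review("Great photo, thanks for sharing!"))  # allow
```

A real system would of course use a trained machine-learning model rather than a keyword list, which is how it learns context (for example, the difference between a threat and a news report), but the moderator-style workflow of training, reviewing and acting is the same.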

Experience a safe social media environment for all with cutting-edge AI. Learn more about using Smart Moderation with Facebook here, and try the tool for free today.
