What Facebook’s “Napalm Girl” Moment Teaches About Moderation Management


It feels like Facebook has perpetually been in hot water ever since it positioned itself as a primary news source for most of its users. The latest controversy came from Facebook’s censorship of the iconic “Napalm Girl” photo, which depicts 9-year-old Phan Thi Kim Phuc running nude in a Vietnamese village, having shed her clothes after being hit by napalm. While newsrooms argued decades ago about whether it was appropriate to run the photo, the debate has been reignited here in 2016: was Facebook wrong to censor a post that featured it?

Norwegian newspaper Aftenposten thought so, and many readers agreed—prompting a huge outcry about censorship and Facebook’s responsibilities as a news source. Facebook has recently responded to the controversy by announcing it will allow “more items that people find newsworthy, significant, or important to the public interest,” even if they would otherwise violate community standards, such as the child nudity featured in the “Napalm Girl” photo.

While it’s unfortunate that journalists, readers and users had to demand that Facebook review its policies before it acted, Facebook’s announcement serves as a good example of how to deal with such a situation: it clearly explains the company’s goals and challenges moving forward. This helps us understand why Facebook didn’t allow the image in the first place: “Images of nudity or violence that are acceptable in one part of the world may be offensive—or even illegal—in another.”

As a global community—something that sets the company apart from a national newspaper or magazine—Facebook finds itself juggling audiences who have different subjective views on what’s appropriate or not. Facebook’s solution is to work with various community partners to better reflect its community’s values.

Community management is tough, especially when your community is as big as Facebook’s. The key here is to always be available to listen to opposing views, consider them, and act when it’s clear that a significant portion of your audience agrees. Then, like Facebook, you must clearly state your reasoning, goals and challenges for how you may amend your comment moderation policies.

As a moderation management team, you must anticipate making difficult calls on what is and isn’t allowed to be posted—and you must also understand that you can’t keep everyone happy with such decisions (which is why being transparent about how you arrived at your stance is key). Discuss with your comment moderators the situations and contexts in which you might allow certain images or phrases that would otherwise be a no-no.

Unfortunately, your community management tools might not make this easy. For example, let’s say you want to allow people to express how they’re “f—ing excited” about your upcoming product announcement, but you don’t want people dropping f-bombs willy-nilly all over your comments. Keyword-based comment moderators won’t help here, but a tool like Smart Moderation can review your comments section 24/7 by using machine learning and artificial intelligence to understand the nuances of the discussion. With Smart Moderation, you can make the tough calls and enforce them—without having to lift a finger.
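To see why keyword matching falls short, here is a minimal illustrative sketch of a naive keyword filter. The word list and function names are hypothetical (the article does not describe any tool’s actual implementation); the point is that a keyword filter blocks the enthusiastic comment and the abusive one alike, because it can’t read context.

```python
import re

# Hypothetical blocklist -- a real one would be far larger.
BLOCKED_WORDS = {"f***", "f***ing"}

def keyword_filter(comment: str) -> bool:
    """Return True if the comment would be blocked by naive keyword matching."""
    # Tokenize on word characters (keeping asterisks so masked words match).
    tokens = re.findall(r"[\w*']+", comment.lower())
    return any(token in BLOCKED_WORDS for token in tokens)

# Both comments trip the same keyword, even though only one is hostile:
print(keyword_filter("I'm f***ing excited for the launch!"))   # True -> blocked
print(keyword_filter("This f***ing product is a ripoff"))      # True -> blocked
print(keyword_filter("Can't wait for the announcement!"))      # False -> allowed
```

A context-aware moderator, by contrast, would need to classify the intent of the whole sentence rather than match individual tokens—which is why machine-learning approaches are pitched for this problem.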

So here’s our question to you: have you ever made a tough decision on whether to allow a post/comment or not? How did you choose, and what were the consequences? Let us know!