On August 11, 2020, Facebook released the sixth edition of the Community Standards Enforcement Report. The report, which contains data and statistics relevant to the activity of the Policy Enforcement Team, is released on a quarterly basis. (We previously put together an infographic and abbreviated guide to understanding the Community Standards. To see that, click here.)

While sifting through the report may not be at the top of every Facebook user’s to-do list, the data it contains can help any user better understand how to keep Facebook from disabling their account.

Facebook has an entire team of people trained to moderate the content and accounts shared on the platform in order to protect its online community. These team members are on the lookout for content that fits into ten different categories: Adult Nudity and Sexual Activity, Bullying and Harassment, Child Nudity and Sexual Exploitation, Dangerous Organizations, Fake Accounts, Hate Speech, Regulated Goods, Spam, Suicide and Self-Injury, and Violent and Graphic Content.

Facebook’s Community Operations team consists of 15,000 people stationed across the globe whose sole task is to enforce the community standards. This team is on duty twenty-four hours a day, seven days a week. Impressively, they are able to respond to nearly all violation reports within 24 hours. There are four actions the team will take on reported content: removing the content, covering it with a removable warning, disabling accounts, and even reporting to external agencies if the violation warrants such action.

The Community Operations team even has certain KPIs, or Key Performance Indicators, to gauge the success of their content monitoring and to continue raising the team’s internal standards moving forward. As one of Facebook’s 2.6 billion active users, I’m grateful these measures are in place to keep my online experience safe and user-friendly.

There’s no denying that disabled accounts are on the rise. Here at fig social, we provide live, on-demand Facebook support. By far the most common issue we see come through our chat stream is that a user’s account has been disabled. More often than not, we are able to submit an appeal to reverse the decision to disable the account. However, that’s not always the case.

Having to deal with this issue repeatedly has made us wonder, “What’s the most common reason this is happening?” Fortunately, Facebook provides those statistics in the Enforcement Report.

The Most Common Reason Facebook Disables Accounts

Fake Accounts. We know that’s somewhat anticlimactic; everyone has probably encountered a fake account more than once in their Facebook lifespan. Facebook reported having acted on some 1.5 billion (yes, with a B!) fake accounts in the second quarter of 2020. Thanks to the team’s proactive efforts, that number has been steadily decreasing quarter over quarter.

According to the report, “[Fake Accounts] include accounts created with malicious intent to violate our policies and personal profiles created to represent a business, organization or non-human entity, such as a pet. We prioritize enforcement against fake accounts that seek to cause harm. Many of these accounts are used in spam campaigns and are financially motivated.”

The chart above reports their proactive rate on Fake Accounts. As you can see, they’ve gotten quite good at detecting these accounts and are able to remove nearly all of them (upwards of 99%) before a user detects and reports the fake account. Kudos, Facebook.

Facebook has also reported removing all accounts tied to the same IP address, email address, or phone number as the fake account (unless the fake account was originally an authentic account that was hacked). This is an additional step they’ve taken to continue reducing the number of fake accounts created over time.

A Close Second: Spam Content

So, as long as you don’t create a fake, malicious account, you should be in the clear, right? Unfortunately, not so much. Close behind the 1.5 billion fake accounts acted on comes spam content. In the second quarter of this year, Facebook acted on 1.4 billion pieces of spam content.

Facebook released the following statement regarding what they consider to be spam content: “Spam is a broad term used to describe content that is designed to be shared in deceptive and annoying ways or attempts to mislead users to drive engagement. Spam spreads in a number of ways: It can be automated (published by bots or scripts) or coordinated (when an actor uses multiple accounts to spread deceptive content). Spammers aim to build audiences to inflate their content’s distribution and reach, typically for financial gain.”

So although it may be close, spam is not “fake news.” Spam clearly has malicious intent behind it, while misinformation is often shared innocently. But we wouldn’t be surprised to see a new category added in the future to cover content containing blatant misinformation. If you use Facebook with any frequency, you’ve seen content flagged as potentially misleading. Some users have even claimed to have had their accounts disabled for repeatedly sharing such content. So, our advice would be to share with caution and to do a little independent research before posting potentially controversial information.


About fig social

fig social is the world’s first provider of Live, 24/7 On-Demand Facebook Support. Whether you need help to Recover a Hacked Account, Recover a Disabled Account, Reset your Password, or resolve another customer service issue, our team is ready to help.

You can start by browsing some of our recent articles. Check out our library of help videos or our advice on how to avoid needing to contact Facebook Support.  To get started with a FREE TRIAL of fig social right away, CLICK HERE to visit our support page!

