Here at fig social, we frequently handle support requests that involve a disabled account. Facebook reserves the right to remove content, accounts, and/or pages that violate their Community Standards, as determined by their Product Policy team.
When a user’s account is disabled due to policy violation(s), the user will be notified at the point of login. If the account is only temporarily disabled, the user will have the opportunity to submit an appeal asking Facebook to re-enable the account. However, if the account is permanently disabled, the user will not be able to access that account again.
We strongly suggest familiarizing yourself with all of Facebook’s content policies in order to avoid dealing with a disabled account. The rest of this article gives an overview of these policies, along with links to the comprehensive Community Standards guide.
Would you like to learn more about these policies? The following guidelines are taken directly from Facebook’s full Community Standards directory. This is an incomplete list; please see the original guidelines for the complete Community Standards.
Violence and Criminal Behavior
Violence and Incitement
We aim to prevent potential offline harm that may be related to content on Facebook. While we understand that people commonly express disdain or disagreement by threatening or calling for violence in non-serious ways, we remove language that incites or facilitates serious violence. We remove content, disable accounts, and work with law enforcement when we believe there is a genuine risk of physical harm or direct threats to public safety. We also try to consider the language and context in order to distinguish casual statements from content that constitutes a credible threat to public or personal safety. In determining whether a threat is credible, we may also consider additional information like a person’s public visibility and the risks to their physical safety.
In some cases, we see aspirational or conditional threats directed at terrorists and other violent actors (e.g., “Terrorists deserve to be killed”), and we deem those non-credible absent specific evidence to the contrary.
Dangerous Individuals or Organizations
In an effort to prevent and disrupt real-world harm, we do not allow any organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on Facebook. This includes organizations or individuals involved in the following:
Organized Violence or Criminal Activity
We also remove content that expresses support or praise for groups, leaders, or individuals involved in these activities. Learn more about our work to fight terrorism online here.
Coordinating Harm and Publicizing Crime
In an effort to prevent and disrupt offline harm and copycat behavior, we prohibit people from facilitating, organizing, promoting, or admitting to certain criminal or harmful activities targeted at people, businesses, property or animals. We allow people to debate and advocate for the legality of criminal and harmful activities, as well as draw attention to harmful or criminal activity that they may witness or experience as long as they do not advocate for or coordinate harm.
To encourage safety and compliance with common legal restrictions, we prohibit attempts by individuals, manufacturers, and retailers to purchase, sell, or trade non-medical drugs, pharmaceutical drugs, and marijuana. We also prohibit the purchase, sale, gifting, exchange, and transfer of firearms, including firearm parts or ammunition, between private individuals on Facebook. Some of these items are not regulated everywhere; however, because of the borderless nature of our community, we try to enforce our policies as consistently as possible. Firearm stores and online retailers may promote items available for sale off of our services as long as those retailers comply with all applicable laws and regulations. We allow discussions about sales of firearms and firearm parts in stores or by online retailers and advocating for changes to firearm regulation. Regulated goods that are not prohibited by our Community Standards may be subject to our more stringent Commerce Policies.
Fraud and Deception
In an effort to prevent and disrupt harmful or fraudulent activity, we remove content aimed at deliberately deceiving people to gain an unfair advantage or deprive another of money, property, or legal right. However, we allow people to raise awareness and educate others as well as condemn these activities using our platform.
Suicide and Self-Injury
In an effort to promote a safe environment on Facebook, we remove content that encourages suicide or self-injury, including certain graphic imagery and real-time depictions that experts tell us might lead others to engage in similar behavior. Self-injury is defined as the intentional and direct injuring of the body, including self-mutilation and eating disorders. We want Facebook to be a space where people can share their experiences, raise awareness about these issues, and seek support from one another, which is why we allow people to discuss suicide and self-injury.
We work with organizations around the world to provide assistance to people in distress. We also talk to experts in suicide and self-injury to help inform our policies and enforcement. For example, we have been advised by experts that we should not remove live videos of self-injury while there is an opportunity for loved ones and authorities to provide help or resources.
In contrast, we remove any content that identifies and negatively targets victims or survivors of self-injury or suicide, whether seriously, humorously, or rhetorically.
Bullying and Harassment
Bullying and harassment happen in many places and come in many different forms, from making threats and releasing personally identifiable information to sending threatening messages and making unwanted malicious contact. We do not tolerate this kind of behavior because it prevents people from feeling safe and respected on Facebook.
We distinguish between public figures and private individuals because we want to allow discussion, which often includes critical commentary of people who are featured in the news or who have a large public audience. For public figures, we remove attacks that are severe as well as certain attacks where the public figure is directly tagged in the post or comment. For private individuals, our protection goes further: we remove content that’s meant to degrade or shame, including, for example, claims about someone’s sexual activity. We recognize that bullying and harassment can have more of an emotional impact on minors, which is why our policies provide heightened protection for users between the ages of 13 and 18.
Context and intent matter, and we allow people to share and re-share posts if it is clear that something was shared in order to condemn or draw attention to bullying and harassment. In certain instances, we require self-reporting because it helps us understand that the person targeted feels bullied or harassed. In addition to reporting such behavior and content, we encourage people to use tools available on Facebook to help protect against it.
We also have a Bullying Prevention Hub, which is a resource for teens, parents, and educators seeking support for issues related to bullying and other conflicts. It offers step-by-step guidance, including information on how to start important conversations about bullying. Learn more about what we’re doing to protect people from bullying and harassment here.
Privacy Violations and Image Privacy Rights
Privacy and the protection of personal information are fundamentally important values for Facebook. We work hard to keep your account secure and safeguard your personal information in order to protect you from potential physical or financial harm. You should not post personal or confidential information about others without first getting their consent. We also provide people ways to report imagery that they believe to be in violation of their privacy rights.
Hate Speech
We do not allow hate speech on Facebook because it creates an environment of intimidation and exclusion and in some cases may promote real-world violence.
We define hate speech as a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. We protect against attacks on the basis of age when age is paired with another protected characteristic, and also provide certain protections for immigration status. We define attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation. We separate attacks into three tiers of severity, as described below.
Sometimes people share content containing someone else’s hate speech for the purpose of raising awareness or educating others. In some cases, words or terms that might otherwise violate our standards are used self-referentially or in an empowering way. People sometimes express contempt in the context of a romantic break-up. Other times, they use gender-exclusive language to control membership in a health or positive support group, such as a breastfeeding group for women only. In all of these cases, we allow the content but expect people to clearly indicate their intent, which helps us better understand why they shared it. Where the intention is unclear, we may remove the content.
In addition, we believe that people are more responsible when they share this kind of commentary using their authentic identity.
Click here to read our Hard Questions Blog and learn more about our approach to hate speech.
Violent and Graphic Content
We remove content that glorifies violence or celebrates the suffering or humiliation of others because it may create an environment that discourages participation. We allow graphic content (with some limitations) to help people raise awareness about issues. We know that people value the ability to discuss important issues like human rights abuses or acts of terrorism. We also know that people have different sensitivities with regard to graphic and violent content. For that reason, we add a warning label to especially graphic or violent content so that it is not available to people under the age of eighteen and so that people are aware of the graphic or violent nature before they click to see it.
Cruel and Insensitive
We believe that people share and connect more freely when they do not feel targeted based on their vulnerabilities. As such, we have higher expectations for content that we call cruel and insensitive, which we define as content that targets victims of serious physical or emotional harm.
We remove explicit attempts to mock victims, and we mark as cruel implicit attempts, many of which take the form of memes and GIFs.
Integrity and Authenticity
Authenticity is the cornerstone of our community. We believe that people are more accountable for their statements and actions when they use their authentic identities. That’s why we require people to connect on Facebook using the name they go by in everyday life. Our authenticity policies are intended to create a safe environment where people can trust and hold one another accountable.
We work hard to limit the spread of spam because we do not want to allow content that is designed to deceive, or that attempts to mislead users to increase viewership. This content creates a negative user experience and detracts from people’s ability to engage authentically in online communities. We also aim to prevent people from abusing our platform, products, or features to artificially increase viewership or distribute content en masse for commercial gain.
We recognize that the safety of our users extends to the security of their personal information. Attempts to gather sensitive personal information by deceptive or invasive methods are harmful to the authentic, open, and safe atmosphere that we want to foster. Therefore, we do not allow attempts to gather sensitive user information through the abuse of our platform and products.
In line with our commitment to authenticity, we don’t allow people to misrepresent themselves on Facebook, use fake accounts, artificially boost the popularity of content, or engage in behaviors designed to enable other violations under our Community Standards. This policy is intended to create a space where people can trust the people and communities they interact with.
Reducing the spread of false news on Facebook is a responsibility that we take seriously. We also recognize that this is a challenging and sensitive issue. We want to help people stay informed without stifling productive public discourse. There is also a fine line between false news and satire or opinion. For these reasons, we don’t remove false news from Facebook but instead significantly reduce its distribution by showing it lower in the News Feed. Learn more about our work to reduce the spread of false news here.
Respecting Intellectual Property
Facebook takes intellectual property rights seriously and believes they are important to promoting expression, creativity, and innovation in our community. You own all of the content and information you post on Facebook, and you control how it is shared through your privacy and application settings. However, before sharing content on Facebook, please be sure you have the right to do so. We ask that you respect other people’s copyrights, trademarks, and other legal rights. We are committed to helping people and organizations promote and protect their intellectual property rights. Facebook’s Terms of Service do not allow people to post content that violates someone else’s intellectual property rights, including copyright and trademark. We publish information about the intellectual property reports we receive in our bi-annual Transparency Report, which can be accessed at https://transparency.facebook.com/
Gathering input from our stakeholders is an important part of how we develop Facebook’s Community Standards. We want our policies to be based on feedback from community representatives and a broad spectrum of the people who use our service, and we want to learn from and incorporate the advice of experts.
Engagement makes our Community Standards stronger and more inclusive. It brings our stakeholders more fully into the policy development process, introduces us to new perspectives, allows us to share our thinking on policy options, and roots our policies in sources of knowledge and experience that go beyond Facebook.
Product Policy is the team that writes the rules for what people are allowed to share on Facebook, including the Community Standards. To open up the policy development process and gather outside views on our policies, we created the Stakeholder Engagement team, a sub-team that’s part of Product Policy. Stakeholder Engagement’s main goal is to ensure that our policy development process is informed by the views of outside experts and the people who use Facebook. We have developed specific practices and a structure for engagement in the context of the Community Standards, and we’re expanding our work to cover additional policies, particularly ads policies and major News Feed ranking changes.