Facebook’s ethical failures are not accidental; they are part of the business model

  • Opinion Paper
  • Published: 05 June 2021
  • Volume 1, pages 395–403 (2021)

  • David Lauer (ORCID: orcid.org/0000-0002-0003-4521)

Facebook’s stated mission is “to give people the power to build community and bring the world closer together.” But a deeper look at their business model suggests that it is far more profitable to drive us apart. By creating “filter bubbles”—social media algorithms designed to increase engagement and, consequently, create echo chambers where the most inflammatory content achieves the greatest visibility—Facebook profits from the proliferation of extremism, bullying, hate speech, disinformation, conspiracy theory, and rhetorical violence. Facebook’s problem is not a technology problem. It is a business model problem. This is why solutions based in technology have failed to stem the tide of problematic content. If Facebook employed a business model focused on efficiently providing accurate information and diverse views, rather than addicting users to highly engaging content within an echo chamber, the algorithmic outcomes would be very different.

Facebook’s failure to check political extremism, [ 15 ] willful disinformation, [ 39 ] and conspiracy theory [ 43 ] has been well-publicized, especially as these unseemly elements have penetrated mainstream politics and manifested as deadly, real-world violence. So it naturally raised more than a few eyebrows when Facebook’s Chief AI Scientist Yann LeCun tweeted his concern [ 32 ] over the role of right-wing personalities in downplaying the severity of the COVID-19 pandemic. Critics were quick to point out [ 29 ] that Facebook has profited handsomely from exactly this brand of disinformation. Consistent with Facebook’s recent history on such matters, LeCun was both defiant and unconvincing.

In response to a frenzy of hostile tweets, LeCun made the following four claims:

Facebook does not cause polarization or so-called “filter bubbles,” and “most serious studies do not show this.”

Critics [ 30 ] who argue that Facebook is profiting from the spread of misinformation are “factually wrong.” Footnote 1

Facebook uses AI-based technology to filter out [ 33 ]:

Hate speech;

Calls to violence;

Bullying; and

Disinformation that endangers public safety or the integrity of the democratic process.

Facebook is not an “arbiter of political truth,” and having Facebook “arbitrate political truth would raise serious questions about anyone’s idea of ethics and liberal democracy.”

Absent from the claims above is acknowledgement that the company’s profitability depends substantially upon the polarization LeCun insists does not exist.

Facebook has had a profound impact on our access to ideas, information, and one another. It has unprecedented global reach, and in many markets serves as a de facto monopolist. The influence it has over individual and global affairs is unique in human history. Mr. LeCun has been at Facebook since December 2013, first as Director of AI Research and then as Chief AI Scientist. He has played a leading role in shaping Facebook’s technology and approach. Mr. LeCun’s problematic claims demand closer examination. What follows, therefore, is a response to these claims which will clearly demonstrate that Facebook:

Elevates disinformation campaigns and conspiracy theories from the extremist fringes into the mainstream, fostering, among other effects, the resurgent anti-vaccination movement, broad-based questioning of basic public health measures in response to COVID-19, and the proliferation of the Big Lie of 2020—that the presidential election was stolen through voter fraud [ 16 ];

Empowers bullies of every size, from cyber-bullying in schools, to dictators who use the platform to spread disinformation, censor their critics, perpetuate violence, and instigate genocide;

Defrauds both advertisers and newsrooms, systematically and globally, with falsified video engagement and user activity statistics;

Reflects an apparent political agenda espoused by a small core of corporate leaders, who actively impede or overrule the adoption of good governance;

Brandishes its monopolistic power to preserve a social media landscape absent meaningful regulatory oversight, privacy protections, safety measures, or corporate citizenship; and

Disrupts intellectual and civil discourse, at scale and by design.

1 I deleted my Facebook account

I deleted my account years ago for the reasons noted above, and a number of far more personal reasons. So when LeCun reached out to me, demanding evidence for my claims regarding Facebook’s improprieties, it was via Twitter. What proof did I have that Facebook creates filter bubbles that drive polarization?

In anticipation of my response, he offered the claims highlighted above. As evidence, he directed my attention to a single research paper [ 23 ] that, on closer inspection, does not appear to support his case at all.

The entire exchange also suggests that senior leadership at Facebook still suffers from a massive blindspot regarding the harm that its platform causes—that they continue to “move fast and break things” without regard for the global impact of their behavior.

LeCun’s comments confirm the concerns that many of us have held for a long time: Facebook has declined to resolve its systemic problems, choosing instead to paper over these deep philosophical flaws with advanced, though insufficient, technological solutions. Even when Facebook takes occasion to announce its triumphs in the ethical use of AI, such as its excellent work [ 8 ] detecting suicidal tendencies, its advancements pale in comparison to the inherent problems written into its algorithms.

This is because, fundamentally, their problem is not a failure of technology, nor a shortcoming in their AI filters. Facebook’s problem is its business model. Facebook makes superficial technology changes, but at its core, profits chiefly from engagement and virality. Study after study has found that “lies spread faster than the truth,” [ 47 ] “conspiracy theories spread through a more decentralized network,” [ 41 ] and that “politically extreme sources tend to generate more interactions from users.” Footnote 2 Facebook knows that the most efficient way to maximize profitability is to build algorithms that create filter bubbles and spread viral misinformation.

This is not a fringe belief or controversial opinion. This is a reality acknowledged even by those who have lived inside of Facebook’s leadership structure. As Facebook’s former director of monetization, Tim Kendall, explained in his Congressional testimony, “social media services that I, and others have built, have torn people apart with alarming speed and intensity. At the very least we have eroded our collective understanding—at worst, I fear we are pushing ourselves to the brink of a civil war.” [ 38 ]

2 Facebook’s black box

To effectively study behavior on Facebook, we must be able to study Facebook’s algorithms and AI models. Therein lies the first problem. The data and transparency to do so are simply not there. Facebook does not practice transparency—they do not make comprehensive data available on their recommendation and filtering algorithms, or their other implementations of AI. One organization attempting to study the spread of misinformation, NYU’s Cybersecurity for Democracy, explains, “[o]ur findings are limited by the lack of data provided by Facebook…. Without greater transparency and access to data, such research questions are out of reach.” Footnote 3

Facebook’s algorithms and AI models are proprietary, and they are intentionally hidden from us. While this is normal for many companies, no other company has 2.85 billion monthly active users. Any platform that touches so many lives must be studied so that we can truly understand its impact. Yet Facebook does not make the kind of data available that is needed for robust study of the platform.

Facebook would likely counter this, and point to their partnership with Harvard’s Institute for Quantitative Social Science (Social Science One) as evidence that they are making data available to researchers [ 19 ]. While this partnership is one step in the right direction, there are several problems with this model:

The data are extremely limited. At the moment they consist solely of web page addresses shared on Facebook over an 18-month period from 2017 to 2019.

Researchers have to apply for access to the data through Social Science One, which acts as a gatekeeper of the data.

If approved, researchers have to execute an agreement directly with Facebook.

This is not an open, scientific process. It is, rather, a process that empowers administrators to cherry-pick research projects that favor their perspective. If Facebook were serious about facilitating academic research, they would provide far greater access to, availability of, and insight into the data. There are legitimate privacy concerns around releasing data, but there are far better ways to address those concerns while fostering open, vibrant research.

3 Does Facebook cause polarization?

LeCun cited a single study as evidence that Facebook does not cause polarization. But do the findings of this study support Mr. LeCun’s claims?

The study concludes that “polarization has increased the most among the demographic groups least likely to use the Internet and social media.” The study does not, however, actually measure this type of polarization directly. Its primary data-gathering instrument—a survey on polarization—did not ask whether respondents were on the Internet or if they used social media. Instead, the study estimates whether an individual respondent is likely to be on the Internet based on an index of demographic factors which suggest “predicted” Internet use. As explained in the study, “the main predictor [they] focus on is age” [ 23 ]. Age is estimated to be negatively correlated with social media usage. Therefore, since older people are also shown to be more politically polarized, LeCun takes this as evidence that social media use does not cause polarization.

This assumption of causality is flawed. The study does not point to a causal relationship between these demographic factors and social media use. It simply says that these demographic factors drive polarization. Whether these factors have a correlational or causative relationship with the Internet and social media use is complete conjecture. The author of the study himself caveats any such conclusions, noting that “[t]hese findings do not rule out any effect of the internet or social media on political polarization.” [ 5 ].
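
To see why this matters, consider a toy simulation (my own illustration, not drawn from [ 23 ] or [ 5 ]). Suppose age increases polarization directly while also predicting lower internet use; then an age-based “predicted use” index will show polarization rising most among the least-wired groups even when actual social media use genuinely increases polarization. The variable names and coefficients below are assumptions chosen only to make the confounding visible.

```python
# Toy simulation: an age-based proxy for internet use can mask a real
# polarizing effect of social media. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
age = rng.uniform(18, 85, n)

# Assumed world: actual social media use declines with age...
uses_social = rng.random(n) < np.clip(1.1 - age / 100, 0.0, 1.0)
# ...and polarization rises with age AND (genuinely) with social media use.
polarization = 0.02 * age + 0.5 * uses_social + rng.normal(0.0, 0.5, n)

# The proxy approach: group people by age-predicted use, not actual use.
predicted_use = 1.1 - age / 100
low_pred = predicted_use < 0.5  # mostly older respondents

print("polarization, low predicted use: ", polarization[low_pred].mean())
print("polarization, high predicted use:", polarization[~low_pred].mean())
# The low-predicted-use group comes out MORE polarized, mirroring the
# study's headline finding, even though use has a true +0.5 effect here.
# Holding age roughly fixed recovers that effect:
band = (age >= 40) & (age <= 45)
print("use effect within ages 40-45:",
      polarization[band & uses_social].mean()
      - polarization[band & ~uses_social].mean())
```

The sketch makes no claim about real-world magnitudes; it only shows that this proxy design cannot separate the two explanations, which is exactly the caveat the study’s own author states.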

Not only is LeCun’s assumption flawed, it is directly refuted by a recent Pew Research study [ 3 ] that found fully half (50%) of US adults age 65+ are on Facebook, the highest share of any social network. If anything, older age is more strongly associated with Facebook use than with use of any other social network.

Moreover, in 2020, the MIS Quarterly journal published a study by Steven L. Johnson, et al. that explored this problem and found that the “more time someone spends on Facebook, the more polarized their online news consumption becomes. This evidence suggests Facebook indeed serves as an echo chamber especially for its conservative users” [ 24 ].

Allcott et al. also explore this question in “The Welfare Effects of Social Media” (November 2019), beginning with a review of other studies confirming a relationship between social media use, well-being, and political polarization [ 1 ]:

More recent discussion has focused on an array of possible negative impacts. At the individual level, many have pointed to negative correlations between intensive social media use and both subjective well-being and mental health. Adverse outcomes such as suicide and depression appear to have risen sharply over the same period that the use of smartphones and social media has expanded. Alter (2018) and Newport (2019), along with other academics and prominent Silicon Valley executives in the “time well-spent” movement, argue that digital media devices and social media apps are harmful and addictive. At the broader social level, concern has focused particularly on a range of negative political externalities. Social media may create ideological “echo chambers” among like-minded friend groups, thereby increasing political polarization (Sunstein 2001, 2017; Settle 2018). Furthermore, social media are the primary channel through which misinformation spreads online (Allcott and Gentzkow 2017), and there is concern that coordinated disinformation campaigns can affect elections in the US and abroad.

Allcott’s 2019 study uses a randomized experiment in the run-up to the November 2018 midterm elections to examine how Facebook affects several individual and social welfare measures. They found that:

deactivating Facebook for the four weeks before the 2018 US midterm election (1) reduced online activity, while increasing offline activities such as watching TV alone and socializing with family and friends; (2) reduced both factual news knowledge and political polarization; (3) increased subjective well-being; and (4) caused a large persistent reduction in post-experiment Facebook use.

In other words, not using Facebook for a month made you happier and resulted in less future usage. In fact, they say that “deactivation significantly reduced polarization of views on policy issues and a measure of exposure to polarizing news.” None of these findings would come as a surprise to anybody who works at Facebook.

“A former Facebook AI researcher” confirmed that they ran “‘study after study’ confirming the same basic idea: models that maximize engagement increase polarization” [ 21 ]. Not only did Facebook know this, but they continued to design and build their recommendation algorithms to maximize user engagement, knowing that this meant optimizing for extremism and polarization. Footnote 4

Facebook understood what they were building according to Tim Kendall’s Congressional testimony in 2020. He explained that “we sought to mine as much attention as humanly possible and turn [sic] into historically unprecedented profits” [ 38 ]. He went on to explain that their inspiration was “Big Tobacco’s playbook … to make our offering addictive at the outset.” They quickly figured out that “extreme, incendiary content” directly translated into “unprecedented engagement—and profits.” He was the director of monetization for Facebook—few would have been better positioned to understand Facebook’s motivations, findings and strategy.

4 Engagement, filter bubbles, and executive compensation

The term “filter bubble” was coined by Eli Pariser, who wrote a book with that title exploring how social media algorithms are designed to increase engagement and create echo chambers where inflammatory posts are more likely to go viral. Filter bubbles are not just an algorithmic outcome; often we filter our own lives, surrounding ourselves with friends (online and offline) who are more likely to agree with our philosophical, religious, and political views.

Social media platforms capitalize on our natural tendency toward filtered engagement. These platforms build algorithms, and structure executive compensation, [ 27 ] to maximize such engagement. By their very design, social media curation and recommendation algorithms are engineered to maximize engagement, and thus, are predisposed to create filter bubbles.
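
To make the mechanism concrete, here is a deliberately minimal sketch, hypothetical code rather than Facebook’s actual system. The Post fields, the toxicity-style inflammatory score, and the 0.3/0.7 weighting are all assumptions; the point is only that a feed ranked purely by predicted engagement surfaces inflammatory posts first whenever outrage predicts engagement better than quality does, with no editorial intent required.

```python
# Minimal sketch of an engagement-maximizing feed ranker (hypothetical).
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    inflammatory: float  # 0..1, e.g. from a toxicity model (assumed)
    base_quality: float  # 0..1, accuracy/usefulness of the content

def predicted_engagement(post: Post) -> float:
    # Assumption baked into the sketch: outrage drives clicks more than quality.
    return 0.3 * post.base_quality + 0.7 * post.inflammatory

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort purely by predicted engagement -- the only objective this feed has.
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Measured fact-check of a viral claim", inflammatory=0.1, base_quality=0.9),
    Post("Outrage bait about the other side", inflammatory=0.9, base_quality=0.2),
])
print([p.text for p in feed])  # the outrage post ranks first (0.69 vs 0.34)
```

Nothing in this objective rewards accuracy or diverse views; change the objective and the ranking changes, which is precisely the argument that the problem lies in the business model rather than the technology.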

Facebook has long attracted criticism for its pursuit of growth at all costs. A recent profile of Facebook’s AI efforts details the difficulty of getting “buy-in or financial support when the work did not directly improve Facebook’s growth” [ 21 ]. Andrew Bosworth, a Vice President at Facebook, said in a 2016 memo that nothing matters but growth, and that “all the work we do in growth is justified” regardless of whether “it costs someone a life by exposing someone to bullies” or if “somebody dies in a terrorist attack coordinated on our tools” [ 31 ].

Bosworth and Zuckerberg went on to claim [ 36 ] that the shocking memo was merely an attempt at being provocative. Certainly, it succeeded in this aim. But what else could they really say? It’s not a great look. And it looks even worse when you consider that Facebook’s top brass really do get paid more when these things happen. The above-referenced report is based on interviews with multiple former product managers at Facebook, and shows that their executive compensation system is largely based around their most important metric: user engagement. This creates a perverse incentive. And clearly, by their own admission, Facebook will not allow a few casualties to get in the way of their executive compensation.

5 Is it incidental or intentional?

Yaël Eisenstat, a former CIA analyst who specialized in counter-extremism, went on to work at Facebook out of concern that the social media platform was increasing radicalization and political polarization. She explained in a TED talk [ 13 ] that the current information ecosystem is manipulating its users, and that “social media companies like Facebook profit off of segmenting us and feeding us personalized content that both validates and exploits our biases. Their bottom line depends on provoking a strong emotion to keep us engaged, often incentivizing the most inflammatory and polarizing voices.” This emotional response results in more than just engagement—it results in addiction.

Eisenstat joined Facebook in 2018 and began to explore the issues which were most divisive on the social media platform. She began asking questions internally about what was causing this divisiveness. She found that “the largest social media companies are antithetical to the concept of reasoned discourse … Lies are more engaging online than truth, and salaciousness beats out wonky, fact-based reasoning in a world optimized for frictionless virality. As long as algorithms’ goals are to keep us engaged, they will continue to feed us the poison that plays to our worst instincts and human weaknesses.”

She equated Facebook’s algorithmic manipulation to the tactics that terrorist recruiters use on vulnerable youth. She offered Facebook a plan to combat political disinformation and voter suppression. She has claimed that the plan was rejected, and Eisenstat left after just six months.

As noted earlier, LeCun flatly denies [ 34 ] that Facebook creates filter bubbles that drive polarization. In sharp contrast, Eisenstat explains that such an outcome is a feature of their algorithm, not a bug. The Wall St. Journal reported that in 2018, senior executives at Facebook were informed of the following conclusions during an internal presentation [ 22 ]:

“Our algorithms exploit the human brain’s attraction to divisiveness… [and] if left unchecked,” Facebook would feed users “more and more divisive content in an effort to gain user attention and increase time on the platform.”

The platform aggravates polarization and tribal behavior.

Some proposed algorithmic changes would “disproportionately affect[] conservative users and publishers.”

Looking at data for Germany, an internal report found “64% of all extremist group joins are due to our recommendation tools … Our recommendation systems grow the problem.”

These are Facebook’s own words, and arguably they read less like a warning than like a description of the company’s core commercial strategy. They are reinforced by Tim Kendall’s testimony as discussed above.

“Most notably,” reported the WSJ, “the project forced Facebook to consider how it prioritized ‘user engagement’—a metric involving time spent, likes, shares and comments that for years had been the lodestar of its system.” As noted in the section above, executive compensation was tied to “user engagement,” which meant product developers at Facebook were incentivized to design systems in this very way. Footnote 5

Mark Zuckerberg and Joel Kaplan reportedly [ 22 ] dismissed the conclusions from the 2018 presentation, calling efforts to bring greater civility to conversations on the social media platform “paternalistic.” Zuckerberg went on to say that he would “stand up against those who say that new types of communities forming on social media are dividing us.” Kaplan reportedly “killed efforts to build a classification system for hyperpolarized content.” Failing to address this has resulted in algorithms that, as Tim Kendall explained, “have brought out the worst in us. They have literally rewired our brains so that we are detached from reality and immersed in tribalism” [ 38 ].

Facebook would have us believe that it has made great strides in confronting these problems over just the last two years, as Mr. LeCun has claimed. But at present, the burden of proof is on Facebook to produce the full, raw data so that independent researchers can make a fair assessment of his claims.

6 The AI filter

According to LeCun’s tweets cited at the beginning of this paper, Facebook’s AI-powered filter cleanses the platform of:

Hate speech;

Calls to violence;

Bullying; and

Disinformation that endangers public safety or the integrity of the democratic process.

These are his words, so we will refer to them even while the actual definitions of hate speech, calls to violence, and other terms are potentially controversial and open to debate.

These claims are provably false. While “AI” (along with some very large manual curation operations in developing countries) may effectively filter some of this content, at Facebook’s scale, some is not enough.

Let’s examine the claims a little closer.

6.1 Does Facebook actually filter out hate speech?

An investigation by the UK-based counter-extremism organization ISD (Institute for Strategic Dialogue) found that Facebook’s algorithm “actively promotes” Holocaust denial content [ 20 ]. The same organization, in another report, documents how Facebook’s “delays or mistakes in policy enforcement continue to enable hateful and harmful content to spread through paid targeted ads” [ 17 ]. They go on to explain that “[e]ven when action is taken on violating ad content, such a response is often reactive and delayed, after hundreds, thousands, or potentially even millions of users have already been served those ads on their feeds.” Footnote 6

Zuckerberg admitted in April 2018 that hate speech in Myanmar was a problem, and pledged to act. Four months later, Reuters found more than “1000 examples of posts, comments, images and videos attacking the Rohingya or other Myanmar Muslims that were on Facebook” [ 45 ]. As recently as June 2020 there were reports [ 7 ] of troll farms using Facebook to intimidate opponents of Rodrigo Duterte in the Philippines with death threats and hateful comments.

6.2 Does Facebook actually filter out calls to violence?

The Sri Lankan government had to block access to Facebook “amid a wave of violence against Muslims … after Facebook ignored years of calls from both the government and civil society groups to control ethnonationalist accounts that spread hate speech and incited violence.” [ 42 ] A report from the Center for Policy Alternatives in September 2014 detailed evidence of 20 hate groups in Sri Lanka, and informed Facebook. In March of 2018, Buzzfeed reported that “16 out of the 20 groups were still on Facebook”. Footnote 7

When former President Trump tweeted, in response to Black Lives Matter protests, that when “the looting starts, the shooting starts,” the message was liked and shared hundreds of thousands of times across Facebook and Instagram, even as other social networks such as Twitter flagged the message for its explicit incitement of violence [ 48 ] and prevented it from being retweeted.

Facebook played a pivotal role in the planning of the January 6th insurrection in the US, providing an unchecked platform for proliferation of the Big Lie, radicalization around this lie, and coordinated organization around explicitly-stated plans to engage in violent confrontation at the nation’s capital on the outgoing president’s behalf. Facebook’s role in the deadly violence was far greater and more widespread than the role of Parler and the other fringe right-wing platforms that attracted so much attention in the aftermath of the attack [ 11 ].

6.3 Does Facebook actually filter out cyberbullying?

According to Enough Is Enough, a non-partisan, non-profit organization whose mission is “making the Internet safer for children and families,” the answer is a resounding no. Their most recent cyberbullying statistics [ 10 ] show that 47% of young people have been bullied online, and that the two most prevalent platforms are Instagram at 42% and Facebook at 37%.

In fact, Facebook is failing to protect children on a global scale. According to a UNICEF poll of children in 30 countries, one in every three young people says that they have been victimized by cyberbullying. And one in five says the harassment and threat of actual violence caused them to skip school. According to the survey, conducted in concert with the UN Special Representative of the Secretary-General (SRSG) on Violence against Children, “almost three-quarters of young people also said social networks, including Facebook, Instagram, Snapchat and Twitter, are the most common place for online bullying” [ 49 ].

6.4 Does Facebook actually filter out “disinformation that endangers public safety or the integrity of the democratic process?”

To list the evidence contradicting this point would be exhausting. Below are just a few examples:

The Computational Propaganda Research Project found in their 2019 Global Inventory of Organized Social Media Manipulation that 70 countries had disinformation campaigns organized on social media in 2019, with Facebook as the top platform [ 6 ].

A Facebook whistleblower produced a 6,600-word memo detailing case after case of Facebook “abdicating responsibility for malign activities on its platform that could affect the political fate of nations outside the United States or Western Europe.” [ 44 ]

Facebook is ground-zero for anti-vaccination and pandemic misinformation, with the 26-min conspiracy theory film “Plandemic” going viral on Facebook in April 2020 and garnering tens of millions of views. Facebook’s attempt to purge itself of anti-vaccination disinformation was easily thwarted when the groups guilty of proliferating this content removed the word “vaccine” from their names. In addition to undermining public health interests by spreading provably false content, these anti-vaccination groups have obscured meaningful discourse about the actual health concerns and risks that may or may not be connected to vaccinations. A paper from May 2020 attempts to map out the “multi-sided landscape of unprecedented intricacy that involves nearly 100 million individuals” [ 25 ] that are entangled with anti-vaccination clusters. That report predicts that such anti-vaccination views “will dominate in a decade” given their explosive growth and intertwining with undecided people.

According to the Knight Foundation and Gallup, [ 26 ] 75% of Americans believe they “were exposed to misinformation about the election” on Facebook during the 2020 US presidential election. This is one of those rare issues on which Republicans (76%), Democrats (75%) and Independents (75%) agree: Facebook was the primary source for election misinformation.

If those AI filters are in fact working, they are not working very well.

All of this said, Facebook’s reliance on “AI filters” misses a critical point, which is that you cannot have AI ethics without ethics [ 30 ]. These problems cannot be solved with AI. These problems cannot be solved with checklists, incremental advances, marginal changes, or even state-of-the-art deep learning networks. These problems are caused by the company’s entire business model and mission. Bosworth’s provocative quotes above, along with Tim Kendall’s direct testimony, demonstrate as much.

These are systemic issues, not technological ones. Yaël Eisenstat put it best in her TED talk: “as long as the company continues to merely tinker around the margins of content policy and moderation, as opposed to considering how the entire machine is designed and monetized, they will never truly address how the platform is contributing to hatred, division and radicalization.”

7 Facebook does not want to be the arbiter of truth

We should probably take comfort in Facebook’s claim that it does not wish to be the “arbiter of political truth.” After all, Facebook has a troubled history with the truth. Their ad buying customers proved as much when Facebook was forced to pay $40 million to settle a lawsuit alleging that they had inflated, “by up to 900 percent,” the time it said users spent watching videos [ 4 ]. While Facebook would neither admit nor deny the truth of this allegation, they did admit to the error in a 2016 statement [ 14 ].

This was not some innocuous lie that just cost a few firms some money either. As Slate explained in a 2018 article, “many [publications] laid off writers and editors and cut back on text stories to focus on producing short, snappy videos for people to watch in their Facebook feeds.” [ 40 ] People lost their livelihoods to this deception.

Is this an isolated incident? Or is fraud at Facebook systemic? Matt Stoller describes the contents of recently unsealed legal documents [ 12 ] in a lawsuit alleging Facebook has defrauded advertisers for years [ 46 ]:

The documents revealed that Facebook COO Sheryl Sandberg directly oversaw the alleged fraud for years. The scheme was simple. Facebook deceived advertisers by pretending that fake accounts represented real people, because ad buyers choose to spend on ad campaigns based on where they think their customers are. Former employees noted that the corporation did not care about the accuracy of numbers as long as the ad money was coming in. Facebook, they said, “did not give a shit.” The inflated statistics sometimes led to outlandish results. For instance, Facebook told advertisers that its services had a potential reach of 100 million 18–34-year-olds in the United States, even though there are only 76 million people in that demographic. After employees proposed a fix to make the numbers honest, the corporation rejected the idea, noting that the “revenue impact” for Facebook would be “significant.” One Facebook employee wrote, “My question lately is: how long can we get away with the reach overestimation?” According to these documents, Sandberg aggressively managed public communications over how to talk to advertisers about the inflated statistics, and Facebook is now fighting against her being interviewed by lawyers in a class action lawsuit alleging fraud.
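
The reach claim quoted above invites a quick sanity check; the arithmetic below is mine, using only the two figures in the documents.

```python
# Claimed "potential reach" vs. the actual US 18-34 population,
# per the unsealed documents quoted above.
claimed_reach = 100_000_000
actual_population = 76_000_000

overstatement = claimed_reach / actual_population - 1
print(f"Claimed reach exceeds the entire demographic by {overstatement:.0%}")
# -> about 32% more "reachable" 18-34-year-olds than exist in the US
```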

Facebook’s embrace of deception extends from its ad-buying fraud to the content on its platforms. For instance:

Those who would “aid[] and abet[] the spread of climate misinformation” on Facebook benefit from “a giant loophole in its fact-checking program.” Evidently, Facebook gives its staff the power to overrule climate scientists by deeming climate disinformation “opinion” [ 2 ].

The former managing editor of Snopes reported that Facebook was merely using the well-regarded fact-checking site for “crisis PR,” that they did not take fact checking seriously and would ignore concerns [ 35 ]. Snopes tried hard to push against the Myanmar disinformation campaign, amongst many other issues, but its concerns were ignored.

ProPublica recently reported [ 18 ] that Sheryl Sandberg silenced and censored a Kurdish militia group that “the Turkish government had targeted” in order to safeguard their revenue from Turkey.

Mark Zuckerberg and Joel Kaplan intervened [ 37 ] in April 2019 to keep Alex Jones on the platform, despite the right-wing conspiracy theorist’s lead role in spreading disinformation about the 2012 Sandy Hook elementary school shooting and the 2018 Parkland high school shooting.

Arguably, Facebook’s executive team has not only ceded responsibility as an “arbiter of truth,” but has also, on several notable occasions, intervened to ensure the continued proliferation of disinformation.

8 How do we disengage?

Facebook’s business model is focused entirely on increasing growth and user engagement. Its algorithms are extremely effective at doing so. The steps Facebook has taken, such as building “AI filters” or partnering with independent fact checkers, are superficial and toothless. They cannot begin to untangle the systemic issues at the heart of this matter, because these issues are Facebook’s entire reason for being.

So what can be done? Certainly, criminality needs to be prosecuted. Executives should go to jail for fraud. Social media companies, and their organizational leaders, should face legal liability for the impact of the content on their platforms. One effort to impose legal liability in the US centers on reforming Section 230 of the US Communications Decency Act. It, and similar laws around the world, should be reformed to create far more meaningful accountability and liability for the promotion of disinformation, violence, and extremism.

Most importantly, monopolies should be busted. Existing antitrust laws should be used to break up Facebook and restrict its future activities and acquisitions.

The matters outlined here have been brought to the attention of Facebook’s leadership in countless ways that are well documented and readily provable. But the changes required go well beyond effective leveraging of AI. At its heart, Facebook will not change because they do not want to, and are not incentivized to. Facebook must be regulated, and Facebook’s leadership structure must be dismantled.

It seems unlikely that politicians and regulators have the political will to do all of this, but there are some encouraging signs, especially regarding antitrust investigations [ 9 ] and lawsuits [ 28 ] in both the US and Europe. Still, this issue goes well beyond mere enforcement. Somehow we must shift the incentives for social media companies, who compete for, and monetize, our attention. Until we stop rewarding Facebook’s illicit behavior with engagement, it’s hard to see a way out of our current condition. These companies are building technology that is designed to draw us in with problematic content, addict us to outrage, and ultimately drive us apart. We no longer agree on shared facts or truths, a condition that is turning political adversaries into bitter enemies, that is transforming ideological difference into seething contempt. Rather than help us lead more fulfilling lives or find truth, Facebook is helping us to discover enemies among our fellow citizens, and bombarding us with reasons to hate them, all to the end of profitability. This path is unsustainable.

The only thing Facebook truly understands is money, and all of their money comes from engagement. If we disengage, they lose money. If we delete, they lose power. If we decline to be a part of their ecosystem, perhaps we can collectively return to a shared reality.

Notes

Facebook executives have, themselves, acknowledged that Facebook profits from the spread of misinformation: https://www.facebook.com/facebookmedia/blog/working-to-stop-misinformation-and-false-news

Cybersecurity for Democracy. (March 3, 2021). “Far-right news sources on Facebook more engaging.” https://medium.com/cybersecurity-for-democracy/far-right-news-sources-on-facebook-more-engaging-e04a01efae90 .

Facebook claims to have since broadened the metrics it uses to calculate executive pay, but to what extent this might offset the prime directive of maximizing user engagement is unclear.

References

Allcott, H., et al.: The Welfare Effects of Social Media. (2019). https://web.stanford.edu/~gentzkow/research/facebook.pdf

Atkin, E.: Facebook creates fact-checking exemption for climate deniers. Heated . (2020). https://heated.world/p/facebook-creates-fact-checking-exemption

Auxier, B., Anderson, M.: Social Media Use in 2021. Pew Research Center. (2021). https://www.pewresearch.org/internet/wp-content/uploads/sites/9/2021/04/PI_2021.04.07_Social-Media-Use_FINAL.pdf

Baron, E.: Facebook agrees to pay $40 million over inflated video-viewing times but denies doing anything wrong. The Mercury News . (2019). https://www.mercurynews.com/2019/10/07/facebook-agrees-to-pay-40-million-over-inflated-video-viewing-times-but-denies-doing-anything-wrong/

Boxell, L.: “The internet, social media, and political polarisation.” (2017). https://voxeu.org/article/internet-social-media-and-political-polarisation

Bradshaw, S., Howard, P.N.: The Global Disinformation Disorder: 2019 Global Inventory of Organised Social Media Manipulation. Working Paper 2019.2. Oxford: Project on Computational Propaganda. (2019)

Cabato, R.: Death threats, clone accounts: Another day fighting trolls in the Philippines. The Washington Post . (2020). https://www.washingtonpost.com/world/asia_pacific/facebook-trolls-philippines-death-threats-clone-accounts-duterte-terror-bill/2020/06/08/3114988a-a966-11ea-a43b-be9f6494a87d_story.html

Card, C.: “How Facebook AI Helps Suicide Prevention.” Facebook. (2018). https://about.fb.com/news/2018/09/inside-feed-suicide-prevention-and-ai/

Chee, F.Y.: “Facebook in EU antitrust crosshairs over data collection.” Reuters. (2019). https://www.reuters.com/article/us-eu-facebook-antitrust-idUSKBN1Y625J

Cyberbullying Statistics. Enough Is Enough. https://enough.org/stats_cyberbullying

Dwoskin, E.: Facebook’s Sandberg deflected blame for Capitol riot, but new evidence shows how platform played role. The Washington Post . (2021). https://www.washingtonpost.com/technology/2021/01/13/facebook-role-in-capitol-protest

DZ Reserve and Cain Maxwell v. Facebook, Inc. (2020). https://www.economicliberties.us/wp-content/uploads/2021/02/2021.02.17-Unredacted-Opp-to-Mtn-to-Dismiss.pdf

Eisenstat, Y.: Dear Facebook, this is how you’re breaking democracy [Video]. TED . (2020). https://www.ted.com/talks/yael_eisenstat_dear_facebook_this_is_how_you_re_breaking_democracy#t-385134

Fischer, D.: Facebook Video Metrics Update. Facebook . (2016). https://www.facebook.com/business/news/facebook-video-metrics-update

Fisher, M., Taub, A.: “How Everyday Social Media Users Become Real-World Extremists.” New York Times . (2018). https://www.nytimes.com/2018/04/25/world/asia/facebook-extremism.html

Frenkel, S.: “How Misinformation ‘Superspreaders’ Seed False Election Theories”. New York Times . (2020). https://www.nytimes.com/2020/11/23/technology/election-misinformation-facebook-twitter.html

Gallagher, A.: Profit and Protest: How Facebook is struggling to enforce limits on ads spreading hate, lies and scams about the Black Lives Matter protests . The Institute for Strategic Dialogue (2020)

Gillum, J., Ellion, J.: Sheryl Sandberg and Top Facebook Execs Silenced an Enemy of Turkey to Prevent a Hit to the Company’s Business. ProPublica . (2021). https://www.propublica.org/article/sheryl-sandberg-and-top-facebook-execs-silenced-an-enemy-of-turkey-to-prevent-a-hit-to-their-business

Gonzalez, R.: “Facebook Opens Its Private Servers to Scientists Studying Fake News.” Wired . (2018). https://www.wired.com/story/social-science-one-facebook-fake-news/

Guhl, J., Davey, J.: Hosting the ‘Holohoax’: A Snapshot of Holocaust Denial Across Social Media . The Institute for Strategic Dialogue (2020).

Hao, K.: “How Facebook got addicted to spreading misinformation”. MIT Technology Review . (2021). https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation

Horwitz, J., Seetharaman, D.: “Facebook Executives Shut Down Efforts to Make the Site Less Divisive.” Wall St Journal (2020)

Boxell, L., Gentzkow, M., Shapiro, J.M.: Internet use and political polarization. Proc. Natl. Acad. Sci. 114(40), 10612–10617 (2017). https://doi.org/10.1073/pnas.1706588114

Johnson, S.L., et al.: Understanding echo chambers and filter bubbles: the impact of social media on diversification and partisan shifts in news consumption. MIS Q. (2020). https://doi.org/10.25300/MISQ/2020/16371

Johnson, N.F., Velásquez, N., Restrepo, N.J., et al.: The online competition between pro- and anti-vaccination views. Nature 582 , 230–233 (2020). https://doi.org/10.1038/s41586-020-2281-1

Jones, J.: In Election 2020, How Did The Media, Electoral Process Fare? Republicans, Democrats Disagree. Knight Foundation . (2020). https://knightfoundation.org/articles/in-election-2020-how-did-the-media-electoral-process-fare-republicans-democrats-disagree

Kantrowitz, A.: “Facebook Is Still Prioritizing Scale Over Safety.” Buzzfeed.News . (2019). https://www.buzzfeednews.com/article/alexkantrowitz/after-years-of-scandal-facebooks-unhealthy-obsession-with

Kendall, B., McKinnon, J.D.: “Facebook Hit With Antitrust Lawsuits by FTC, State Attorneys General.” Wall St. Journal. (2020). https://www.wsj.com/articles/facebook-hit-with-antitrust-lawsuit-by-federal-trade-commission-state-attorneys-general-11607543139

Lauer, D.: [@dlauer]. And yet people believe them because of misinformation that is spread and monetized on facebook [Tweet]. Twitter. (2021). https://twitter.com/dlauer/status/1363923475040251905

Lauer, D.: You cannot have AI ethics without ethics. AI Ethics 1 , 21–25 (2021). https://doi.org/10.1007/s43681-020-00013-4

Lavi, M.: Do Platforms Kill? Harvard J. Law Public Policy. 43 (2), 477 (2020). https://www.harvard-jlpp.com/wp-content/uploads/sites/21/2020/03/Lavi-FINAL.pdf

LeCun, Y.: [@ylecun]. Does anyone still believe whatever these people are saying? No one should. Believing them kills [Tweet]. Twitter. (2021). https://twitter.com/ylecun/status/1363923178519732230

LeCun, Y.: [@ylecun]. The section about FB in your article is factually wrong. For starter, AI is used to filter things like hate speech, calls to violence, bullying, child exploitation, etc. Second, disinformation that endangers public safety or the integrity of the democratic process is filtered out [Tweet]. Twitter. (2021). https://twitter.com/ylecun/status/1364010548828987393

LeCun, Y.: [@ylecun]. As attractive as it may seem, this explanation is false. [Tweet]. Twitter. (2021). https://twitter.com/ylecun/status/1363985013147115528

Levin, S.: ‘They don’t care’: Facebook factchecking in disarray as journalists push to cut ties. The Guardian . (2018). https://www.theguardian.com/technology/2018/dec/13/they-dont-care-facebook-fact-checking-in-disarray-as-journalists-push-to-cut-ties

Mac, R.: “Growth At Any Cost: Top Facebook Executive Defended Data Collection In 2016 Memo—And Warned That Facebook Could Get People Killed.” Buzzfeed.News . (2018). https://www.buzzfeednews.com/article/ryanmac/growth-at-any-cost-top-facebook-executive-defended-data

Mac, R., Silverman, C.: “Mark Changed The Rules”: How Facebook Went Easy On Alex Jones And Other Right-Wing Figures. BuzzFeed.News . (2021). https://www.buzzfeednews.com/article/ryanmac/mark-zuckerberg-joel-kaplan-facebook-alex-jones

Mainstreaming Extremism: Social Media’s Role in Radicalizing America: Hearings before the Subcommittee on Consumer Protection and Commerce of the Committee on Energy and Commerce, 116th Cong. (2020) (testimony of Tim Kendall)

Meade, A.: “Facebook greatest source of Covid-19 disinformation, journalists say”. The Guardian . (2020). https://www.theguardian.com/technology/2020/oct/14/facebook-greatest-source-of-covid-19-disinformation-journalists-say

Oremus, W.: The Big Lie Behind the “Pivot to Video”. Slate . (2018). https://slate.com/technology/2018/10/facebook-online-video-pivot-metrics-false.html

Wood, M.J.: Propagating and Debunking Conspiracy Theories on Twitter During the 2015–2016 Zika Virus Outbreak. Cyberpsychology, Behavior, and Social Networking 21(8) (2018). https://doi.org/10.1089/cyber.2017.0669

Rajagopalan, M., Nazim, A.: “We Had To Stop Facebook”: When Anti-Muslim Violence Goes Viral. BuzzFeed.News . (2018). https://www.buzzfeednews.com/article/meghara/we-had-to-stop-facebook-when-anti-muslim-violence-goes-viral

Rosalsky, G.: “Are Conspiracy Theories Good For Facebook?”. Planet Money . (2020). https://www.npr.org/sections/money/2020/08/04/898596655/are-conspiracy-theories-good-for-facebook

Silverman, C., Mac, R.: “I Have Blood on My Hands”: A Whistleblower Says Facebook Ignored Global Political Manipulation. BuzzFeed.News . (2020). https://www.buzzfeednews.com/article/craigsilverman/facebook-ignore-political-manipulation-whistleblower-memo

Stecklow, S.: Why Facebook is losing the way on hate speech in Myanmar. Reuters . (2018). https://www.reuters.com/investigates/special-report/myanmar-facebook-hate/

Stoller, M.: Facebook: What is the Australian law? And why does FB keep getting caught for fraud? Substack. (2021). https://mattstoller.substack.com/p/facecrook-dealing-with-a-global-menace

Vosoughi, S., Roy, D., Aral, S.: The spread of true and false news online. Science 359(6380), 1146–1151 (2018). https://doi.org/10.1126/science.aap9559

The White House 45 Archived [@WhiteHouse45]: “These THUGS are dishonoring the memory of George Floyd, and I won’t let that happen. Just spoke to Governor Tim Walz and told him that the Military is with him all the way. Any difficulty and we will assume control but, when the looting starts, the shooting starts. Thank you!” [Tweet]. Twitter. (2020) https://twitter.com/WhiteHouse45/status/1266342941649506304

UNICEF.: UNICEF poll: More than a third of young people in 30 countries report being a victim of online bullying. (2019). https://www.unicef.org/press-releases/unicef-poll-more-third-young-people-30-countries-report-being-victim-online-bullying

Author information

David Lauer (corresponding author), Urvin AI, 413 Virginia Ave, Collingswood, NJ, 08107, USA

About this article

Lauer, D.: Facebook’s ethical failures are not accidental; they are part of the business model. AI Ethics 1, 395–403 (2021). https://doi.org/10.1007/s43681-021-00068-x

Received: 13 April 2021. Accepted: 29 May 2021. Published: 05 June 2021. Issue date: November 2021.

How Facebook Fails 90 Percent of Its Users

Internal documents show the company routinely placing public-relations, profit, and regulatory concerns over user welfare. And if you think it’s bad here, look beyond the U.S.

In the fall of 2019, Facebook launched a massive effort to combat the use of its platforms for human trafficking. Working around the clock, its employees searched Facebook and its subsidiary Instagram for keywords and hashtags that promoted domestic servitude in the Middle East and elsewhere. Over the course of a few weeks, the company took down 129,191 pieces of content, disabled more than 1,000 accounts, tightened its policies, and added new ways to detect this kind of behavior. After they were through, employees congratulated one another on a job well done.

It was a job well done. It just came a little late. In fact, a group of Facebook researchers focused on the Middle East and North Africa had found numerous Instagram profiles being used as advertisements for trafficked domestic servants as early as March 2018. “Indonesian brought with Tourist Visa,” one photo caption on a picture of a woman reads, in Arabic. “We have more of them.” But these profiles weren’t “actioned”—disabled or taken down—an internal report would explain, because Facebook’s policies “did not acknowledge the violation.” A year and a half later, an undercover BBC investigation revealed the full scope of the problem: a broad network that illegally trafficked domestic workers, facilitated by internet platforms and aided by algorithmically boosted hashtags. In response, Facebook banned one hashtag and took down some 700 Instagram profiles. But according to another internal report, “domestic servitude content remained on the platform.”

Not until October 23, 2019, did the hammer drop: Apple threatened to pull Facebook and Instagram from its App Store because of the BBC report. Motivated by what employees describe in an internal document as “potentially severe consequences to the business” that would result from an App Store ban, Facebook finally kicked into high gear. The document makes clear that the decision to act was not the result of new information: “Was this issue known to Facebook before BBC enquiry and Apple escalation? Yes.”

The document was part of the disclosure made to the Securities and Exchange Commission and provided to Congress in redacted form by Frances Haugen, the whistleblower and former Facebook data scientist. A consortium of more than a dozen news organizations, including The Atlantic , has reviewed the redacted versions.

Reading these documents is a little like going to the eye doctor and seeing the world suddenly sharpen into focus. In the United States, Facebook has facilitated the spread of misinformation, hate speech, and political polarization. It has algorithmically surfaced false information about conspiracy theories and vaccines, and was instrumental in the ability of an extremist mob to attempt a violent coup at the Capitol. That much is now painfully familiar.

But these documents show that the Facebook we have in the United States is actually the platform at its best. It’s the version made by people who speak our language and understand our customs, who take our civic problems seriously because those problems are theirs too. It’s the version that exists on a free internet, under a relatively stable government, in a wealthy democracy. It’s also the version to which Facebook dedicates the most moderation resources. Elsewhere, the documents show, things are different. In the most vulnerable parts of the world—places with limited internet access, where smaller user numbers mean bad actors have undue influence—the trade-offs and mistakes that Facebook makes can have deadly consequences.

According to the documents, Facebook is aware that its products are being used to facilitate hate speech in the Middle East, violent cartels in Mexico, ethnic cleansing in Ethiopia, extremist anti-Muslim rhetoric in India, and sex trafficking in Dubai. It is also aware that its efforts to combat these things are insufficient. A March 2021 report notes, “We frequently observe highly coordinated, intentional activity … by problematic actors” that is “particularly prevalent—and problematic—in At-Risk Countries and Contexts”; the report later acknowledges, “Current mitigation strategies are not enough.”

In some cases, employees have successfully taken steps to address these problems, but in many others, the company response has been slow and incomplete. As recently as late 2020, an internal Facebook report found that only 6 percent of Arabic-language hate content on Instagram was detected by Facebook’s systems. Another report that circulated last winter found that, of material posted in Afghanistan that was classified as hate speech within a 30-day range, only 0.23 percent was taken down automatically by Facebook’s tools. In both instances, employees blamed company leadership for insufficient investment.

In many of the world’s most fragile nations, a company worth hundreds of billions of dollars hasn’t invested enough in the language- and dialect-specific artificial intelligence and staffing it needs to address these problems. Indeed, last year, according to the documents, only 13 percent of Facebook’s misinformation-moderation staff hours were devoted to the non-U.S. countries in which it operates, whose populations comprise more than 90 percent of Facebook’s users. (Facebook declined to tell me how many countries it has users in.) And although Facebook users post in at least 160 languages, the company has built robust AI detection in only a fraction of those languages, the ones spoken in large, high-profile markets such as the U.S. and Europe—a choice, the documents show, that means problematic content is seldom detected.
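
The figures in this paragraph imply a stark per-user disparity. The back-of-envelope calculation below is mine; it assumes the US holds roughly 10 percent of users, consistent with the documents’ “more than 90 percent” figure for everyone else.

```python
# Misinformation-moderation staff hours vs. user base, per the documents:
# 13% of hours went to non-US countries, which hold over 90% of users.
us_hours_share, non_us_hours_share = 0.87, 0.13
us_user_share, non_us_user_share = 0.10, 0.90  # assumed approximate split

hours_per_user_us = us_hours_share / us_user_share
hours_per_user_rest = non_us_hours_share / non_us_user_share
print(f"US users receive ~{hours_per_user_us / hours_per_user_rest:.0f}x "
      "the moderation hours per user of users elsewhere")
# -> roughly 60x under these assumptions
```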

The granular, procedural, sometimes banal back-and-forth exchanges recorded in the documents reveal, in unprecedented detail, how the most powerful company on Earth makes its decisions. And they suggest that, all over the world, Facebook’s choices are consistently driven by public perception, business risk, the threat of regulation, and the specter of “PR fires,” a phrase that appears over and over in the documents. In many cases, Facebook has been slow to respond to developing crises outside the United States and Europe until its hand is forced. “It’s an open secret … that Facebook’s short-term decisions are largely motivated by PR and the potential for negative attention,” an employee named Sophie Zhang wrote in a September 2020 internal memo about Facebook’s failure to act on global misinformation threats. (Most employee names have been redacted for privacy reasons in these documents, but Zhang left the company and came forward as a whistleblower after she wrote this memo.)

Sometimes, even negative attention isn’t enough. In 2019, the human-rights group Avaaz found that Bengali Muslims in India’s Assam state were “facing an extraordinary chorus of abuse and hate” on Facebook: Posts calling Muslims “pigs,” “rapists,” and “terrorists” were shared tens of thousands of times and left on the platform because Facebook’s artificial-intelligence systems weren’t built to automatically detect hate speech in Assamese, which is spoken by 23 million people. Facebook removed 96 of the 213 “clearest examples” of hate speech Avaaz flagged for the company before publishing its report. Facebook still does not have technology in place to automatically detect Assamese hate speech.

In a memo dated December 2020 and posted to Workplace, Facebook’s very Facebooklike internal message board, an employee argued that “Facebook’s decision-making on content policy is routinely influenced by political considerations.” To hear this employee tell it, the problem was structural: Employees who are primarily tasked with negotiating with governments over regulation and national security, and with the press over stories, were empowered to weigh in on conversations about building and enforcing Facebook’s rules regarding questionable content around the world. “Time and again,” the memo quotes a Facebook researcher saying, “I’ve seen promising interventions … be prematurely stifled or severely constrained by key decisionmakers—often based on fears of public and policy stakeholder responses.”

Among the consequences of that pattern, according to the memo: The Hindu-nationalist politician T. Raja Singh, who posted to hundreds of thousands of followers on Facebook calling for India’s Rohingya Muslims to be shot—in direct violation of Facebook’s hate-speech guidelines—was allowed to remain on the platform despite repeated requests to ban him, including from the very Facebook employees tasked with monitoring hate speech. A 2020 Wall Street Journal article reported that Facebook’s top public-policy executive in India had raised concerns about backlash if the company were to do so, saying that cracking down on leaders from the ruling party might make running the business more difficult. The company eventually did ban Singh, but not before his posts ping-ponged through the Hindi-speaking world.

In a Workplace thread apparently intended to address employee frustration after the Journal article was published, a leader explained that Facebook’s public-policy teams “are important to the escalations process in that they provide input on a range of issues, including translation, socio-political context, and regulatory risks of different enforcement options.”

Employees weren’t placated. In dozens and dozens of comments, they questioned the decisions Facebook had made regarding which parts of the company to involve in content moderation, and raised doubts about its ability to moderate hate speech in India. They called the situation “sad” and Facebook’s response “inadequate,” and wondered about the “propriety of considering regulatory risk” when it comes to violent speech.

“I have a very basic question,” wrote one worker. “Despite having such strong processes around hate speech, how come there are so many instances that we have failed? It does speak on the efficacy of the process.”

Two other employees said that they had personally reported certain Indian accounts for posting hate speech. Even so, one of the employees wrote, “they still continue to thrive on our platform spewing hateful content.”

We “cannot be proud as a company,” yet another wrote, “if we continue to let such barbarism flourish on our network.”

Taken together, Frances Haugen’s leaked documents show Facebook for what it is: a platform racked by misinformation, disinformation, conspiracy thinking, extremism, hate speech, bullying, abuse, human trafficking, revenge porn, and incitements to violence. It is a company that has pursued worldwide growth since its inception—and then, when called upon by regulators, the press, and the public to quell the problems its sheer size has created, it has claimed that its scale makes completely addressing those problems impossible. Instead, Facebook’s 60,000-person global workforce is engaged in a borderless, endless, ever-bigger game of whack-a-mole, one with no winners and a lot of sore arms.

Sophie Zhang was one of the people playing that game. Despite being a junior-level data scientist, she had a knack for identifying “coordinated inauthentic behavior,” Facebook’s term for the fake accounts that have exploited its platforms to undermine global democracy, defraud users, and spread false information. In her memo, which is included in the Facebook Papers but was previously leaked to BuzzFeed News, Zhang details what she found in her nearly three years at Facebook: coordinated disinformation campaigns in dozens of countries, including India, Brazil, Mexico, Afghanistan, South Korea, Bolivia, Spain, and Ukraine. In some cases, such as in Honduras and Azerbaijan, Zhang was able to tie accounts involved in these campaigns directly to ruling political parties. In the memo, posted to Workplace the day Zhang was fired from Facebook for what the company alleged was poor performance, she says that she made decisions about these accounts with minimal oversight or support, despite repeated entreaties to senior leadership. On multiple occasions, she said, she was told to prioritize other work.

Facebook has not disputed Zhang’s factual assertions about her time at the company, though it maintains that controlling abuse of its platform is a top priority. A Facebook spokesperson said that the company tries “to keep people safe even if it impacts our bottom line,” adding that the company has spent $13 billion on safety since 2016. “Our track record shows that we crack down on abuse abroad with the same intensity that we apply in the U.S.”

Zhang’s memo, though, paints a different picture. “We focus upon harm and priority regions like the United States and Western Europe,” she wrote. But eventually, “it became impossible to read the news and monitor world events without feeling the weight of my own responsibility.” Indeed, Facebook explicitly prioritizes certain countries for intervention by sorting them into tiers, the documents show. Zhang “chose not to prioritize” Bolivia, despite credible evidence of inauthentic activity in the run-up to the country’s 2019 election. That election was marred by claims of fraud, which fueled widespread protests; more than 30 people were killed and more than 800 were injured.


“I have blood on my hands,” Zhang wrote in the memo. By the time she left Facebook, she was having trouble sleeping at night. “I consider myself to have been put in an impossible spot—caught between my loyalties to the company and my loyalties to the world as a whole.”

In February, just over a year after Facebook’s high-profile sweep for Middle Eastern and North African domestic-servant trafficking, an internal report identified a web of similar activity, in which women were being trafficked from the Philippines to the Persian Gulf, where they were locked in their homes, denied pay, starved, and abused. This report found that content “should have been detected” for violating Facebook’s policies but had not been, because the mechanism that would have detected much of it had recently been made inactive. The title of the memo is “Domestic Servitude: This Shouldn’t Happen on FB and How We Can Fix It.”

What happened in the Philippines—and in Honduras, and Azerbaijan, and India, and Bolivia—wasn’t just that a very large company lacked a handle on the content posted to its platform. It was that, in many cases, a very large company knew what was happening and failed to meaningfully intervene.

That Facebook has repeatedly prioritized solving problems for Facebook over solving problems for users should not be surprising. The company is under the constant threat of regulation and bad press. Facebook is doing what companies do, triaging and acting in its own self-interest.

But Facebook is not like other companies. It is bigger, and the stakes of its decisions are higher. In North America, we have recently become acutely aware of the risks and harms of social media. But the Facebook we see is the platform at its best. Any solutions will need to apply not only to the problems we still encounter here, but also to those with which the other 90 percent of Facebook’s users struggle every day.


Essay on Facebook

500 Words Essay on Facebook

Facebook has become one of the most famous social networking sites. However, it comes with its own set of pros and cons. While it has helped many individuals and businesses build their brands, it is also being used for the wrong activities. Through an essay on Facebook, we will go through all of this in detail.


Benefits of Facebook

Facebook is currently experiencing dramatic growth, with its number of users having reached one billion. It comes with a lot of benefits, like video calling your close ones and uploading your photos and videos without charge.

Most importantly, it allows you to get in touch with people from the other side of the world without spending a penny. It is also a great way to connect with old school friends and college friends.

Further, you can also make new friends through this platform. When you connect with people from all over the world, it opens doors to learning about new cultures, values and traditions from different countries.

It also gives you features for group discussions and chatting. Now, Facebook also allows users to sell their products or services through its site. It is a great way of increasing sales and establishing your business online.

Thus, it brings you new leads and clients. Facebook Ads help you advertise your business and target your audience specifically. Similarly, it also has gaming options to enjoy when you are bored.

Most importantly, it is also a great source of information and news. It helps you stay updated with the latest happenings in the world, and subscribing to popular fan pages brings you the latest updates.

Drawbacks of Facebook

While it does offer many advantages, it also has many drawbacks. First of all, it compromises your privacy to a great extent. Many cases have been filed over precisely this issue.

Further, you are at risk of theft if you use it for online banking and the like. Similarly, it also exposes you to virus attacks. A seemingly harmless link may activate a virus on your computer without you knowing.

Moreover, you also get spam emails because of Facebook, which may be frustrating at times. The biggest disadvantage has to be child pornography. It gives access to a lot of pornographic photos and videos.

Similarly, it is also a place where paedophiles connect with minors and lure them easily under false pretences. A lot of hackers also use Facebook to break into people’s personal information and profit from it.

Another major drawback is Facebook addiction. It is like an abyss that makes you scroll endlessly. You can waste hours there without even realizing it, and it hampers your productivity by taking more away from you than it gives.


Conclusion of the Essay on Facebook

To sum it up, if we use Facebook in the right proportion and with proper care, it can be a powerful tool for anyone. Moreover, it can be great for marketing and networking. Further, any business can leverage its power to make itself a success. But it is essential to remember not to let it become an addiction.

FAQ of Essay on Facebook

Question 1: What is the purpose of Facebook?

Answer 1: The purpose of Facebook is to allow people to build a community and make the world a smaller place. It helps to connect with friends and family and also discover all the latest happenings in the world.

Question 2: What is the disadvantage of Facebook?

Answer 2: Facebook is potentially addictive and can hamper people’s productivity. It also makes you vulnerable to malware and viruses. Further, it has given rise to identity theft.



Academic study reveals new evidence of Facebook's negative impact on the mental health of college students

MIT Sloan Office of Communications

Sep 27, 2022

Researchers created a control group by comparing colleges that had access to the platform with colleges that did not during the first two years of its existence

CAMBRIDGE, Mass., Sept. 27, 2022 — A new study led by researchers from Tel Aviv University, MIT Sloan School of Management and Bocconi University reveals new findings about the negative impact of Facebook on the mental health of American college students. The study focuses on Facebook's first two-and-a-half years (2004-2006), when the new social network was gradually spreading through academic institutions, and it was still possible to detect its impact by comparing colleges that had access to the platform to colleges that did not. The study found a rise in the number of students reporting severe depression and anxiety (7% and 20%, respectively) at colleges with access to Facebook.

The study was led by Dr. Roee Levy of the School of Economics at Tel Aviv University, Prof. Alexey Makarin of MIT Sloan School of Management, and Prof. Luca Braghieri of Bocconi University. The paper is forthcoming in the academic journal American Economic Review.

"Over the last fifteen years, the mental health trends of adolescents and young adults in the United States have worsened considerably," said Prof. Braghieri. "Since such worsening in trends coincided with the rise of social media, it seemed plausible to speculate that the two phenomena might be related."

The study goes back to the advent of Facebook at Harvard University in 2004, when it was one of the world's first online social networks. Facebook was initially accessible only to Harvard students who had a Harvard email address. Quickly spreading to other colleges in and outside the US, the network was finally made available to the general public in the US and beyond in September 2006.

The researchers studied Facebook's gradual expansion during those first two-and-a-half years to compare the mental health of students in colleges that had access to Facebook with that of students in colleges that did not have access to the platform at that time. Their methodology also took into account any differences in mental health over time or across colleges that were not related to Facebook. This approach enabled conditions similar to those of a 'natural experiment', which would clearly be impossible today, now that billions of people use many different social networks.
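
The comparison described here is, in spirit, a difference-in-differences design: the change in outcomes at colleges that gained Facebook access is measured against the change at colleges that did not, so that shocks common to all colleges cancel out. Below is a minimal sketch of that logic on invented numbers; the column names and values are illustrative assumptions, not the study's actual data or code.

```python
# A minimal difference-in-differences sketch on made-up college survey data.
# All numbers and column names are illustrative assumptions, not the study's data.
import pandas as pd

data = pd.DataFrame({
    "college": ["A", "A", "B", "B"],
    "treated": [True, True, False, False],   # college A gains Facebook access
    "period":  ["before", "after", "before", "after"],
    "pct_severe_depression": [11.0, 13.0, 11.5, 12.0],
})

def mean_outcome(treated: bool, period: str) -> float:
    rows = data[(data["treated"] == treated) & (data["period"] == period)]
    return rows["pct_severe_depression"].mean()

# Change at treated colleges minus change at control colleges: shocks that
# hit every college over time (e.g., a national trend) cancel out.
did = (mean_outcome(True, "after") - mean_outcome(True, "before")) - (
    mean_outcome(False, "after") - mean_outcome(False, "before"))
print(f"Difference-in-differences estimate: {did:+.1f} percentage points")
```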

Prof. Makarin said, "Many studies have found a correlation between the use of social media and various symptoms related to mental health. However, so far, it has been challenging to ascertain whether social media was actually the  cause  of poor mental health. In this study, by applying a novel research method, we were able to establish this causality."

The study combined information from two different datasets: the specific dates on which Facebook was introduced at 775 American colleges, and the National College Health Assessment (NCHA), a survey conducted periodically at American colleges.

The researchers built an index based on 15 relevant questions in the NCHA, in which students were asked about their mental health in the past year. They found a statistically significant worsening in mental health symptoms, especially depression and anxiety, after the arrival of Facebook:

  • a rise of 7% in the number of students who had suffered, at least once during the preceding year, from depression so severe that it was difficult for them to function;
  • a rise of 20% in those who reported anxiety disorders;
  • an increase in the percentage of students expected to experience moderate to severe depression - from 25% to 27%;
  • a rise in the percentage of students who had experienced impairment to their academic performance due to depression or anxiety - from 13% to 16%.

Moreover, the impact of Facebook on mental health was measured at 25% of the impact of losing a job, and at 85% of the gap between the mental states of students with and without financial debt, with both loss of employment and debt known to strongly affect mental health.

Dr. Levy said, "When studying the potential mechanisms, we hypothesized that unfavorable social comparisons could explain the effects we found, and that students more susceptible to such comparisons were more likely to suffer negative effects. To test this interpretation, we looked at more data from the NCHA. We found, for example, a greater negative impact on the mental health of students who lived off-campus and were consequently less involved in social activities, and a greater negative impact on students with credit card debts who saw their supposedly wealthier peers on the network."

"We also found evidence that Facebook had changed students' beliefs about their peers: more students believed that others consumed more alcohol, even though alcohol consumption had not changed significantly. We conclude that even today, despite familiarity with the social networks and their impact, many users continue to envy their online friends and struggle to distinguish between the image on the screen and real life."

About the MIT Sloan School of Management

The MIT Sloan School of Management is where smart, independent leaders come together to solve problems, create new organizations, and improve the world. Learn more at mitsloan.mit.edu.


The Facebook Dilemma

October 29 and 30, 2018

Season 2018: Episode 4

The promise of Facebook was to create a more open and connected world. But from the company’s failure to protect millions of users’ data, to the proliferation of “fake news” and disinformation, mounting crises have raised the question: Is Facebook more harmful than helpful? This major, two-night event investigates a series of warnings to Facebook as the company grew from Mark Zuckerberg’s Harvard dorm room to a global empire. With dozens of original interviews and rare footage, The Facebook Dilemma examines the powerful social media platform’s impact on privacy and democracy in the U.S. and around the world. An encore presentation will air Dec. 11 and 18.



Facebook Essay


To make this Facebook essay easy for any reader to understand, the author will start with some terminology.

Facebook is among the most popular social media networking sites today. It is popular due to its multiple applications and the ease of communication it offers to the user. It allows people to share pictures, events and statuses on a single platform.

Facebook has several benefits, such as forming groups, chatting with friends and finding information on multiple topics. The platform is also highly informative thanks to its many pages on a host of topics, including health, education, science and exercise. It is also perfect for keeping in touch with relatives and friends, who can all stay connected on a single platform.

Below, this essay about Facebook will dive deeper into the platform’s advantages and how it can help kids, students, and adults communicate.

More recently, mobile companies have enabled users to connect to Facebook through their phones. Mobile phone technology such as GPRS now allows users to access Facebook from any location. This feature has made Facebook extremely popular among today’s generation.

Staying connected has never been as simple and effective as it is on Facebook. Talking to friends, relatives or family members is now possible with a single Facebook account, which makes it a perfect platform to chat and communicate.

A more recent addition to the online chat program is the video calling feature, which has gained immense popularity. Not only can one talk to people, but one can also see them live with the help of this video chat feature.

Individuals no longer have to struggle to keep in touch with their friends and dear ones. A single Facebook account enables users to achieve several functions all at once.

Another very important feature of Facebook is the online gaming portal it offers to its users. There are hundreds of thousands of games on Facebook which one can play at any given time. The interesting aspect is the ability to play these games with friends.

There are multiple games like Poker, Diamond Dash, Zuma, Farm Heroes Saga and others on Facebook.

Playing these games is a unique and special experience since it allows users to interact with friends and engage in healthy competition. There are no additional costs, and users can play games completely free of charge.

Facebook is becoming a highly successful platform not only for making new friends and finding old ones, but for accessing global and local news as well. Most news and media companies have launched their own Facebook pages.

This has added an extra benefit to Facebook, making it educational and purposeful. Besides being a medium to interact and communicate, Facebook has become a marketing platform for many popular brands. Today, one can easily access all the famous global brands on Facebook.

Several small-time businesses have become successful on Facebook. People who do not have the capital to open a store have launched their products on Facebook, gaining financial success and recognition.

One can buy practically anything on Facebook, from shoes, bags, accessories and clothes to phones, laptops, electronics and more. Many of these online stores offer the facility to make online payments and deliver goods to the buyer’s home.

Thus, through Facebook, people can engage in a host of activities such as playing games, interacting with friends, chatting, video conferencing, marketing, buying, selling and numerous others. Facebook is no longer only a social networking site to stay connected with friends and family.

It has become a platform with online marketing options for its users. When used responsibly, Facebook is an excellent medium for several purposes, with extremely low costs and high benefits for users.





The Facebook Papers: What you need to know about the trove of insider documents

Bill Chappell


Facebook CEO Mark Zuckerberg testifies on Capitol Hill in April 2018. A trove of insider documents known as the Facebook Papers has the company facing backlash over its effects on society and politics. (Chip Somodevilla/Getty Images)

Facebook's rank-and-file employees warned their leaders about the company's effects on society and politics in the United States. And they say its inability to effectively moderate content has magnified those dangers, both in the U.S. and abroad. Those are two of the main takeaways from thousands of internal Facebook documents that NPR and other news outlets have reviewed.

The documents, known collectively as the Facebook Papers, were shared in redacted form with Congress after whistleblower Frances Haugen, a former Facebook product manager, disclosed them to the Securities and Exchange Commission.

Haugen alleges that the trove of statements and data prove that Facebook's leaders have repeatedly and knowingly put the company's image and profitability ahead of the public good — even at the risk of violence and other harm.


Some of the internal documents initially emerged last month in The Wall Street Journal. They include internal research findings and audits that the company performed on its own practices.

Here are four main takeaways from news outlets' review of the documents:

Facebook employees hotly debated its policies, especially after Jan. 6

When then-President Donald Trump's supporters mounted an insurrection at the U.S. Capitol on Jan. 6, Facebook rushed to take technical measures that aimed to clamp down on misinformation and content that might incite further violence. The next day, it banned Trump from the platform — at first temporarily, but then permanently.

In the weeks leading up to the violence, Facebook worked to defuse vitriol and conspiracy theories from Trump voters who refused to accept his defeat. As NPR's Shannon Bond and Bobby Allyn have reported, the company repeatedly shut down groups affiliated with the Stop the Steal movement. But those groups were attracting hundreds of thousands of users, and Facebook was unable to keep pace as the conspiracy theorists regrouped.

The post-election turmoil put a quick end to the relief that many at Facebook felt on Nov. 3, when the U.S. election played out mostly peacefully and without inklings of foreign meddling.

But then came Jan. 6 — and as the assault on the Capitol riveted and horrified audiences in the U.S. and elsewhere, Facebook employees aired their frustration and anger.


"We've been fueling this fire for a long time and we shouldn't be surprised it's now out of control," one employee wrote on an internal message board, the documents show.

"Hang in there everyone," Mike Schroepfer, Facebook's chief technology officer, wrote on a message board, calling for calm as he explained the company's approach to the riot, according to the documents.

In response to Schroepfer's message, Facebook employees said it was too little too late.

"I came here hoping to effect change and improve society, but all I've seen is atrophy and abdication of responsibility," one commenter said, according to the documents.

In a statement to NPR, Facebook spokesman Andy Stone said Facebook did not bear responsibility for the Capitol siege.

"The responsibility for the violence that occurred on January 6 lies with those who attacked our Capitol and those who encouraged them," Stone said.

Content standards were contorted, often out of fear of riling high-profile accounts

One of the earliest revelations from the internal documents is the detail they provide about Facebook's separate set of content standards for high-profile accounts, such as those for Trump, or for celebrities.

During Trump's presidency, he regularly made false and inflammatory statements about a wide range of matters. But only a small handful were removed by Facebook, as when the then-president made dangerous claims like saying COVID-19 was less dangerous than the flu or stating that children were "almost immune from this disease."

Facebook has previously defended its approach to such controversial and misleading statements, saying politicians like Trump should be allowed to say what they believe so the public knows what they think. Facebook CEO Mark Zuckerberg has also repeatedly insisted that Facebook is merely a platform, not the "arbiter of truth."


But the documents suggest Facebook's policy of treating influential people differently — codified in a VIP system called XCheck — was created in large part to prevent a public relations backlash from celebrities and other high-profile users.

The entire premise of the XCheck system, the Journal's Jeff Horwitz told NPR in September, "is to never publicly tangle with anyone who is influential enough to do you harm."

Facebook's own Oversight Board sharply criticized the program last week, saying the company has not been forthcoming enough about its varying standards for content moderation.

A Facebook spokesperson told NPR in a statement that the company asked the board to review the program because it aims "to be clearer in our explanations to them going forward."

Young people see Facebook content as "boring, misleading, and negative"

For much of the past decade, senior citizens have been the fastest-growing U.S. demographic on Facebook — a dramatic turnabout for a company whose founding mystique rests on the image of a hoodie-wearing coder creating a space for college kids to connect.

During the same timespan, Facebook has seen younger people become less likely to join the site. It's a worrying trend for the company — Facebook insiders got an update on that trend this year, in an internal presentation that is reflected in the documents.

"Most young adults perceive Facebook as a place for people in their 40s and 50s," the company's researchers said, according to The Verge . "Young adults perceive content as boring, misleading, and negative. They often have to get past irrelevant content to get to what matters."


Along with that stumbling block, young users were found to have negative views of Facebook due to privacy concerns and its potential "impact to their wellbeing," The Verge reports.

Haugen previously leaked a Facebook study that found that 13.5% of British teen girls in a survey said their suicidal thoughts became more frequent after they joined Instagram.

In addition to its namesake platform, Facebook owns Instagram and WhatsApp.

"It is clear that Facebook prioritizes profit over the well-being of children and all users," Sen. Marsha Blackburn, R-Tenn., said during a Senate hearing this month in which Haugen testified.

Facebook's global reach exceeds its grasp

While much of the focus on Facebook in the U.S. has to do with its role in enabling and intensifying political divisions, the documents also fault the company for its activities in numerous other countries.

The documents portray Facebook as failing to deal with a number of social and language complexities stemming from its more than 2.8 billion users worldwide. The results have been especially dangerous and harmful in countries where unrest or rights abuses are common, the documents state.

"Two years ago, Apple threatened to pull Facebook and Instagram from its app store over concerns about the platform being used as a tool to trade and sell maids in the Mideast," The Associated Press reports .

The company routinely struggles with posts and comments in Arabic, both on its main platform and on Instagram, according to the documents. Arabic is one of the world's most widely spoken languages, but its many dialects are highly distinct from one another.

Facebook "doesn't have anyone who can speak most of them or can understand most of them in terms of sort of the vernacular," Horwitz told NPR. "And it also doesn't have a system to route content in those dialects to the right people."

The problem extends beyond Arabic and has a wide range of effects.

"In countries like Afghanistan and Myanmar, these loopholes have allowed inflammatory language to flourish on the platform," the AP reports , "while in Syria and the Palestinian territories, Facebook suppresses ordinary speech, imposing blanket bans on common words."

As similar stories emerged over the weekend about India and Ethiopia, Facebook said that it has more than 40,000 people "working on safety and security, including global content review teams in over 20 sites around the world reviewing content in over 70 languages."

Editor's note: Facebook is among NPR's recent financial supporters.


How Facebook Makes Us Unhappy


No one joins Facebook to be sad and lonely. But a new study from the University of Michigan psychologist Ethan Kross argues that that’s exactly how it makes us feel. Over two weeks, Kross and his colleagues sent text messages to eighty-two Ann Arbor residents five times per day. The researchers wanted to know a few things: how their subjects felt overall, how worried and lonely they were, how much they had used Facebook, and how often they had had direct interaction with others since the previous text message. Kross found that the more people used Facebook in the time between the two texts, the less happy they felt—and the more their overall satisfaction declined from the beginning of the study until its end. The data, he argues, shows that Facebook was making them unhappy.

Research into the alienating nature of the Internet—and Facebook in particular—supports Kross’s conclusion. In 1998, Robert Kraut, a researcher at Carnegie Mellon University, found that the more people used the Web, the lonelier and more depressed they felt. After people went online for the first time, their sense of happiness and social connectedness dropped, over one to two years, as a function of how often they used the Internet.

Lonelier people weren’t inherently more likely to go online, either; a recent review of some seventy-five studies concluded that “users of Facebook do not differ in most personality traits from nonusers of Facebook.” (Nathan Heller wrote about loneliness in the magazine last year.) But, somehow, the Internet seemed to make them feel more alienated. A 2010 analysis of forty studies also confirmed the trend: Internet use had a small, significant detrimental effect on overall well-being. One experiment concluded that Facebook could even cause problems in relationships, by increasing feelings of jealousy.

Another group of researchers has suggested that envy, too, increases with Facebook use: the more time people spent browsing the site, as opposed to actively creating content and engaging with it, the more envious they felt. The effect, suggested Hanna Krasnova and her colleagues, was a result of the well-known social-psychology phenomenon of social comparison. It was further exacerbated by a general similarity of people’s social networks to themselves: because the point of comparison is like-minded peers, learning about the achievements of others hits even harder. The psychologist Beth Anderson and her colleagues argue, in a recent review of Facebook’s effects, that using the network can quickly become addictive, which comes with a nagging sense of negativity that can lead to resentment of the network for some of the same reasons we joined it to begin with. We want to learn about other people and have others learn about us—but through that very learning process we may start to resent both others’ lives and the image of ourselves that we feel we need to continuously maintain. “It may be that the same thing people find attractive is what they ultimately find repelling,” said the psychologist Samuel Gosling, whose research focusses on social-media use and the motivations behind social networking and sharing.

But, as with most findings on Facebook, the opposite argument is equally prominent. In 2009, Sebastián Valenzuela and his colleagues came to the opposite conclusion of Kross: that using Facebook makes us happier. They also found that it increases social trust and engagement—and even encourages political participation. Valenzuela’s findings fit neatly with what social psychologists have long known about sociality: as Matthew Lieberman argues in his book “Social: Why Our Brains are Wired to Connect,” social networks are a way to share, and the experience of successful sharing comes with a psychological and physiological rush that is often self-reinforcing. The prevalence of social media has, as a result, fundamentally changed the way we read and watch: we think about how we’ll share something, and whom we’ll share it with, as we consume it. The mere thought of successful sharing activates our reward-processing centers, even before we’ve actually shared a single thing.

Virtual social connection can even provide a buffer against stress and pain: in a 2009 study, Lieberman and his colleagues demonstrated that a painful stimulus hurt less when a woman either held her boyfriend’s hand or looked at his picture; the pain-dulling effects of the picture were, in fact, twice as powerful as physical contact. Somehow, the element of distance and forced imagination—a mental representation in lieu of the real thing, something that the psychologists Wendi Gardner and Cindy Pickett call “social snacking”—had an anesthetic effect, one we might expect to carry through to an entire network of pictures of friends.

The key to understanding why reputable studies are so starkly divided on the question of what Facebook does to our emotional state may be in simply looking at what people actually do when they’re on Facebook. “What makes it complicated is that Facebook is for lots of different things—and different people use it for different subsets of those things. Not only that, but they are also changing things, because of people themselves changing,” said Gosling. A 2010 study from Carnegie Mellon found that, when people engaged in direct interaction with others—that is, posting on walls, messaging, or “liking” something—their feelings of bonding and general social capital increased, while their sense of loneliness decreased. But when participants simply consumed a lot of content passively, Facebook had the opposite effect, lowering their feelings of connection and increasing their sense of loneliness.

In an unrelated experiment from the University of Missouri, a group of psychologists found a physical manifestation of these same effects. As study participants interacted with the site, four electrodes attached to the areas just above their eyebrows and just below their eyes recorded their facial expressions in a procedure known as facial electromyography. When the subjects were actively engaged with Facebook, their physiological response measured a significant uptick in happiness. When they were passively browsing, however, the positive effect disappeared.

This aligns with research conducted earlier this year by John Eastwood and his colleagues at York University in a meta-analysis of boredom. What causes us to feel bored and, as a result, unhappy? Attention. When our attention is actively engaged, we aren’t bored; when we fail to engage, boredom sets in. As Eastwood’s work, along with recent research on media multitasking, has illustrated, the greater the number of things we have pulling at our attention, the less we are able to meaningfully engage, and the more discontented we become.

In other words, the world of constant connectivity and media, as embodied by Facebook, is the social network’s worst enemy: in every study that distinguished the two types of Facebook experiences—active versus passive—people spent, on average, far more time passively scrolling through newsfeeds than they did actively engaging with content. This may be why general studies of overall Facebook use, like Kross’s of Ann Arbor residents, so often show deleterious effects on our emotional state. Demands on our attention lead us to use Facebook more passively than actively, and passive experiences, no matter the medium, translate to feelings of disconnection and boredom.

In ongoing research, the psychologist Timothy Wilson has learned, as he put it to me, that college students start going “crazy” after just a few minutes in a room without their phones or a computer. “One would think we could spend the time mentally entertaining ourselves,” he said. “But we can’t. We’ve forgotten how.” Whenever we have downtime, the Internet is an enticing, quick solution that immediately fills the gap. We get bored, look at Facebook or Twitter, and become more bored. Getting rid of Facebook wouldn’t change the fact that our attention is, more and more frequently, forgetting the path to proper, fulfilling engagement. And in that sense, Facebook isn’t the problem. It’s the symptom.

Maria Konnikova is the author of the New York Times best-seller “Mastermind: How to Think Like Sherlock Holmes.” She has a Ph.D. in psychology from Columbia University.


Facebook has a misinformation problem, and is blocking access to data about how much there is and who is affected


Ethan Zuckerman, Associate Professor of Public Policy, Communication, and Information, UMass Amherst

Disclosure statement

Ethan Zuckerman receives funding from the MacArthur Foundation, the Knight Foundation and the Ford Foundation. He is affiliated with the Danielle Allen for Governor (MA) campaign.

UMass Amherst provides funding as a founding partner of The Conversation US.


Leaked internal documents suggest Facebook – which recently renamed itself Meta – is doing far worse than it claims at minimizing COVID-19 vaccine misinformation on the Facebook social media platform.

Online misinformation about the virus and vaccines is a major concern. In one study, survey respondents who got some or all of their news from Facebook were significantly more likely to resist the COVID-19 vaccine than those who got their news from mainstream media sources.

As a researcher who studies social and civic media, I believe it’s critically important to understand how misinformation spreads online. But this is easier said than done. Simply counting instances of misinformation found on a social media platform leaves two key questions unanswered: How likely are users to encounter misinformation, and are certain users especially likely to be affected by misinformation? These questions are the denominator problem and the distribution problem.

The COVID-19 misinformation study, “Facebook’s Algorithm: a Major Threat to Public Health,” published by public interest advocacy group Avaaz in August 2020, reported that sources that frequently shared health misinformation — 82 websites and 42 Facebook pages — had an estimated total reach of 3.8 billion views in a year.

At first glance, that’s a stunningly large number. But it’s important to remember that this is the numerator. To understand what 3.8 billion views in a year means, you also have to calculate the denominator. The numerator is the part of a fraction above the line, which is divided by the part of the fraction below the line, the denominator.

Getting some perspective

One possible denominator is 2.9 billion monthly active Facebook users, in which case, on average, every Facebook user has been exposed to at least one piece of information from these health misinformation sources. But these are 3.8 billion content views, not discrete users. How many pieces of information does the average Facebook user encounter in a year? Facebook does not disclose that information.


Market researchers estimate that Facebook users spend from 19 minutes a day to 38 minutes a day on the platform. If the 1.93 billion daily active users of Facebook see an average of 10 posts in their daily sessions – a very conservative estimate – the denominator for that 3.8 billion pieces of information per year is 7.044 trillion (1.93 billion daily users times 10 daily posts times 365 days in a year). This means roughly 0.05% of content on Facebook is posts by these suspect Facebook pages.

The 3.8 billion views figure encompasses all content published on these pages, including innocuous health content, so the proportion of Facebook posts that are health misinformation is smaller than one-twentieth of a percent.
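
For readers who want to check the arithmetic, the same back-of-the-envelope estimate can be written as a short script. The inputs are the figures quoted above, including the article's deliberately conservative assumption of 10 posts seen per day.

```python
# Back-of-the-envelope estimate of the denominator problem described above.
# Inputs are the article's public figures and its stated conservative assumption.
daily_active_users = 1.93e9  # Facebook daily active users
posts_seen_per_day = 10      # conservative assumption from the text
days_per_year = 365

denominator = daily_active_users * posts_seen_per_day * days_per_year
numerator = 3.8e9            # yearly views of the flagged sources (Avaaz)

print(f"Denominator: {denominator:.3e} content views per year")      # ~7.044e12
print(f"Share from flagged sources: {numerator / denominator:.4%}")  # ~0.05%
```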

Is it worrying that there’s enough misinformation on Facebook that everyone has likely encountered at least one instance? Or is it reassuring that 99.95% of what’s shared on Facebook is not from the sites Avaaz warns about? Neither.

Misinformation distribution

In addition to estimating a denominator, it’s also important to consider the distribution of this information. Is everyone on Facebook equally likely to encounter health misinformation? Or are people who identify as anti-vaccine or who seek out “alternative health” information more likely to encounter this type of misinformation?

Another social media study focusing on extremist content on YouTube offers a method for understanding the distribution of misinformation. Using browser data from 915 web users, an Anti-Defamation League team recruited a large, demographically diverse sample of U.S. web users and oversampled two groups: heavy users of YouTube, and individuals who showed strong negative racial or gender biases in a set of questions asked by the investigators. Oversampling means surveying a subset of a population at a higher rate than its share of the population, in order to better record data about that subset.

The researchers found that 9.2% of participants viewed at least one video from an extremist channel, and 22.1% viewed at least one video from an alternative channel, during the months covered by the study. An important piece of context to note: A small group of people were responsible for most views of these videos. And more than 90% of views of extremist or “alternative” videos were by people who reported a high level of racial or gender resentment on the pre-study survey.

While roughly 1 in 10 people found extremist content on YouTube and 2 in 10 found content from right-wing provocateurs, most people who encountered such content “bounced off” it and went elsewhere. The group that found extremist content and sought more of it were people who presumably had an interest: people with strong racist and sexist attitudes.

The authors concluded that “consumption of this potentially harmful content is instead concentrated among Americans who are already high in racial resentment,” and that YouTube’s algorithms may reinforce this pattern. In other words, just knowing the fraction of users who encounter extreme content doesn’t tell you how many people are consuming it. For that, you need to know the distribution as well.
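
To see why distribution matters as much as the average, consider a toy comparison of two audiences with identical total exposure: one where views are spread evenly, and one where they are concentrated among a small group of active seekers. Every number below is invented purely to illustrate the point.

```python
# Toy illustration of the distribution problem: identical averages,
# very different concentration. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n_users, total_views = 10_000, 50_000
top_n = n_users // 20  # the top 5% of users

# Scenario 1: exposure spread evenly across all users.
even = np.full(n_users, total_views / n_users)

# Scenario 2: 90% of views concentrated among 5% of users (active seekers).
concentrated = np.full(n_users, 0.1 * total_views / (n_users - top_n))
seekers = rng.choice(n_users, size=top_n, replace=False)
concentrated[seekers] = 0.9 * total_views / top_n

for name, views in [("even", even), ("concentrated", concentrated)]:
    top_share = np.sort(views)[::-1][:top_n].sum() / views.sum()
    print(f"{name:>12}: mean {views.mean():.1f} views/user; "
          f"top 5% account for {top_share:.0%}")
```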

Superspreaders or whack-a-mole?

A widely publicized study from the anti-hate speech advocacy group Center for Countering Digital Hate titled Pandemic Profiteers showed that of 30 anti-vaccine Facebook groups examined, 12 anti-vaccine celebrities were responsible for 70% of the content circulated in these groups, and the three most prominent were responsible for nearly half. But again, it’s critical to ask about denominators: How many anti-vaccine groups are hosted on Facebook? And what percent of Facebook users encounter the sort of information shared in these groups?

Without information about denominators and distribution, the study reveals something interesting about these 30 anti-vaccine Facebook groups, but nothing about medical misinformation on Facebook as a whole.


These types of studies raise the question, “If researchers can find this content, why can’t the social media platforms identify it and remove it?” The Pandemic Profiteers study, which implies that Facebook could solve 70% of the medical misinformation problem by deleting only a dozen accounts, explicitly advocates for the deplatforming of these dealers of disinformation. However, I found that 10 of the 12 anti-vaccine influencers featured in the study have already been removed by Facebook.

Consider Del Bigtree, one of the three most prominent spreaders of vaccination disinformation on Facebook. The problem is not that Bigtree is recruiting new anti-vaccine followers on Facebook; it’s that Facebook users follow Bigtree on other websites and bring his content into their Facebook communities. It’s not 12 individuals and groups posting health misinformation online – it’s likely thousands of individual Facebook users sharing misinformation found elsewhere on the web, featuring these dozen people. It’s much harder to ban thousands of Facebook users than it is to ban 12 anti-vaccine celebrities.

This is why questions of denominator and distribution are critical to understanding misinformation online. Denominator and distribution allow researchers to ask how common or rare behaviors are online, and who engages in those behaviors. If millions of users are each encountering occasional bits of medical misinformation, warning labels might be an effective intervention. But if medical misinformation is consumed mostly by a smaller group that’s actively seeking out and sharing this content, those warning labels are most likely useless.


Getting the right data

Trying to understand misinformation by counting it, without considering denominators or distribution, is what happens when good intentions collide with poor tools. No social media platform makes it possible for researchers to accurately calculate how prominent a particular piece of content is across its platform.

Facebook restricts most researchers to its CrowdTangle tool, which shares information about content engagement, but this is not the same as content views. Twitter explicitly prohibits researchers from calculating a denominator, either the number of Twitter users or the number of tweets shared in a day. YouTube makes it so difficult to find out how many videos are hosted on their service that Google routinely asks interview candidates to estimate the number of YouTube videos hosted to evaluate their quantitative skills.

The leaders of social media platforms have argued that their tools, despite their problems, are good for society, but this argument would be more convincing if researchers could independently verify that claim.

As the societal impacts of social media become more prominent, pressure on the big tech platforms to release more data about their users and their content is likely to increase. If those companies respond by increasing the amount of information that researchers can access, look very closely: Will they let researchers study the denominator and the distribution of content online? And if not, are they afraid of what researchers will find?


Peer Reviewed

Research note: The scale of Facebook’s problem depends upon how ‘fake news’ is classified


Ushering in the contemporary ‘fake news’ crisis, Craig Silverman of Buzzfeed News reported that ‘fake news’ outperformed mainstream news on Facebook in the three months prior to the 2016 US presidential elections. Here the report’s methods and findings are revisited for 2020. Examining Facebook user engagement of election-related stories, and applying Silverman’s classification of fake news, it was found that the problem has worsened, implying that the measures undertaken to date have not remedied the issue. If, however, one were to classify ‘fake news’ in a stricter fashion, as Facebook as well as certain media organizations do with the notion of ‘false news’, the scale of the problem shrinks. A smaller-scale problem could imply a greater role for fact-checkers (rather than deferring to mass-scale content moderation), while a larger one could lead to the further politicization of source adjudication, where labelling particular sources broadly as ‘fake’, ‘problematic’ and/or ‘junk’ results in backlash.

Media Studies Department, University of Amsterdam, Netherlands


Research Questions

  • To what extent is ‘fake news’ (as defined in the 2016 seminal news article) present in the most engaged-with, election-related content on Facebook in the run-up to the 2020 US presidential elections?
  • How does the current ‘fake news’ problem compare to that of the 2016 election period, both with the same as well as a stricter definition of ‘fake news’?
  • How does the scale of the problem affect the viability of certain approaches put forward to address it?
  • Is there more user engagement with hyperpartisan conservative or progressive sources in political spaces on Facebook? How does such engagement imply a politicization of the ‘fake news’ problem?

Research Note Summary

  • The ‘fake news’ problem around the US elections as observed in 2016 has worsened on Facebook in 2020. In the early months of 2020, the proportion of user engagement with ‘fake news’ to mainstream news stories was 1:3.5, compared to 1:4 during the same period in 2016. This is both an observation concerning the persistence of the problem and an admonition that the measures undertaken to date have not lessened the phenomenon.
  • If one applies a stricter definition of ‘fake news’, limited to imposter news and conspiracy sites (thereby removing the hyperpartisan sites included in Silverman’s definition), mainstream sources outperform ‘fake’ ones by a much greater proportion.
  • The findings imply that how one defines such information affects the perceived scale of the problem, including the types of approaches available to address it. With a smaller-scale problem, fact-checking and labelling become more viable alongside the ‘big data’ custodial approaches employed by social media firms.
  • Given that hyperpartisan conservative sources are engaged with more than hyperpartisan progressive ones, the research points to how considerations of what constitutes ‘fake news’ may be politicized.
  • The findings are based on Facebook user engagement with the top 200 stories returned by queries for candidates and social issues. Using existing labelling sites, the stories, and by extension the sources, are classified along a spectrum from more to less problematic and partisan.

Implications

The initial ‘fake news’ crisis (Silverman, 2016; 2017) had to do with fly-by-night, imposter, conspiracy as well as so-called ‘hyperpartisan’ news sources outperforming mainstream news on Facebook in the run up to the 2016 US presidential elections. In a sense it was both a critique of Facebook as ‘hyperpartisan political-media machine’ (Herrman, 2016) but also that of the quality of a media landscape witnessing a precipitous rise in the consumption and sharing of ‘alternative right’ news and cultural commentary (Benkler et al., 2017; Holt et al., 2019).

The events of the first crisis have been overtaken by a second, in which politicians such as President Trump in the US, and others elsewhere, employ the same term for certain media organizations in order to undermine their credibility. Against the backdrop of that politicization, as well as its use as a rhetorical tactic, scholars and platforms alike have avoided the term ‘fake news’ and instead offered ‘junk news’, ‘problematic information’, ‘false news’ and others (Vosoughi et al., 2018). Some definitions (such as junk news and problematic information) are roomier, while others are stricter in their source classification schemes.

Subsumed under the original ‘fake news’ definition are imposter news, conspiracy sources and hyperpartisan sources (or ‘overly ideological web operations’) (Herrman, 2016), and the newer term ‘junk news’ covers the same types of sources but adds the connotation of attractively packaged junk food that could be considered unhealthy when consumed (Howard, 2020; Venturini, 2019). It also includes two web-native source types: ‘clickbait’, which captures how packaging or formatting lures one into consumption, and ‘computational propaganda’, which refers to dubious news circulated by bot and troll-like means, artificially amplifying its symbolic power. Problematic information is roomier still, as it expands its field of vision beyond news to cultural commentary and satire (Jack, 2017). Stricter definitions such as ‘false news’ would encompass imposter and conspiracy sites but are less apt to include hyperpartisan news and cultural commentary, discussing those sources as ‘misleading’ rather than as ‘fake’ or ‘junk’ (Kist & Zantingh, 2017).

Rather than an either/or proposition, ‘fake news’ could be understood as a Venn diagram or a set of matryoshka dolls, with problematic information encompassing junk news, junk news encompassing fake news, and fake news encompassing false news (Wardle, 2016; 2017). (While beyond the scope of this note, the definition could be broadened even further to include media other than stories and sources, such as video and images.)

Depending on the definition, the scale of the problem changes, as does the range of means to address it. With ‘false news’, the problem grows smaller, and fact-checking again becomes a profession to which to turn for background research into stories and sources. Fact-checking has been critiqued in this context because of the enormity of the task and the speed with which its lean workforces must operate. Facebook, for one, employs the term ‘false news’ and has striven to work with fact-checking bodies, though its overall approach is multi-faceted and relies more on (outsourced) content reviewers (Roberts, 2016; Gillespie, 2018). Other qualitative approaches such as media literacy and bias labelling are also manual undertakings, with adjudicators sifting through stories and sources one by one. When the problem is scaled down, these too become viable.

Roomier definitions make the problem larger and result in findings such as the most well-known ‘fake news’ story of 2016. ‘Pope Francis Shocks World, Endorses Donald Trump for President’ began as satire and was later circulated on a hyperpartisan, fly-by-night site (Ending the Fed). It garnered higher engagement rates on Facebook than more serious articles in the mainstream news. When such stories are counted as ‘fake’, ‘junk’ or ‘problematic’, and the scale increases, industrial-style custodial action may be preferred, such as mass content moderation as well as crowd-sourced and automated flagging, followed by platform escalation procedures and outcomes such as suspending or deplatforming stories, videos and sources.

As more content is taken down as a result of roomy source classification schemes, debates about freedom of choice may become more vociferous rather than less. It recalls the junk food debate; in this regard, Zygmunt Bauman stressed how we as homo eligens, or ‘choosing animals’, are wont to resist such restrictions, be it in opting for ‘hyperprocessed’ food or hyperpartisan news and cultural commentary (Bauman, 2013).

Labelling hyperpartisan news as ‘fake’ or ‘junk’, moreover, may lead to greater political backlash. Indeed, as our findings imply, the ‘fake news’ or ‘junk news’ problem is largely a hyperpartisan conservative source problem, whereas the ‘false news’ one is not. As recently witnessed in the Netherlands, the designation of hyperpartisan conservative sources as ‘junk news’ drew the ire of the leader of a conservative political party, who subsequently labelled mainstream news with the neologism, ‘junk fake news’ (Rogers & Niederer, 2020; Van Den Berg, 2019). Opting for the narrower ‘false news’ classification would imply a depoliticization of the problem.

Finally, it should be remarked that the period under study is some months away from the US presidential elections, and, as in 2016, it is one in which the ‘fake news’ problem was not yet pronounced. That said, the sources outputting questionable content in 2020 do not appear to be the fly-by-night, imposter news sites in operation in 2016, but rather more ‘established’ conspiracy and hyperpartisan sites. While speculative, if Facebook categorizes imposter news sites as exhibiting ‘inauthentic behavior’ and demonetizes or deplatforms them altogether, then the scale of the problem in the run-up to the 2020 elections may remain as we have found it. If it does not, and such sites appear and are consumed as in 2016, the problem could worsen substantially, with the prospect of the headline, ‘fake news outperforms mainstream news (again)’.

This study revisits the initial ‘fake news’ findings made by Craig Silverman of Buzzfeed News in 2016, where it was found that in the three months prior to the 2016 US presidential elections ‘fake news’ stories received more interactions on Facebook than mainstream stories (see Figure 1).


Finding 1: If we employ the same definition of ‘fake news’ as Silverman did during 2016, to date the problem has slightly worsened. 

Whereas the proportion of ‘fake news’ among the most engaged-with sources was 1 in 4 in February–April 2016, in February–March 2020 it is 1 in 3.5 (see Figures 1 and 2). The main finding, in other words, is that the ‘fake news’ problem of 2016 has not been remedied four years later, at least for the initial 2020 timeframe.
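As a purely illustrative aside, a proportion of the 1:4 or 1:3.5 kind can be computed from labelled engagement totals as in the Python sketch below; the totals are invented, not the study’s data.

```python
# Sketch of computing a fake-to-mainstream engagement proportion of the
# kind reported above (e.g. 1:3.5). Both totals are invented.

fake_engagement = 2_000_000        # interactions with 'fake news' stories
mainstream_engagement = 7_000_000  # interactions with mainstream stories

ratio = mainstream_engagement / fake_engagement
print(f"'fake news' : mainstream = 1 : {ratio:g}")  # -> 1 : 3.5
```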


Finding 2: If, however, one tightens the definition of ‘fake news’ sites to imposter and conspiracy sites (as the definition of ‘false news’ would have it), thereby removing hyperpartisan sources from the categorization scheme, the proportion of most engaged-with ‘fake news’ to mainstream news in February-March 2020 lessens to 1 in 9 (see Figure 3). Such sites are not as well engaged with as they once were, at least for the period in question.


Note that the 2016 problem could also be thought to diminish if one were to disaggregate Silverman’s original source list and remove hyperpartisan stories and sites. An examination of his list for the period in question indicates, however, that most sources are imposter or conspiracy sites, rather than hyperpartisan (Silverman, 2016). Breitbart News, for one, is not among the most engaged-with sources in February–April 2016. It only appears towards the top during the period just prior to the 2016 elections, when ‘fake news’ outperformed mainstream news. Imposter sites such as the Denver Guardian (which is no longer online) were also in the top results. As the Denver Post wrote, ‘[t]here is no such thing as the Denver Guardian, despite that Facebook post you saw’ (Lubbers, 2016).

Finding 3: Overall, from March 2019 to March 2020, using either the roomier or stricter definition of so-called ‘fake news’ sites, mainstream news stories about the US presidential candidates and main social issues had a greater number of interactions than stories from these alternative sources (see Figures 4 and 5). Thus, for stories related to Trump, Biden, Sanders and major social issues, including the coronavirus (present in February and March 2020), mainstream sources are preferred.


Finding 4: For certain issues, alternative sources provided more of the coverage that was consumed, but, under the strict definition, in no case did they outperform mainstream sources (see Figure 6).


Finding 5: There is more engagement with hyperpartisan conservative sources than hyperpartisan progressive ones both overall as well as for the majority of the candidates and issues (see Figures 8 and 9). The finding suggests that any ‘fake news’ definition that includes hyperpartisan sources will associate the problem more with conservative sources. When adjusting the definition to exclude such sources, ‘fake news’ itself becomes less politicized.


Methods 

This study revisits the initial ‘fake news’ report written by Craig Silverman and published in Buzzfeed News in 2016. It employs a similar methodology, albeit introducing a ‘slider’ or gradient to indicate the extent of the problem depending on how one classifies sources. The research enquires into the current scale of the problem and compares it to the same timeframe in 2016. It also demonstrates how roomier definitions of ‘fake news’ make the problem appear larger, compared to stricter definitions. 

First, a list of candidates and social issues is curated. The candidates chosen are those from the major parties who were still in the race and campaigning at the time of the study. For social issues, the issue lists of four voting-aid sources (Politico, VoteSmart, On the Issues and Gallup) are first merged, and then filtered for issues that appear on multiple lists (see Table 1). A minimal sketch of this curation step is given below.
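The issue names in this Python sketch are placeholders, not the study’s actual lists; only the merge-and-filter logic is illustrated.

```python
from collections import Counter

# Sketch of the issue-curation step: merge the issue lists of the four
# voting-aid sources, then keep issues that appear on more than one list.
# The issue names are placeholders, not the lists actually used.

issue_lists = {
    "Politico":      {"healthcare", "immigration", "economy", "guns"},
    "VoteSmart":     {"healthcare", "abortion", "economy"},
    "On the Issues": {"immigration", "economy", "environment"},
    "Gallup":        {"healthcare", "economy", "guns"},
}

counts = Counter(issue for issues in issue_lists.values() for issue in issues)
selected = sorted(issue for issue, n in counts.items() if n >= 2)
print(selected)  # -> ['economy', 'guns', 'healthcare', 'immigration']
```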


Next we queried BuzzSumo, the marketing research and analysis tool, for each candidate and issue keyword, using the date range of March 23, 2019 to March 23, 2020, and the filter ‘English’. We retained non-American sources in order to ensure that we did not miss highly engaging, problematic sources from outside the US. BuzzSumo returns a list of web URLs, ranked by interactions, which is the sum of likes, shares and comments. The study of engagement (or interactions) thus concerns a combination of rating (likes), reading (comments) and circulating (shares), and in that sense is a rather comprehensive measure. For every candidate and issue, we examined only the top 200 stories returned, which is a limitation. Analyzing Facebook user engagement with ‘top’ content follows Silverman’s original method.
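The per-query processing can be sketched as follows, assuming each BuzzSumo result set has been exported to a CSV file; the file name and column names are assumptions for illustration, since the real export format may differ.

```python
import csv

def top_stories(path, n=200):
    """Rank one query's exported stories by interactions; keep the top n."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    for row in rows:
        # "Interactions" in the study = likes + shares + comments.
        row["interactions"] = (int(row["likes"]) + int(row["shares"])
                               + int(row["comments"]))
    rows.sort(key=lambda r: r["interactions"], reverse=True)
    return rows[:n]

# Hypothetical file name for one candidate query:
biden_top200 = top_stories("buzzsumo_biden_2019-03-23_2020-03-23.csv")
```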

Each of the source names, headlines and any description text is read, and the sources are roughly labelled by concatenating pre-existing source classification schemes (or, where these disagree, choosing the majority label). To gain an indication of their genre (non-problematic, or problematic news including imposter news, conspiracy sites, satire or clickbait) and (hyper)partisanship, the sources are checked against media bias labelling sites including AllSides (2020), Media Bias/Fact Check (2020), ‘The Chart’ (Otero, 2017) and NewsGuard (2020); news sources’ Wikipedia entries are also consulted. We also searched for them online and consulted news and analysis that mention the sources. Additionally, we checked the source lists against a recent study of imposter sources called ‘pink slime’ sites, which imitate local or national news sites (Bengani, 2019). (No pink slime sites were in the top 200 most engaged-with stories for any candidate or social issue.)
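The majority-label rule can be sketched in a few lines of Python; the labels below are invented examples, and ties are broken arbitrarily in this sketch.

```python
from collections import Counter

def majority_label(labels):
    """Return the most common label across classification schemes.
    Ties are broken arbitrarily in this sketch."""
    counts = Counter(label for label in labels if label is not None)
    return counts.most_common(1)[0][0]

# Invented example: three schemes disagree on one source's genre.
print(majority_label(["hyperpartisan", "conspiracy", "hyperpartisan"]))
# -> hyperpartisan
```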

Subsequently, we characterized the stories as problematic or non-problematic, where the former adheres to the strict ‘false news’ definition (imposter or conspiracy sites). These are then graphed over time using RAWGraphs. We also applied the roomier definition of ‘fake news’, which adds ‘hyperpartisan’ sources to imposter and conspiracy sites, and graphed these values anew. The graphs display the proportion of ‘fake news’ versus non-problematic sources on Facebook for the results of each candidate and social issue query over the one-year timeframe, March 2019 to March 2020.

We then compared the 2020 findings with the 2016 results in two ways. First, we compared the 2020 results under the roomier definition (imposter + conspiracy + hyperpartisan) to the ‘fake news’ findings of 2016 as proportions, finding that, for largely the same period, 1 in 3.5 of the most engaged-with sources is now ‘fake’, compared to 1 in 4 in 2016. Thus the ‘original’ ‘fake news’ problem has worsened. Second, we examined the source list from February to April 2016 in order to ascertain whether the findings were based on a strict or roomy definition for that particular period, and concluded that those sources were largely conspiracy sites, though the best-performing story by far was actually from a reputable source that mistakenly published a ‘fake’ story, originating from a tweet by Sean Hannity of Fox News claiming that then-candidate Trump had used his own private plane to transport ‘200 stranded marines’ (American Military News, 2016). For a sense of how definitions politicize, we also examined which candidates were associated with hyperpartisan news, noting that Biden is targeted far more often in such sources.

To study the politicization of the ‘fake news’ problem further, we compared the overall engagement on Facebook of hyperpartisan sources, both conservative and progressive, as well as the candidates and issues that had each type most associated with it, finding that conservative, so-called hyperpartisan sources far outperformed hyperpartisan progressive ones.


Cite this Essay

Rogers, R. (2020). Research note: The scale of Facebook’s problem depends upon how ‘fake news’ is classified. Harvard Kennedy School (HKS) Misinformation Review . https://doi.org/10.37016/mr-2020-43

Bibliography

AllSides. (2020). Media Bias Ratings . https://www.allsides.com/media-bias/media-bias-ratings#ratings

American Military News. (2016, May 23). Article removed – here’s why. American Military News. https://americanmilitarynews.com/2016/05/donald-trump-sent-his-own-plane-to-transport-200-stranded-marines/

Bauman, Z. (2013). Does the richness of the few benefit us all? Polity.

Bengani, P. (2019, December 18). Hundreds of ‘pink slime’ local news outlets are distributing algorithmic stories and conservative talking points. Tow Reports. https://www.cjr.org/tow_center_reports/hundreds-of-pink-slime-local-news-outlets-are-distributing-algorithmic-stories-conservative-talking-points.php

Benkler, Y., Faris, R., Roberts, H., & Zuckerman, E. (2017, March 3). Study: Breitbart-led right-wing media ecosystem altered broader media agenda. Columbia Journalism Review . https://www.cjr.org/analysis/breitbart-media-trump-harvard-study.php

BuzzSumo. (2020). Web content analyzer. BuzzSumo. https://app.buzzsumo.com/content/web

Gillespie, T. (2018). Custodians of the internet. Yale University Press.

Herrman, J. (2016, August 24). Inside Facebook’s (totally insane, unintentionally gigantic, hyperpartisan) political-media machine. The New York Times. https://www.nytimes.com/2016/08/28/magazine/inside-facebooks-totally-insane-unintentionally-gigantic-hyperpartisan-political-media-machine.html

Holt, K., Figenschou, T. U., & Frischlich, L. (2019). Key dimensions of alternative news media. Digital Journalism, 7(7), 860-869. https://doi.org/10.1080/21670811.2019.1625715

Howard, P. (2020). Lie machines. Yale University Press.

Jack, C. (2017). Lexicon of lies: Terms for problematic information [report]. Data & Society Research Institute.

Kist, R., & Zantingh, P. (2017, March 6). Geen grote rol nepnieuws in aanloop naar verkiezingen. NRC Handelsblad . https://www.nrc.nl/nieuws/2017/03/06/fake-news-nee-zo-erg-is-het-hier-niet-7144615-a1549050

Lubbers, E. (2016, November 5). There is no such thing as the Denver Guardian, despite that Facebook post you saw. The Denver Post . https://www.denverpost.com/2016/11/05/there-is-no-such-thing-as-the-denver-guardian

Media Bias/Fact Check. (2020). Media source ratings . https://mediabiasfactcheck.com

NewsGuard. (2020). NewsGuard nutrition label. https://api.newsguardtech.com

Otero, V. (2017). The chart. Ad Fontes Media. https://www.adfontesmedia.com/the-chart-version-3-0-what-exactly-are-we-reading/

Roberts, S. T. (2016). Commercial content moderation: Digital laborers’ dirty work. In S. U. Noble & B. M. Tynes (Eds.), The intersectional internet: Race, sex, class and culture online (pp. 147–160). Peter Lang.

Rogers, R. & Niederer, S. (Eds.). (2020). The politics of social media manipulation . Amsterdam University Press.

Silverman, C. (2016, November 16). This analysis shows how viral fake election news stories outperformed real news on Facebook . Buzzfeed News. https://www.buzzfeednews.com/article/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook

Silverman, C. (2017, December 31). I helped popularize the term “fake news” and now I cringe every time I hear it. Buzzfeed News. https://www.buzzfeednews.com/article/craigsilverman/i-helped-popularize-the-term-fake-news-and-now-i-cringe

Van Den Berg, E. (2019, July 30). Opnieuw misser bij Forum voor Democratie: Persoonlijke advertentie Thierry Baudet offline gehaald. NPO3. https://www.npo3.nl/brandpuntplus/opnieuw-misser-bij-forum-voor-democratie-persoonlijke-advertentie-thierry-baudet-offline-gehaald

Venturini, T. (2019). From fake to junk news: The data politics of online virality. In D. Bigo, E. Isin, & E. Ruppert (Eds.), Data politics: Worlds, subjects, rights (pp. 123-144). Routledge.

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151.

Wardle, C. (2016, November 18). 6 types of misinformation circulated this election season. Columbia Journalism Review. https://www.cjr.org/tow_center/6_types_election_fake_news.php

Wardle, C. (2017, February 16). Fake news. It’s complicated. First Draft. https://firstdraftnews.org/latest/fake-news-complicated/

Funding

Funding supplied by First Draft.

Competing Interests

No conflicts of interest.

Ethics

No ethics issues.

This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided that the original author and source are properly credited.

Data Availability

The data are available via the Harvard Dataverse at https://doi.org/10.7910/DVN/VFMJUH


Guest Essay

Clearly, Facebook Is Very Flawed. What Will We Do About It?


By Kate Klonick

Dr. Klonick is a lawyer and assistant professor at St. John’s University Law School. She is a fellow at Yale Law School’s Information Society Project and the Brookings Institution and is currently writing a book on Facebook and Airbnb.

Two weeks ago, The Wall Street Journal published “The Facebook Files,” a damning series based on a cache of leaked internal documents that revealed how much the company knew about the harms it was causing and how little it did to stop them.

In a hearing on Thursday, senators on the consumer protection subcommittee accused Facebook of hiding vital information on its impact on users. “It has attempted to deceive the public and us in Congress about what it knows, and it has weaponized childhood vulnerabilities against children themselves,” Senator Richard Blumenthal, the chairman of the subcommittee and a Democrat from Connecticut, charged.

I’ve spent the last six years researching how platforms govern speech online, including a year inside Facebook following the development of its Oversight Board. While the “factory floor” of the company is full of well-intentioned people, much of what the series has reported confirmed what I and other Facebook watchers have long suspected.

The Journal’s reporting showed that Facebook regularly gave preferential treatment to elites if their speech was flagged on the platform; that it implemented shoddy solutions to mitigate the harmful mental and emotional health effects of its products on teenagers; and that it underinvested in enforcing its own rules about what is allowed on the site outside of the United States. The series has stirred the now familiar outrage at Facebook for failing to take responsibility for how people use its platform. While these revelations are disturbing, they also point to some opportunities for reform.

One of those opportunities is redefining how Facebook determines what a “good” product is. For much of its history, the company’s key metric has been user engagement — how long users log in, the pages they spend time on, which ads they click. The greater the user engagement, the more valuable Facebook’s ads, and the more profit for shareholders. But the “Facebook Files” stories have put to rest any doubt that this narrow concept of engagement fails to capture the platform’s real impact — both the bad and, yes, the good.

Facebook is perfectly capable of measuring “user experience” beyond the narrow concept of “engagement,” and it is time those measurements were weighted more heavily in company decision-making. That doesn’t mean just weighing harmful effects on users; it could also mean looking at and measuring the good things Facebook offers: how likely you are to attend a protest or give to a charitable cause you hear about on Facebook. However it ends up being calculated, it needs to be transparent, and it needs to become a bigger part of the company’s decision-making going forward.
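As a deliberately naive illustration of the kind of re-weighting described here, consider the sketch below; every metric name and weight is invented, and nothing in it reflects an actual Facebook metric.

```python
# Naive sketch of a product score that weighs "user experience" alongside
# engagement. All metric names and weights are invented for illustration.

def product_score(engagement, wellbeing, civic_value,
                  w_engagement=0.4, w_wellbeing=0.4, w_civic=0.2):
    """All inputs are normalized to [0, 1]; higher is better."""
    return (w_engagement * engagement
            + w_wellbeing * wellbeing
            + w_civic * civic_value)

# A change that boosts engagement but harms well-being can now score worse:
print(product_score(engagement=0.9, wellbeing=0.3, civic_value=0.5))  # 0.58
print(product_score(engagement=0.7, wellbeing=0.8, civic_value=0.6))  # 0.72
```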



The Pros and Cons of Using Facebook (Essay Sample)


Facebook is a great way to keep in touch with people who are far away. According to surveys, friends who reside in different countries and use Facebook to communicate with each other display a more optimistic mood and feel calmer about those close to them than friends who do not use any social networks, or who use only email (IFR Database). People who use Facebook tend to feel in touch with the rest of the world regardless of distance, and this sensation makes them feel better.

Facebook is also a reasonable option for people who want to stay updated on the topics that interest them. Joining communities devoted to all kinds of activities, and receiving updates from them, turns Facebook into an easy-to-use, completely customizable news feed. Hence, Facebook can be a useful tool for those who need timely, fresh information.

At the same time, Facebook is known to distort one’s perception of reality, diminish satisfaction with one’s life, and negatively affect relationships between people. According to research conducted by Ethan Kross of the University of Michigan and Philippe Verduyn of Leuven University, people who use Facebook often display growing dissatisfaction with their lives, whereas respondents who use Facebook infrequently and socialize with peers in real life feel happier and healthier (The Economist).

The same research showed that the most common emotion experienced by regular Facebook users is envy. This is because people usually do their best to make their lives look better than they are in reality, while at the same time believing in the reality of the “virtual lives” created by other Facebook users.

Facebook can also be dangerous for teenagers and children. Parents who would like to protect their children from negative information on the Internet should consider that Facebook is full of links to other media resources, some of which can be explicit. Whereas it is possible, to some extent, to control a child’s use of Facebook, it is impossible to predict where browsing these links could lead (TheOnlineMom). For the same reason, parents have a right to worry about friends of friends: one can know all of one’s children’s friends, but those friends have other friends, who can be a negative influence.

Facebook is a tool that should be used with caution. Though it is a convenient way to stay in touch with friends and acquaintances who live far away, and to stay updated about events of interest, it can also have negative effects on one’s personality. For example, Facebook causes its regular users to feel envious of other people’s lives, and it can provoke dissatisfaction with one’s own life, especially compared to people who socialize more in real life than online. Facebook’s relationship with the world is bittersweet, and we may witness it turn for the worse in the coming decades.

“Get a Life!” The Economist. The Economist Newspaper, 17 Aug. 2013. Web. 07 Mar. 2014. <http://www.economist.com/news/science-and-technology/21583593-using-social-network-seems-make-people-more-miserable-get-life>.

“The Pros and Cons of Facebook for Kids.” TheOnlineMom. N.p., n.d. Web. 07 Mar. 2014. <http://theonlinemom.com/secondary.asp?id=1275>

“What’s Wrong About Facebook?” IFR Database. 25 Jun. 2011. Web. 07 Mar. 2014.



Essays on Facebook

Facebook has become an integral part of our daily lives, influencing the way we communicate, socialize, and consume information. As a result, it has also become a popular topic for essays across various disciplines. Whether you're an aspiring journalist, a social media marketer, or a sociology student, choosing the right Facebook essay topic is crucial for a successful paper. In this guide, we'll discuss the importance of the topic, provide advice on choosing a topic, and offer a detailed list of recommended essay topics, divided by category.

Writing an essay about Facebook can provide valuable insights into the impact of social media on society, the psychology of online interactions, and the business strategies of a tech giant. The right topic can help you explore these themes in depth and offer new perspectives on the role of Facebook in our lives. Additionally, choosing a relevant and engaging topic can make your essay stand out and capture the reader's attention from the start.

When choosing a Facebook essay topic, consider your interests and the specific focus of your assignment. Are you writing for a psychology class? A communications course? Or perhaps a business management program? Tailoring your topic to your discipline can help you delve into the subject matter more effectively. Additionally, consider the current trends and controversies surrounding Facebook, as these can inspire thought-provoking essay topics.

Facebook Essay Topics for Different Occasions and Interests

Psychology and Sociology

  • The impact of Facebook on self-esteem and body image
  • Online friendships and social capital on Facebook
  • The psychology of online interactions: anonymity and self-disclosure
  • The role of Facebook in social movements and activism
  • Facebook addiction and its psychological effects
  • The influence of Facebook on relationships and communication

Marketing and Business

  • Facebook's advertising algorithms and targeting strategies
  • The impact of Facebook on consumer behavior and purchasing decisions
  • Brand management and reputation on Facebook
  • The role of influencers and sponsored content on Facebook
  • The ethical implications of data mining and user privacy on Facebook
  • Facebook's role in digital marketing and social media campaigns

Media and Journalism

  • The influence of Facebook on news consumption and media literacy
  • Fact-checking and misinformation on Facebook
  • The role of Facebook in shaping public opinion and political discourse
  • The impact of Facebook's algorithms on content visibility and news distribution
  • The future of journalism in the age of Facebook and social media
  • Facebook's role in shaping the public narrative and agenda setting

Law and Ethics

  • Regulatory challenges and legal issues surrounding Facebook
  • User rights and data protection on Facebook
  • The ethical implications of targeted advertising and data collection on Facebook
  • The role of Facebook in shaping public discourse and freedom of speech
  • Comparative analysis of Facebook's privacy policies and practices
  • The intersection of antitrust laws and Facebook's market dominance

Technology and Innovation

  • Facebook's impact on technology and digital innovation
  • The future of virtual reality and Facebook's involvement
  • Facebook's data collection and privacy concerns in the age of AI
  • Comparing Facebook's features and evolution with other tech giants
  • The role of Facebook in shaping the future of communication and connectivity

With these recommended essay topics, you can explore the multifaceted impact of Facebook on society, business, and culture. Whether you're interested in psychology, marketing, journalism, or ethics, there's a wealth of compelling topics to choose from. By selecting a topic that aligns with your interests and expertise, you can craft a well-researched and insightful essay that contributes to the ongoing discourse surrounding Facebook and its influence on our lives.



Problem-solution essays: Situation-Problem-Solution-Evaluation

Problem-solution essays are a common essay type, especially for short essays such as subject exams or IELTS. This page explains what they are, shows how to structure this type of essay, and gives an example problem-solution essay on the topic of obesity and fitness levels.

What are problem-solution essays?


Problem-solution essays consider the problems of a particular situation, and give solutions to those problems. They are in some ways similar to cause and effect essays, especially in terms of structure (see below). Problem-solution essays are actually a sub-type of another type of essay, which has the following four components:

  • Situation
  • Problem
  • Solution
  • Evaluation

The 'situation' may be included in the essay prompt, in which case it will not be needed in the main body. If it is needed, it can often be included in the introduction, especially for short essays, as with the example essay below . The 'evaluation' may be included as part of the conclusion (also as in the example below), or omitted altogether, especially for short essays. For these reasons, problem-solution essays are more common than situation-problem-solution-evaluation essays (or SPSE essays).

There are two main ways to structure a problem-solution essay. These are similar to the ways to structure cause and effect essays , namely using a block or a chain structure. For the block structure, all of the problems are listed first, and all of the solutions are listed afterwards. For the chain structure, each problem is followed immediately by the solution to that problem. Both types of structure have their merits. The former is generally clearer, especially for shorter essays, while the latter ensures that any solutions you present relate directly to the problems you have given.

The two types of structure, block and chain, are shown in outline below. This is for a short essay, which includes the 'situation' in the introduction and the 'evaluation' in the conclusion. A longer essay, for example one of around 1,000 words with citations, would probably have these two sections as separate paragraphs in the main body.

Block structure: Introduction (including situation) → all problems → all solutions → Conclusion (including evaluation)
Chain structure: Introduction (including situation) → Problem 1 + Solution 1 → Problem 2 + Solution 2 → Conclusion (including evaluation)
Example essay

Below is a problem-solution essay on the topic of obesity and poor fitness. It uses the block structure. The structural aspects (situation, problem, solution, evaluation) appear not only in the body paragraphs but also in the thesis statement and the summary, which repeat the problems and solutions contained in the main body.
Consumption of processed and convenience foods and our dependence on the car have led to an increase in obesity and a reduction in the fitness level of the adult population. In some countries, especially industrialized ones, the number of obese people can amount to one third of the population. This is significant as obesity and poor fitness lead to a decrease in life expectancy, and it is therefore important for individuals and governments to work together to tackle this issue and improve their citizens' diet and fitness.

Obesity and poor fitness decrease life expectancy. Overweight people are more likely to have serious illnesses such as diabetes and heart disease, which can result in premature death. It is well known that regular exercise can reduce the risk of heart disease and stroke, which means that those with poor fitness levels are at an increased risk of suffering from these problems.

Changes by individuals to their diet and their physical activity can increase life expectancy. There is a reliance today on the consumption of processed foods, which have a high fat and sugar content. By preparing their own foods, and consuming more fruit and vegetables, people could ensure that their diets are healthier and more balanced, which could lead to a reduction in obesity levels. In order to improve fitness levels, people could choose to walk or cycle to work or to the shops rather than taking the car. They could also choose to walk up stairs instead of taking the lift. These simple changes could lead to a significant improvement in fitness levels.

Governments could also implement initiatives to improve their citizens' eating and exercise habits. This could be done through education, for example by adding classes to the curriculum about healthy diet and lifestyles. Governments could also do more to encourage their citizens to walk or cycle instead of taking the car, for instance by building more cycle lanes or increasing vehicle taxes. While some might argue that increased taxes are a negative way to solve the problem, it is no different from the high taxes imposed on cigarettes to reduce cigarette consumption.

In short, obesity and poor fitness are a significant problem in modern life, leading to lower life expectancy. Individuals and governments can work together to tackle this problem and so improve diet and fitness. Of the solutions suggested, those made by individuals themselves are likely to have more impact, though it is clear that a concerted effort with the government is essential for success. With obesity levels in industrialized and industrializing countries continuing to rise, it is essential that we take action now to deal with this problem.

Below is a checklist for the main body of an essay. Use it to check your own writing, or get a peer (another student) to help you.

The essay is a problem-solution essay
An appropriate structure is used, either block or chain
The essay has a clear thesis statement
Each paragraph has a clear topic sentence
The essay has strong support (facts, reasons, examples, etc.)
The conclusion includes a summary of the main points


Author: Sheldon Smith    ‖    Last modified: 22 January 2022.

Sheldon Smith is the founder and editor of EAPFoundation.com. He has been teaching English for Academic Purposes since 2004.

