Meta’s Internal Research

What The Company Learned About Social Media and Harms to Mental Health from Dozens of Internal Studies

Compiled by Bennett Sippel, Nikolaus Greb, Emma Park, Zach Rausch, and Jonathan Haidt at the Tech and Society Lab at NYU Stern.

Date launched: January 13, 2026. Last updated: January 13, 2026.

Brief Summary

In this project we gather and summarize reports of all of the available internal research studies that Meta has carried out related to the question of whether its products — particularly Instagram — are harming young people.

These reports come from two primary sources: whistleblowers who brought out thousands of screenshots or other records of internal company communications, and lawsuits filed by various state Attorneys General, who obtained documents in the process of discovery. To date we have located reports of 31 such studies. We will continue to update this page as more studies and information are revealed.

On this “Central Doc” you will find an overview of the project, some cautions about the limitations of our sources, and then brief summaries of the studies themselves, organized by the methods employed for each study.

For those who want to go deeper, we have also compiled three supplements that contain all of the available information about the 31 known studies:

  • Supplement 1: All of the studies. This is a publicly viewable Google Doc with one tab/page for each study. Each tab/page contains all of the information we have located.
  • Supplement 2: 99 Exhibits from Frances Haugen, as posted by the Attorney General of Tennessee.
  • Supplement 3: All of the lawsuits. This is a Google Doc that links to all of the briefs posted by the various Attorneys General and other plaintiffs who are suing Meta.

Read the following studies to see what Meta’s own internal research reveals about how its products harm young people — and how the company has responded to that knowledge. The studies show that Meta has obtained extensive evidence, from many different kinds of research, that its products facilitate and enable vast direct harms to young people (e.g., cyberbullying, unwanted sexual contact) and that its products are likely harming users’ mental health, particularly among adolescent girls, through harmful social comparisons, the promotion of eating disorders, body-image problems, and increased depression.

We will continue updating this page as new studies and documents become public, and we welcome tips, corrections, and additional materials from researchers, journalists, policymakers, and others who wish to contribute to this record.

Clickable Table of Contents:

  • Brief Summary
  • Project Origin and Overview
  • On the Studies
  • Benefits and Harms
  • Project Structure
  • The Supplements
  • Cautions and Caveats
  • Studies by Methodology
  • Line 1. Surveys That Include Perspectives of Young People
  • 1.1 Bad Experiences and Encounters Framework (BEEF) Survey (2021)
  • 1.2 Appearance-Based Social Comparison (December 2020)
  • Line 2. Surveys of All Users
  • 2.1 Unnamed All User Survey 1 (2018)
  • Line 3. Surveys of Experts
  • 3.1 Unnamed Survey of Experts 1 — Clinicians (2022)
  • Line 4. Cross-Sectional Studies
  • 4.1 Social Comparison on Instagram Wellbeing Research (November 2018)
  • 4.2 Sensitive High Negative Appearance Comparison (“High-NAC”) Content – A 3-Study Series (2021)
  • Line 5. Longitudinal Studies
  • 5.1 “People Disagree Content” Seen by Teens Reporting Different Levels of Body Dissatisfaction After Viewing Content on IG (2024)
  • Line 6. Experimental Studies
  • 6.1 Project Daisy (2019)
  • 6.2 Project Mercury (2019)
  • Line 7. Internal Conceptual Models and Review Papers
  • 7.1 Teen Ecosystem (May 2020)
  • 7.2 Unnamed Review Paper 1 (2020)
  • Conclusion
  • Acknowledgements
  • Footnotes

Project Origin and Overview

In his opening statement to the U.S. Senate on January 31, 2024, Mark Zuckerberg stated, under oath, “Mental health is a complex issue and the existing body of scientific work has not shown a causal link between using social media and young people having worse mental health outcomes.” Later in the hearing, under questioning from Senator Ossoff, Zuckerberg acknowledged that social media use correlates with depression, but he countered that “There is a difference between correlation and causation.”

Was he right in his summary of “the existing body of scientific work”? Is there no clear evidence that social media is harming young people?

For more than a decade, researchers have debated whether, and to what degree, social media use is harming mental health, especially among adolescents. One reason this debate has been difficult to resolve is that university-based researchers do not have easy access to high-quality data. Researchers must recruit children and adolescents to study, and they must obtain parental permission for participants under 18. As a result, much academic research — including nearly all experimental studies — relies on college students or young adults rather than adolescents going through puberty, who are at the center of the public concern. Without direct access to user data, these researchers are also often forced to rely on less precise, noisier data.

The social media companies, in contrast, have extraordinarily detailed and precise data, known as “user-behavioral log data.” They know exactly what their algorithms deliver, how long the user watches, and what behavior follows each viewing. They can infer the users’ emotional states from those behaviors (a technique they use to target advertisements), and they can survey those users and match their responses to the log data, so they know which kinds of experiences cause which kinds of mental suffering. In short, internal researchers can observe patterns, risks, and harms at a scale and level of precision that is simply unavailable to outside scholars.

These companies also have access to data from tens of millions of children and adolescents without needing parental permission. Children do not need parental consent to open an account; they need only state that they are 13 years old. As a result, internal researchers can easily and directly study millions of minors, including many who are in fact under 13. So when it comes to understanding what is happening to young people on social media platforms, the companies’ internal researchers have a front row seat on the action and a full transcript of the dialogue, while external academic researchers are forced to sit in the back row of the upper balcony, struggling to hear the dialogue and barely able to discern what’s happening onstage.[1]

So, while academic researchers are caught up in the debates over imperfect surveys and experiments, Meta itself has conducted dozens of internal studies using far more detailed data than is available to outside researchers, and the findings of those studies have largely remained internal.

The central question we address in this project is: What has Meta itself observed about the harms tied to its products?

On the Studies

Thanks to thousands of screenshots of internal communications and presentations brought out by whistleblowers Frances Haugen (in 2021) and Arturo Béjar (in 2023), we now know of more than two dozen internal studies examining the psychological effects of Meta’s products. Several of these studies came to light in 2025 through the discovery process in ongoing lawsuits.

These studies span a range of nations and populations, including adults, mixed-age users, experts, and adolescents.[2] They also span a great variety of methods. There are surveys of young people, of parents, and of mental-health clinicians. There are correlational studies and longitudinal studies. There are even two experiments, which are generally considered the most effective method for identifying causation. While a few of the studies are already widely known — such as the one in which researchers wrote, about Instagram, “We make body image issues worse for one in three teen girls” — most of them are new to the public and/or the academic research community.

Here, for the first time, we gather, summarize, and interpret all available internal Meta studies in one place. We show what Meta did in each study, what it learned, and, when available, how it responded to the findings. In several cases we have memos or emails that acknowledge the harm and/or discuss how to dismiss or hide the findings. The result is a detailed record of internal awareness of harm that complicates, and in some cases directly contradicts, Meta’s public narrative of uncertainty and ignorance.[3] The studies reveal that the company believed, and knew, that its products are harming vast numbers of young people.

Benefits and Harms

Nothing complex is black and white, so, as expected, some of the internal studies include evidence that many adolescents point to benefits of using social media, such as feeling “more connected.” We do not doubt that many young people experience these kinds of benefits.[4]

But for consumer products used daily by hundreds of millions of children and adolescents, the relevant question is not whether more users report harms than benefits. Any other consumer product that caused serious harm to even 1 percent of the children and teens who used it would likely be removed from the market. For this reason, the focus should be on understanding how many young users are being harmed, with special attention to serious harms.

As you’ll see below, numerous studies indicate direct harms (such as unwanted sexual contact) and damaging effects on mental health that seem to apply to far more than 1 percent of all users. In the case of Instagram, some of the harms affect 10 to 30 percent of teens, as you can see in Study 1.1. For instance, 13 percent of 13–15-year-olds experience unwanted sexual advances every week, and 8 percent are exposed to suicide content each week. At scale, this translates to hundreds of millions of teens.

In another internal study of all Facebook users (see study 2.1), Meta researchers found that 3.1 percent of users met Meta’s threshold for “severe problematic use.” In internal correspondence responding to the study, Mark Zuckerberg himself stated that “3 percent of billions of people is a lot of people…it’s millions of people.” In fact, 3 percent of three billion people (the total number of monthly active Facebook users) is 90 million people who are struggling with severe problematic use.

In response to another internal study, a senior data scientist at Meta put those statistics on problematic use into context:

“It seems clear from what’s presented here that some of our users are addicted to our products. And I worry that driving sessions incentivizes us to make our product more addictive, without providing much more value. How to keep someone returning over and over to the same behavior each day? Intermittent rewards are most effective (think slot machines) reinforcing behaviors that become especially hard to extinguish—even when they provide little reward, or cease providing reward at all.”

In other words, Meta knows that many of its users are addicted to its products, and it knows the mechanisms by which this happens (see Study 7.1 for additional internal research on addiction, dopamine, and social media use).

We note that one of Meta’s common defenses is a “net positive” framing: the claim that if the percent of teens claiming that Instagram helped them in some way is larger than the percent who claimed that it hurt them, then the product is a net benefit to teens and there is little cause for alarm or regulation. This framing appears clearly in Meta’s public characterization of its internal research on teen girls, where the company emphasized that some respondents reported Instagram made certain challenges “better rather than worse,” despite nearly equivalent proportions reporting that it made those same challenges worse.

This framing is misleading. The relevant health and regulatory question is not whether some users experience benefits, but whether there are serious, predictable harms — particularly to minors.

Meta’s internal research demonstrates clear knowledge of frequent and severe harms (e.g., increases in social comparison, body dissatisfaction, anxiety, depressive symptoms, exposure to sexual content and harassment, and distress linked to algorithmic amplification and engagement-driven design) at scale. Recasting those harms as acceptable because they are not universal, or because many users report benefits, obscures Meta’s moral responsibility not to harm children, and diverts attention from the company’s obligation to act on what it knows.

Project Structure

We organized the studies by research method. This Central Doc offers an overview of the project, explaining each method and giving one or two salient examples.[5] Readers who want to go deeper can explore the three online supplements, which contain extensive quotations and screenshots from internal company emails, memos, and presentations. We included all of the publicly available internal research from Meta Inc. that we have been able to locate.

To compile these studies, we began by reading the collection of 99 internal exhibits released to the public in January 2024 by the Tennessee Attorney General.[6] These documents, drawing heavily from whistleblowers Frances Haugen and Arturo Béjar, contain descriptions of 19 internal studies, along with reports, email exchanges, and message threads. We also included three additional studies released by Haugen that were not part of the Tennessee brief. Finally, we drew on a major lawsuit brought by U.S. school districts against Meta and other platforms (released November 21, 2025) that unearthed nine additional studies that had not previously been available to the public. (We will refer to this brief as School Districts v. Social Media[7] throughout.)[8]

We extracted all 31 internal studies referenced in those documents, then organized them by methodology:

  1. Surveys That Include Perspectives of Young People
  2. Surveys of All Users
  3. Surveys of Experts
  4. Cross-Sectional Studies
  5. Longitudinal Studies
  6. Experimental Studies
  7. Internal Conceptual Models and Review Papers

Meta’s chief defense is that the existing research is merely “correlational,” but when we categorize the research that Meta itself carried out, we see that only one category is correlational (or cross-sectional). Meta has conducted many different kinds of research — often using precise user-behavioral log data — and several of its methods offer direct or indirect evidence of causation.[9]

The Supplements

You can go deeper and learn more about all 31 of Meta’s publicly available studies, including those not featured in this Central Doc, by visiting Supplement 1. Each of the tabs in Supplement 1 includes all available internal slides, notes, and relevant excerpts from the underlying documents, as well as any publicly available discussions between employees or executives about the respective study. We will continue to update Supplement 1 as additional information and new internal studies enter the public record.

To go even deeper, you can view all 99 exhibits from the Tennessee Attorney General's complaint against Meta and other relevant documents, including internal emails, communications, reports, posts, and research documents in Supplement 2.

Finally, you can view the full legal documents we drew on for this research in Supplement 3, which provides the fuller context of each of the many lawsuits. While not exhaustive, this represents a substantial collection of relevant legal filings.

Cautions and Caveats

The 31 internal studies catalogued here reflect Meta’s own research decisions, data analysis, and framing. The public only has access to the documents that were leaked or disclosed during legal proceedings, which means that we are limited to a) what the company chose to study, and b) what happened to be captured in the information that has come into the public domain.

  • Methodological Knowledge Gaps: Meta itself did not officially release these studies, and consequently the methodological information available varies. For some studies we have comprehensive methodological descriptions, while for others we have access only to partial details or findings. We have documented all of the information we have been able to locate. 
  • Selection Bias: The available internal documents come primarily from whistleblower disclosures or lawsuits, which may introduce selection effects. Whistleblowers typically release materials that reveal institutional concerns or problematic findings. If Meta has conducted internal research concluding that social media use has neutral or beneficial effects on teen depression, those studies are not currently accessible to us. Readers should consider this potential gap when evaluating the scope of evidence presented in this project.
  • Study Categorization and Incomplete Research: We categorize studies based on the results available, not on Meta’s initial intentions. Because several studies were halted before completion, the original study design does not always match the data ultimately collected. Studies designed as longitudinal research but conducted only once are classified as surveys, and experiments lacking sufficient methodological documentation are treated as descriptive studies.[10]
  • Caution about Studies from School Districts v. Social Media: The internal research studies obtained through discovery in this ongoing litigation need to be approached with caution. Plaintiffs' counsel have obvious incentives to characterize Meta's research in ways that support their legal claims, potentially presenting findings in the least charitable light or removing important context.
    • Meta has consistently argued that plaintiffs have taken their research "out of context" but has declined to publicly release the full studies or provide additional context. We cannot verify whether additional context would meaningfully alter these conclusions.
    • We have documented these studies as they appear in legal filings, acknowledging that we may be working with incomplete or strategically framed information. 
    • Readers should consider both the potential for plaintiff bias in presentation and Meta's unwillingness to provide countervailing evidence.

Studies by Methodology

What follows is the main section of the Central Document — the repository of publicly available internal studies conducted by researchers at Meta Inc., categorized by type.

Refer to this key to differentiate between the various speakers/writers/quotations:

  • Black Text: Written by the authors of this document.
  • Red Text: Any phrase or statistic taken directly from a Meta internal document.
  • Purple Text: Quotes drawn from legal briefings (most are from School Districts v. Social Media) but not from Meta's internal documents. We include these external, non-Meta quotes because they provide context for studies whose internal materials remain under seal. We note that these are allegations contained in legal briefs. Without having access to the data or full report of the studies themselves, we cannot be certain of the accuracy of these allegations.

Each study links to Supplement 1, where readers can access the full internal document associated with our summary, unless that document is still under seal.

Line 1. Surveys That Include Perspectives of Young People

This first line of research comprises Meta's known internal surveys that include the reports of young people (under 18) about their experiences using Meta’s products.

1.1 Bad Experiences and Encounters Framework (BEEF) Survey (2021)

OVERVIEW: Completed in July 2021, this study attempted to create a holistic measurement system for tracking user harm across Instagram's ecosystem. It captured detailed data on the experience and frequency of all major categories of platform harm, creating an internal database of negative user experiences that could be used to inform product development and risk assessment. It was overseen by Arturo Béjar, who had been a senior leader at Meta, and who returned to the company in 2019 with the goal of improving the safety of Meta’s products after his 14-year-old daughter received repeated unwanted sexual advances on Instagram.

ACCESS: This study was brought to the attention of the public by whistleblower Arturo Béjar in 2023. It is also one of the 99 exhibits in Tennessee v. Meta.

METHODS: 237,923 respondents from a random sample of Instagram users (all ages). The survey covered 22 distinct harm categories including bullying and harassment, unwanted advances/solicitation, self-harm content, violent and sexual content, and negative social comparison (e.g., “Have you ever felt worse about yourself because of other peoples’ posts on Instagram?”[11]). The study asked users to report only what they experienced in the previous seven days. Each respondent was asked about five randomly selected issues from the 22 categories, with follow-up questions for any reported experiences, including the frequency and intensity with which the harms were experienced. Responses were analyzed across multiple variables including “surface type” (e.g., stories, DMs, feed, comments), creator status, frequency, user proximity (stranger vs. known in real life), age, and gender demographics.

Study uses log data.

FINDINGS:

  • The BEEF survey documented widespread harm across Instagram's user base, with over half of respondents (51.6 percent) reporting at least one negative experience in the previous seven days.
  • The youngest users (ages 13–15) were disproportionately affected: 13.9 percent received unwanted sexual advances, 21.4 percent experienced negative comparison (three times the rate of users 45+), and 19.2 percent encountered unwanted nudity.
  • “DM/chat has the two highest rates of issues across surfaces: 73.1% of the time for fake account contact, and 68.6% of the time for unwanted sexual advances,” and “93.8% of unwanted sexual advances are from people the respondent doesn’t know.” Those who experienced unwanted advances had an average weekly frequency of 3.14.
  • “Males and females experience issues at significantly different rates, and those patterns remain consistent across age groups.” In the 13–15 age group, 27.4% of females experienced negative comparison in the last 7 days, compared to only 14.6% of all males.
  • About one in ten respondents ages 13–17 said they were the target of bullying in the last seven days, and nearly one in three said they had watched it happen.
Figure 1. A table displaying the percentages of respondents who selected “Yes, during the last 7 days” for each of the issues, split by self-reported age. The green cells represent the lowest incidence for that issue across all age groups, and the red cells represent the highest incidence. You can see the full table, for all seven age groups, in Supplement 1.1.

LEARN MORE IN SUPPLEMENT 1

COMMENTARY FROM THE TECH & SOCIETY LAB AT NYU STERN (T&S LAB): Meta’s BEEF survey, overseen and made public by Béjar, is very important for the debate about social media’s impact on teen well-being and mental health because it captures first-hand reports of the wide range of direct harms that young people experience on Instagram. When academic research focuses on associations between broad mental-health outcomes and social media use, these direct harms are not typically captured, yet the data from the BEEF survey make it clear that these direct harms are widespread.

Describing the findings, Béjar said: “Instagram hosts the largest-scale sexual harassment of teens to have ever happened.” He made it clear in his Congressional testimony that he had briefed Meta executives on his findings, confirming that they are aware of the harms teens experience on Instagram.

The BEEF survey data are not associations; they are direct reports from adolescents about the harms they have experienced on Instagram. Béjar’s analysis shows that teens encounter these harms very frequently. Most strikingly, 13 percent of 13–15-year-olds report experiencing unwanted sexual advances every week.

1.2 Appearance-Based Social Comparison (December 2020)

OVERVIEW: This study attempted to quantify how Instagram influences body image concerns and appearance-related social comparison across a wide international sample. This research built directly on earlier social comparison studies (see Social Comparison on Instagram 2018 and 2020 in Supplement 1), but it focused specifically on appearance-driven content and demographic vulnerabilities.

ACCESS: This study was initially released by Frances Haugen in 2021. It is also one of the 99 exhibits in Tennessee v. Meta.

METHODS: Survey with 50,590 respondents across ten countries: Australia, Brazil, France, Germany, Great Britain, India, Japan, Korea, Mexico, and the United States. Participants were asked detailed questions about body image, self-perception, and the prevalence of appearance-based social comparison in their Instagram experience. Results were broken down by age, gender, and country. Responses were cross-referenced with log data, enabling researchers to identify specific surfaces (feed, stories, explore, reels) and content types most strongly correlated with harmful comparison. Study uses log data.

FINDINGS:

  • “Appearance-based comparison is common on Instagram. One-third (33%) of people say they compare their appearances to others’ often or always. Nearly half (48%) of teen girls do.”
  • More than one-third of teen girls (37 percent) reported often or always seeing posts that made them feel worse about their bodies, compared to 26 percent of users overall.
  • “Appearance comparison is worse for women at nearly all ages (23% worse than men on average), and it begins to decline around age 30.”
  • These gendered patterns of harm remained consistent across different countries, though prevalence rates varied by cultural context.
  • Appearance-based comparisons were worse in Western nations (the U.S., Britain, Australia, and France) than in Asian nations (Korea, Japan, and India).

FIGURE/IMAGE:

Figure 2. This is an appearance-comparison scale, made by IG by averaging the answers to the following three questions, split by age and gender: 1) “How often do you compare your appearance to the appearance of people on Instagram?” 2) “How much pressure do you feel to look perfect on Instagram?” 3) “How often do you see posts on Instagram that make you feel worse about your body or appearance?” Note that it is females ages 13–18 who suffer the most from social comparison on Instagram.

COMMENTS FROM T&S LAB: The findings of this broad international study are consistent with a substantial body of academic research linking social media use, social comparison, and body-image concerns, particularly among girls — and they reveal that Meta was internally aware that appearance-based social comparison was a significant problem for its younger female users. The study also helps contextualize the widely cited finding from another Meta internal study, “Hard Life Moments” (see Supplement 1, Study 1.6), that “one in three teen girls report that Instagram makes their body image worse.”

LEARN MORE IN SUPPLEMENT 1

We highlighted two of Meta’s six known surveys that include the teen perspective. Visit Supplement 1 to find the other four.

Line 2. Surveys of All Users

Though this next set of surveys does not use age-specific segmentation, it still provides critical insights into the overall environment in which all users, including teenagers, operate. As illustrated in Studies 1.1 and 1.2, teens, and especially teen girls, generally suffer more harm than any other group.

2.1 Unnamed All User Survey 1 (2018)

(Quoted and abbreviated from p. 24, School Districts v. Social Media)

OVERVIEW: Internal researchers paired a survey of Facebook users with demographic and user log data to assess “problematic use,” defined as “excessive social media use that significantly disrupts functioning in areas such as school achievement, relationships, and overall well-being.” The results of this study were shared directly with Mark Zuckerberg and Meta COO Sheryl Sandberg.

ACCESS: Details and findings from the study became publicly available on November 21, 2025, from the unsealing of the recent School Districts v. Social Media litigation. Due to the current status of legal proceedings, the full internal documentation of this study is not yet unsealed for public access.

METHODS: Meta’s researchers “paired a survey of 20,000 U.S. Facebook users measuring perceptions of problematic use with demographic and behavioral data for the prior month.” (p. 24) Study uses log data.

FINDINGS:

  • That study… provided a “deep understanding” that the prevalence of problematic use among U.S. Facebook users is “55% mild, 3.1% severe.” (p. 24)
  • Meta published the following statement: “We estimate (as an upper bound) that 3.1% of Facebook users in the US experience problematic use.” (p. 24)

COMMENTS FROM T&S LAB: Most discussion of this study has focused on the finding that 3.1 percent of users met Meta’s threshold for “severe problematic use.” In internal correspondence responding to the study, Mark Zuckerberg highlighted this statistic, stating that “3 percent of billions of people is a lot of people…it’s millions of people.”

Not as widely discussed, but of equal importance, is the finding that 55 percent of users exhibit at least mild problematic use, which corresponds to more than a billion people. 

These findings show that widespread problematic use is the direct result of product-design decisions. Company executives and employees have repeatedly stated that they intentionally design Meta’s products to maximize engagement and increase usage time. For example, in 2017, Facebook’s founding president, Sean Parker, stated, “The thought process that went into building these applications, Facebook being the first of them, ... was all about: ‘How do we consume as much of your time and conscious attention as possible?’”

LEARN MORE IN SUPPLEMENT 1

We highlighted one of Meta’s three known surveys of all users. Visit Supplement 1 to find the other two.

Line 3. Surveys of Experts

Expert opinions play an important role when a company evaluates whether it is harming its users. A single survey of “mental health clinicians” done by Meta provides the basis for our third category of research, surveys of experts.

While information about this study is limited to what was unsealed in the School Districts v. Social Media brief, it reveals that Meta sought input from clinical professionals and offers a window into how frontline mental-health experts perceive the relationship between social media use and mental health.

3.1 Unnamed Survey of Experts 1 — Clinicians (2022)

(Quoted and abbreviated from pp. 28 & 30–32, School Districts v. Social Media.)

OVERVIEW: Meta conducted this survey of experienced “mental health clinicians” to get their perspectives on the potential connection between social media and mental health.

ACCESS: Details and findings from the study became publicly available on November 21, 2025 from the unsealing of the recent School Districts v. Social Media litigation. Due to the current status of legal proceedings, the full internal documentation of this study is not yet unsealed for public access.

METHODS: Meta conducted a “mixed method study,” surveying and interviewing over 1,000 “mental health clinicians (including psychiatrists, psychologists, therapists, social workers).” Eligible clinicians had “2+ years experience post-licensure” and “Provided care for at least 30 patients in the past 3 months.” (p. 28)

FINDINGS:

  • This study revealed that “the majority of clinicians believe that social media can be addictive,” with fully 85% of U.S. clinicians endorsing this proposition. (p. 28)
  • Meta’s 2022 survey of 1,000 mental health clinicians confirmed what its own researchers already knew—81% of clinicians said social media exacerbated patients’ anxiety disorders, and 78% said it worsened depressive disorders. (p. 31)
  • Over 30% of clinicians believed that social media “had a negative role” in suicidal behavior disorder and non-suicidal self-injury disorder. (p. 32)

COMMENTS FROM T&S LAB: Clinicians offer an important form of eyewitness testimony on the mental-health impacts of social media. To our knowledge, few external studies systematically survey clinicians on this question, aside from one small study in Australia and New Zealand. This internal Meta study offers compelling eyewitness evidence that clinicians are concerned about the impacts of social media on their clients. These findings are consistent with the experience of Lotte Rubæk, a clinical psychologist and advisor to Meta on suicide prevention and self-harm, who resigned after accusing Meta of “turning a blind eye” to harmful content on Instagram and “repeatedly ignoring expert advice.”

This study does not specify the ages of patients, so the findings cannot be attributed exclusively to adolescents; they nevertheless remain highly relevant to debates about youth mental health.

LEARN MORE IN SUPPLEMENT 1

Line 4. Cross-Sectional Studies

Our fourth line of evidence comprises Meta’s cross-sectional (or correlational) studies. These studies involve examining associations between variables (e.g., time spent on social media and internalizing disorders such as depression), and use statistical analyses to identify correlations and patterns in the data. Here we present two studies that provide valuable findings on social comparison and sensitive content, respectively. 

4.1 Social Comparison on Instagram Wellbeing Research (November 2018)


OVERVIEW: This is Meta's earliest publicly accessible comprehensive investigation into youth mental-health harms on its platform. It was presented internally in November 2018 under the title Social Comparison on Instagram: Wellbeing Research. The study aimed to quantify the prevalence of social comparison, identify vulnerable populations, pinpoint comparison triggers, evaluate social comparison’s relationship with authentic expression, and assess impacts on engagement, well-being, and emotions.

ACCESS: This study was released by Frances Haugen in 2021. It is one of the 99 exhibits in Tennessee v. Meta.

METHODS: This study employed a mixed-methods approach. The first method was a survey of 5,793 randomly selected respondents from seven countries (Brazil, Canada, France, Great Britain, Japan, Mexico, and the United States). The second method was a set of 14 paired interviews (two participants at a time) conducted with habitual and daily active users, for a total of 28 participants aged 15–24 from the United States. Separate from those two components, the document describes an experiment “to test if priming people [about] their past social comparison experiences may affect wellbeing.” “Survey takers [were] randomly put into two groups, where the question order is flipped.” “[This] can test whether social comparison causes lowered well-being or is only associated with it.” Study uses log data.

FINDINGS:

  • “51% of people do social comparisons on IG. Positive (33%) and negative (35%) social comparisons are almost equally prevalent.”
  • “Women do more social comparison than men (53% vs. 43%),” but “women are more affected by negative comparison, whereas men are more affected by positive ones.”
  • “Teen girls and young women do more social comparison, especially negative social comparison,” and “logistic regression with log data shows that age and gender are the strongest predictors of negative social comparison.”
  • “All else equal, the odds of experiencing negative social comparison on IG… for females are 1.84 times larger compared to males.”
  • “All else equal, the odds of experiencing negative social comparison on IG… for 13-17 year olds are 4.42 times larger compared to being 25+ years old.”
  • “Controlling for tenure, engagement level, no. of followers and country, the odds for teens girls to do negative SC are 8x and for young women 5x larger than 25+ years old men.”
  • After asking, “...when you felt worse about yourself after seeing things on Instagram, how long did that negative feeling last when it happened last time?”, 38% of respondents reported negative feelings lasting only minutes, while 33% experienced deteriorating self-perception for “several months to a year.” Meta’s researchers also documented age differences in how long these negative feelings lasted: “Younger people experienced longer lasting negative emotions.”
  • The internal researchers concluded, “Negative social comparison is associated with worsened well-being measures across the board,” and that this “experiment shows at least some of this association is causal.”
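To make the odds ratios above concrete, here is a short worked example. The baseline prevalence below is hypothetical (Meta’s underlying counts are not public); the sketch shows only how an odds ratio such as the reported 4.42 translates into a difference in prevalence.

```python
def odds(p):
    """Convert a probability into odds: p / (1 - p)."""
    return p / (1 - p)

def prob(o):
    """Convert odds back into a probability: o / (1 + o)."""
    return o / (1 + o)

# Hypothetical baseline: suppose 25% of 25+ users report negative
# social comparison on IG (this number is illustrative, not Meta's).
p_adults = 0.25

# The reported odds ratio for 13-17-year-olds vs. 25+ users is 4.42,
# meaning teens' odds are 4.42 times the adults' odds, all else equal.
odds_teens = 4.42 * odds(p_adults)
p_teens = prob(odds_teens)

print(f"adult odds:              {odds(p_adults):.3f}")
print(f"teen odds:               {odds_teens:.3f}")
print(f"implied teen prevalence: {p_teens:.1%}")
```

Note that an odds ratio is not a ratio of probabilities: under this illustrative baseline, a 4.42 odds ratio corresponds to roughly 60% prevalence among teens versus 25% among adults, a probability ratio of about 2.4.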

FIGURE/IMAGE:

Figure 3. Table of “Odds ratio of logistic regression on negative social comparison on IG” for specific demographics, including ages 13–17 vs. ages 25+ (4.42), ages 18–24 vs. ages 25+ (2.814), and female vs. male (1.84) (left), along with their statistical-significance status (right).

COMMENTS FROM T&S LAB: The Social Comparison on Instagram: Wellbeing Research study found clear and substantial associations between Instagram use, negative social comparison, and harmful well-being outcomes. These associations are stronger among adolescents, especially teen girls. The patterns observed here closely match what much of the academic research, including our own, has found for years.

This study directly contradicts claims by some researchers that there is “no association” between social media use and teen mental health outcomes. The evidence from Meta’s own internal research confirms that there is an association.

We also note that Meta researchers embedded an RCT (Randomized Controlled Trial) in this otherwise cross-sectional study: they randomly assigned some users to answer the social comparison question before the well-being question, versus the other order. They did this in order to separate correlation from causation. They found that reflecting on social comparison on Instagram caused responses to the next question, about well-being, to decline relative to those who answered the well-being question first. They concluded that this “experiment shows at least some of this association is causal.”

LEARN MORE IN SUPPLEMENT 1

4.2 Sensitive High Negative Appearance Comparison (“High-NAC”) Content – A 3-Study Series (2021)

OVERVIEW: Through this series of studies, Meta researchers aimed to develop a systematic definition of “sensitive content.” They integrated user behavioral data with psychological survey responses to identify which content categories most strongly trigger Negative Appearance Comparison (NAC) among Instagram users. Researchers analyzed how specific content classifications, platform surfaces (e.g., Reels, Feed, Explore), and account relationships (whether accounts were connected by real-world relationships or were strangers) correlated with varying rates of NAC induction. The findings included actionable recommendations for reducing user exposure to sensitive content and enhancing overall user well-being.

ACCESS: This study series appears to have been conducted in 2020–2021. The studies were initially released to the public by Frances Haugen in 2021. Each study is one of the 99 exhibits in Tennessee v. Meta.

METHODS: Methods for each of the three studies can be found in Supplement 1:

  • Methods for Study 4.2.1
  • Methods for Study 4.2.2
  • Methods for Study 4.2.3

FINDINGS:

  • “Women and teen girls experienced the highest exposure rates, with approximately 20% of their content classified as High-NAC.” In contrast, men and boys saw roughly half this amount, averaging around “10% High-NAC content exposure.” (Study 4.2.1)
  • Researchers identified that when “11-13% of a user's content consumption consists of High-NAC material,” the majority of teen girls begin to experience comparison issues.
    • “Using survey responses of comparison … when people saw more than 11-13% sensitive content, the majority reported experiencing appearance comparison at least half the time. By that measure, approximately 70% of teen girls may see ‘too much’ sensitive content” (Study 4.2.2).
  • Some topics that are likely to make viewers, especially teen girls, feel worse about their bodies include: fashion and beauty, relationships, western pop stars who emphasize their bodies, and images that emphasize women’s bodies generally. “These High NAC [negative-appearance comparison] subtopics comprise 25% of what people see on Instagram (33% for teen girls)” (Study 4.2.2).
  • In terms of differences across platform surfaces, “People see more High-NAC content on average on Explore than on Feed and Stories (18% vs. 14% and 9%, respectively), these differences persist across demographic groups”, and “Low-NAC content (e.g. outdoor activities, soccer, art) appeared at lower rates across the board. Explore, Feed, and Stories had rates of 11%, 10%, and 6%, respectively” (Study 4.2.3).
  • Demographically, “Women and teen girls saw higher rates of High-NAC content than men and boys” (Study 4.2.3).

FIGURE/IMAGE:

Figure 4. Examples of images that Meta researchers found elicited greater appearance comparison.

LEARN MORE IN SUPPLEMENT 1: Study 4.2.1; Study 4.2.2; Study 4.2.3

We highlighted two of the four publicly available cross-sectional studies that Meta ran. Visit Supplement 1 to find the other two known cross-sectional studies.

Line 5. Longitudinal Studies

Longitudinal studies track the same participants over time, examining how levels or changes in one variable at time 1 relate to levels or changes in another variable at time 2. This temporal dimension allows researchers to make stronger inferences about causality than cross-sectional designs, as they measure whether changes in one behavior (such as spending much more time on Instagram) precede or follow changes in well-being or mental illness. If high levels of depression come first and are followed by increases in Instagram use, it would suggest (but not prove) that depressed people choose to use Instagram more, perhaps searching for help. But if high levels of Instagram use predict later depression, it would suggest (but not prove) that heavy Instagram use causes subsequent depression.
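This cross-lagged logic can be sketched with synthetic data. Everything below is invented for illustration (the variable names and effect sizes are ours, not Meta’s): we simulate a world in which wave-1 Instagram use feeds wave-2 depression but not vice versa, and show how the asymmetry of the two lagged correlations reveals that direction.

```python
import random
import statistics

random.seed(0)
n = 10_000

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Wave 1: standardized Instagram-use and depression scores.
ig_w1 = [random.gauss(0, 1) for _ in range(n)]
dep_w1 = [random.gauss(0, 1) for _ in range(n)]

# Wave 2: depression depends partly on earlier Instagram use;
# Instagram use depends only on earlier use, not earlier depression.
dep_w2 = [0.4 * u + 0.5 * d + random.gauss(0, 1)
          for u, d in zip(ig_w1, dep_w1)]
ig_w2 = [0.6 * u + random.gauss(0, 1) for u in ig_w1]

# The asymmetry of the two lagged correlations is the longitudinal
# signal that a single cross-sectional wave cannot provide.
r_use_then_dep = corr(ig_w1, dep_w2)   # substantial
r_dep_then_use = corr(dep_w1, ig_w2)   # near zero

print(f"use(t1) -> depression(t2): r = {r_use_then_dep:+.2f}")
print(f"depression(t1) -> use(t2): r = {r_dep_then_use:+.2f}")
```

In real panel data, of course, both lagged paths may be nonzero and third variables can drive both, which is why the inferences described above remain suggestive rather than conclusive.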

While several Meta studies were originally designed with longitudinal intentions, many were apparently halted before collecting multiple waves of data, or else the findings from follow-up waves were not included in the documents released to date.

5.1 “People Disagree Content” Seen by Teens Reporting Different Levels of Body Dissatisfaction After Viewing Content on IG (2024)

Note: The full study release specifies that this was a longitudinal study, but only results from the first wave of data collection are presented to the public.

OVERVIEW: This 2024 Instagram study examined whether teens experiencing high levels of body dissatisfaction were algorithmically served more “body-focused content” than teens experiencing less. The research tracked the same cohort of teens over the 2023–2024 school year, measuring their reported body dissatisfaction in an initial survey wave and then analyzing what content they viewed over the subsequent three months.

The researchers wrote, “The goal of the analysis… is to use the research-derived people disagree content framework (formerly known as teen sensitive content), including objective labeling guidelines for body-focused content, to provide a directional assessment of whether teens in the MYST study [see study 4.4 in the Supplement 1] who report frequently experiencing body dissatisfaction after viewing others' posts on IG may also see more body-focused content and more eating disorder-adjacent content than other teens in the study.”

Definitions of terms:

  • "People Disagree Content": “unrestricted content that some parents (as well as teens and experts) are not aligned with teens seeing, especially in high quantities. Alignment on the appropriateness of this content for teens can vary across individuals and cultures. People Disagree Content captures content that is NOT part of our current content policies.”
  • "Body-focused content": “any content in which there is a prominent display of body shapes or sexualized body parts (specifically, chest, buttocks, or thighs). It also includes content in which there are explicit judgments or comparisons of body shapes and compositions.”
  • "Eating Disorder-Adjacent content": “Content related to topics that do not explicitly reference eating disorders (ED), and are thus not classified as violating or borderline, but which may be related to, and/or may be triggering to someone experiencing disordered eating (i.e., body-focused, weight loss, dieting, health-related). This may include content related to disordered eating and/or negative body image, or ED topics that fall below the threshold for violating or borderline content policies due to the scope of these policies (e.g., [sic] pica, compulsive overeating) or the specified strength of signaling (“goal weight” without mention of “current weight”).”

ACCESS: Jeff Horwitz, writing at Reuters, released the full study. The release coincided with the unsealing of the recent School Districts v. Social Media litigation on November 21, 2025, which also included findings from this study.

METHODS: Survey of 1,149 U.S. teens and their parents. Data was collected from September through December 2023 as a three-month longitudinal study, with “content samples derived from Meta and Youth Social Emotional Trends (MYST)” survey data paired with teens' behavioral data. “Teens completed several questions that assess the extent to which and frequency with which they experience body dissatisfaction after viewing others' posts on IG: how often they compare themselves to others on IG, and how often they feel worse about their own bodies after engaging in such comparison.” “[T]eens in the sample were grouped into two groups: 1) teens who scored high on the IG content-specific body dissatisfaction scale (i.e., reported that they often compared themselves to others on IG, and frequently felt worse about their bodies after doing so; n=223 teens), and 2) all other teens in the sample (n=795 teens). We then generated a random, VPV-weighted sample of 500 pieces of content viewed by teens in each group over the 3 month period ending 5/12/24.” “A well-being subject matter expert researcher systematically labeled all available pieces of sampled content for each group” and that labeling was compared to Meta’s “content classifiers.” Meta notes that “these results represent directional, qualitative, estimated prevalence of teen sensitive content topics for US teens in this sample who report frequent body dissatisfaction after viewing other people's posts”, but that “[t]hese estimates may not be generalizable to the IG teen-using population.”

Study uses log data.

FINDINGS:

  • “Three-quarters (74%) of teens who reported frequent IG content-specific body dissatisfaction at wave 1 identified as female, compared to 50% of the teens who reported none to occasional IG content-specific body dissatisfaction. Roughly half of the teens in each group were early teens (ages 13-15 years old; 54% in the high group and 50% in the low-to-moderate group).”
  • “Teens who report high IG content-specific body dissatisfaction at wave 1 may have seen almost three times as much body-focused/ED-adjacent content compared to other teens.”
  • Meta's detection tools missed nearly all the problematic content that human reviewers identified, suggesting the systems aren't sensitive enough to measure the problem accurately. Researchers stated that this “is not necessarily surprising, given that People Disagree content captures unrestricted content that is not covered by Meta's current content policies.”
  • “It is not possible to establish the causal direction of these findings (e.g., teens who report high IG content-specific body dissatisfaction may also be more likely to seek out these types of content, or a 3rd factor may explain the observed patterns).”

FIGURE/IMAGE:

Figure 5. Table of the estimated prevalence of each content sub-theme classified as “People Disagree Content” among the group of teens in the sample who experience “frequent IG content-specific body dissatisfaction” and the group of teens that do not.

COMMENTS FROM T&S LAB: This study is a strong example of a common pattern: those with pre-existing vulnerabilities often experience different (i.e., more severe) outcomes from the same exposures. In this study, teens who reported frequent, Instagram-specific body dissatisfaction (a group that was disproportionately female) were subsequently shown much higher quantities of body-focused and eating-disorder–adjacent content — about three times more (10.5% vs. 3.3%). This means that Instagram’s algorithm was serving up more body-focused and eating-disorder-adjacent content to those who were most vulnerable.

This study, along with several others, indicates that Meta conducted longitudinal research with a degree of detail and specificity (because of their user log data) far beyond the studies conducted by academic researchers. At present only a single wave of those studies is publicly available. If released, these longitudinal studies would show whether use at one point in time predicts outcomes at a later point, which would help to establish causal relationships. In most cases, only cross-sectional findings are accessible. We hope that Meta will release more findings about the later waves of data collection. See Supplement 1, Section 5.

LEARN MORE IN SUPPLEMENT 1

We highlighted one of three publicly available longitudinal studies that Meta has run (or intended to run). Visit Supplement 1 to find the other two known studies.

Line 6. Experimental Studies

Our sixth line of evidence is experimental studies, some of which use random assignment (Randomized Controlled Trials), which is the gold standard for establishing causal relationships. Unlike cross-sectional research that examines naturally occurring patterns, these experiments involve deliberate manipulation of variables under controlled conditions. Participants are randomly assigned to a treatment condition (such as reducing their use of social media for a week) versus a control condition (such as making no changes). If the outcome differs between conditions (such as decreased depression), and if there are no other differences between the two groups (because they were randomly assigned), then it follows that the manipulation caused the outcome difference.
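The logic of random assignment can be sketched in a few lines of simulation. All the numbers below are invented for illustration; the point is only that, because assignment is a coin flip, the simple difference in group means recovers the true effect without any modeling of confounders.

```python
import random
import statistics

random.seed(42)
n = 5_000
TRUE_EFFECT = 0.5  # invented: reduced use lowers the depression score by 0.5

treatment, control = [], []
for _ in range(n):
    baseline = random.gauss(10.0, 2.0)  # hypothetical depression scale
    noise = random.gauss(0.0, 1.0)
    if random.random() < 0.5:           # coin-flip assignment
        # Treatment group: reduce social media use for the study period.
        treatment.append(baseline - TRUE_EFFECT + noise)
    else:
        # Control group: no change in use.
        control.append(baseline + noise)

# Because assignment was random, baselines are balanced in expectation,
# so the raw difference in means estimates the causal effect.
estimate = statistics.fmean(control) - statistics.fmean(treatment)
print(f"estimated effect of reducing use: {estimate:.2f}")  # near 0.5
```

With observational data, by contrast, the same difference in means would be confounded by whoever chose to reduce their own use.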

Experimental studies also allow researchers to test potential interventions and safeguards before (or after) implementing them at scale. They offer a controlled environment that helps us understand not just whether harm occurs, but how and why it occurs — and sometimes, what can be done about it.

Here we present Meta’s internal experimental research, which examined causal effects of platform features, as well as the effects of reduction of use on user well-being outcomes.

6.1 Project Daisy (2019)

OVERVIEW: Begun in late 2019, this project included multiple studies that tested the effects of hiding public “Like” counts on user posts on Instagram and Facebook (“Pure Daisy”) and the effects of maintaining Like counts while adding alternative displays, such as spelling the number out (“Popular Daisy”). The primary goal of Pure Daisy (our focus in this section) was to assess whether removing public Like counts would reduce posting pressure and social comparison tied specifically to visible Like metrics, particularly among teenage users. Across multiple experiments and analyses, Meta researchers found that hiding Like counts reduced the frequency with which users, especially teens, compared Like counts and cared about the number of Likes their posts received.

ACCESS: Internal materials related to Project Daisy became public following disclosures by Frances Haugen in 2021. Complete internal presentations on Project Daisy are accessible via https://fbarchive.org/. Three internal documents are available: "Project Daisy — version presented to Mark" (odoc003714w35), "Project daisy initial version of deck presented to Mark" (odoc003705w35), and "Project Daisy, Likes and Social Comparison" (odoc003630w35).

METHODS: Meta tested multiple Daisy variants, including Pure Daisy (which hid public Like counts from all viewers except the post creator) and Popular Daisy (which preserved Like counts while altering how they were displayed). Testing on Instagram included a six-week Global Network (cluster) experiment conducted in late 2019, with approximately 3,000 users per condition (pure, popular, control) across multiple countries. Facebook testing was more limited in scope and was conducted in Australia. They also conducted several qualitative surveys before and during the execution of Project Daisy.  Study uses log data. 

FINDINGS:

“Pure Daisy likely reduces negative social comparison. People with Pure Daisy tended to report 0.10 points less negative social comparison (on a five-point scale, ± 0.08, p < 0.05). Popular Daisy had no effect on negative social comparison (−0.01 ± 0.08, p = 0.76).

Pure Daisy also likely reduces negative social comparison among highly active teens and teen girls. We observed similar effects among highly active teens (−0.18 ± 0.20, p = 0.07) and teen girls (−0.24 ± 0.23, p < 0.05). …

Are these findings causal? Yes (with caveats). The Daisy network test is a randomized experiment, but responding to a survey on social comparison is not random. Still, matching (as described above) to balance observable covariates can account for some of this response bias. …

Is this finding any different from what the previous IG Daisy surveys found? In the present survey, the measure of negative social comparison was a broader one; the IG Daisy surveys asked about social comparison specifically in relation to Like counts. Further, in IG Daisy surveys, the question specifically about comparing likes also did not differentiate between positive and negative social comparison. As such, if we consider the findings as a whole, not only does Pure Daisy reduce negative social comparison from Likes counts, but it also likely reduces feelings of negative social comparison overall.

Also, how does this finding relate to other previous work?

This finding is also consistent with the other findings here (e.g., on Like counts being related to greater negative comparison), as well as prior qualitative observations where people talked about how they felt worse after seeing Like counts, or how they treated getting Likes as a form of competition. Still, we do not expect its impact on negative social comparison to be large, given that social comparison on Instagram occurs in more ways than just Like counts.”

COMMENTS FROM T&S LAB: Project Daisy is notable because it tested a concrete design change intended to reduce posting pressure and social comparison by hiding public Like counts. While it may not have been a fully randomized, controlled trial, Meta implemented the intervention in live settings and observed reductions in negative social comparison when like counts were hidden. These effects were observed across demographic groups, including teens. The effects were observed many times, across multiple countries.

Meta’s own researchers concluded that this work demonstrated a causal link between removing like counts and reduced feelings of negative social comparison.

This is a clear example of Meta retaining a design feature that its own research had linked to harmful outcomes, because removing the feature reduced ad revenue by 1%.

LEARN MORE IN SUPPLEMENT 1

6.2 Project Mercury (2019)

(Quoted and abbreviated from p. 26, School Districts v. Social Media)

OVERVIEW: This study is Meta’s own social media reduction experiment that they ran in September 2019 with Nielsen, “an experimental deactivation study.” The project, code-named Project Mercury, asked a group of users to deactivate their Facebook and Instagram accounts for one month. Meta claimed “our design is of much higher quality” than the existing literature and that this study was “one of our first causal approaches to understand the impact that Facebook has on people’s lives… Everyone involved in the project has a PhD.”

ACCESS: Details and findings from the study became publicly available on November 21, 2025, from the unsealing of the recent School Districts v. Social Media litigation. Due to the current status of legal proceedings, the full internal documentation of this study is not yet unsealed.

METHODS: This was “an experimental deactivation study, in which we will randomly ask some people to stop using Facebook and Instagram for a month (compared to a group who will continue to use as normal), helping us explore the impact that our apps have on polarization, news consumption, well-being, and daily social interactions.” Meta had run a pilot test of this “deactivation study” model with an unknown sample size; the results of the pilot led to the halting of the project. Meta partnered “with Nielsen and use[d] a combination of (a) surveys, (b) log data from Facebook and Instagram, and (c) usage data from smartphones.” (p. 26) Study uses log data.

FINDINGS:

  • The researchers found, in the pilot test, that “people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness, and social comparison.” (p. 26)

COMMENTS FROM T&S LAB: This social media time-reduction experiment is very similar to the main body of experiments at the heart of the debate. These are experiments in which some people are randomly assigned to reduce or stop their use of social media while others make no change, and the dependent variable (the outcome measure) is the users’ levels of depression, anxiety, or other measures of mental health or wellbeing. Meta’s own researchers found — in an experiment they believed was better designed than any external study done thus far — that reducing time on their platforms improved mental health and well-being, specifically depression, anxiety, loneliness, and social comparison. This is consistent with recent meta-analyses (Burnell et al., 2025) and directly contradicts claims of “no effect” by scholars such as Ferguson (2025), whose meta-analysis has been shown to suffer from many flaws.

The Meta researchers themselves (all of whom have PhDs) stated that “the Nielsen study does show causal impact on social comparison.” They acknowledged directly that social media reduction caused these improvements.

LEARN MORE IN SUPPLEMENT 1

Line 7. Internal Conceptual Models and Review Papers

Our seventh and final category encompasses research that doesn’t fit neatly into traditional empirical study designs yet remains essential for understanding Meta’s internal knowledge base. Note that the studies described below generally do not involve new data collection. Rather, they are literature reviews and conceptual models developed within Meta, usually drawing on the academic literature.

These documents provide a window into Meta’s internal thinking and priorities. Including this material completes our review of Meta’s internal research landscape, showing not just what they studied empirically, but how they thought about the problems at hand.

7.1 Teen Ecosystem (May 2020)

OVERVIEW: This presentation aggregated a wide array of information on “[a]dolescent development concepts, neuroscience[,] as well as nearly 80 studies of our own product research” to assess Instagram's appropriateness for teenage users and “[e]stablish our foundation of existing teen product knowledge and identify unmet needs in the teenage IG experience.” The presentation illustrates Meta’s interest in and understanding of the teenage psyche.

ACCESS: This review was released by Frances Haugen in 2021. It is one of the 99 exhibits in Tennessee v. Meta.

METHODS: N/A

FINDINGS:

  • “The teenage brain is usually about 80% mature. The remaining 20% rests in the frontal cortex… At this time teens are highly dependent on their temporal lobe where emotions, memory and learning, and the reward system reign supreme.”
  • “Teens’ decisions and behavior are mainly driven by emotion, the intrigue of novelty and reward… While these all seem positive, they make teens very vulnerable at the elevated levels they operate on. Especially in the absence of a mature frontal cortex to help impose limits on indulgence in these.”
  • “Teens are insatiable when it comes to ‘feel good’ dopamine effects.”
  • “[D]ue to the immature brain [teens] have a much harder time stopping even though they want to — our own product foundation research has shown teens are unhappy with the amount of time they spend on our app.”
  • “[S]adly, a short term reward and inexperience makes teens prone to risky behavior and there are plenty that present themselves online and on Instagram. This could be engaging with predators, consuming dark content, sharing nude photos or copycat self-harm.”
  • The study documented safety concerns, including the finding that “7% of teens report experiencing bullying on IG” and that “40% of all bullying experiences are reported in DM.”
  • Teens show heightened anxiety about content quality and judgment due to developmental concepts like "imaginary audience[s]," while valuing diverse content including humor, competition, and music beyond one-dimensional preferences.

FIGURE/IMAGE:

Figure 6. Screenshot of the summary of the Teen Ecosystem presentation.

COMMENTS FROM T&S LAB: This review shows that Meta employees had internal knowledge of the likely negative impacts of many of the features on Instagram and Facebook. Meta understood that teens were particularly vulnerable to compulsive use, social pressure, and risky online behaviors, yet continued to build and optimize products that leaned into these developmental sensitivities in order to keep up with competing platforms.

LEARN MORE IN SUPPLEMENT 1

7.2 Unnamed Review Paper 1 (2020)

(Quoted and abbreviated from p. 29, School Districts v. Social Media)

OVERVIEW: In 2020, an Instagram researcher (who holds a doctorate in public health) developed a “conceptual model of adolescent social media behaviors and mental health,” in collaboration with his colleagues. (p. 29)

ACCESS: Details and findings from this review became publicly available on November 21, 2025, from the unsealing of the recent School Districts v. Social Media litigation. Due to the current status of legal proceedings, the full internal documentation of this review is not yet unsealed.

METHODS: N/A

FINDINGS:

  • The model shows that problematic social media use is linked to poor sleep, low self-esteem, negative body image, and mental health challenges—each of which fuels further problematic use, creating a self-reinforcing cycle (“there’s a feedback loop with mental health challenges”). It also identifies adolescents with “existing vulnerabilities” as particularly at risk, whether those vulnerabilities operate independently or in combination. (p. 29)

FIGURE/IMAGE:

Figure 7. Visual representation of the theoretical model illustrating the connections between social media use and mental-health challenges. This image comes directly from the School Districts v. Social Media litigation.

COMMENTS FROM T&S LAB: This internal review shows that Meta researchers had a sophisticated understanding of the pathways through which social media use can contribute to a range of harmful outcomes. The model explicitly identifies feedback loops linking problematic use, poor sleep, low self-esteem, negative body image, and mental-health challenges, rather than treating these as isolated effects. The model also makes clear that Meta knew its product causes poor sleep, low self-esteem, and negative body image, especially in the most vulnerable population: adolescent girls, particularly those who already have mental-health challenges and/or face economic hardship.

LEARN MORE IN SUPPLEMENT 1

We highlighted two out of nine publicly available reviews that Meta conducted. Visit Supplement 1 to find the other seven known reviews.

Conclusion

Meta has carried out many studies on the effects of its products on adolescent wellbeing. We have found descriptions of 31 of them. Across these 31 studies, which used a variety of methods, the company learned repeatedly that its products — particularly Instagram — are harming young people on a vast scale. The company’s leaders know about many of these harms, and in several identifiable cases they failed to act. Meta researchers discussed these findings with one another, noting that Meta’s leadership reacted negatively, rather than constructively, when it learned about harms. This is exactly what happened to whistleblower Arturo Béjar when he reported the results of the BEEF survey (Study 1.1), as he explained in his Senate testimony.

Here is one more example, an excerpt from a chat between two Meta researchers:

“oh my gosh yall IG is a drug…We’re basically pushers… We are causing Reward Deficit Disorder bc people are binging on IG so much they can’t feel reward anymore…like their reward tolerance is so high…I know Adam [Mosseri] doesn’t want to hear it – he freaked out when I talked about dopamine in my teen fundamentals leads review but its undeniable! Its biological and psychological….the top down directives drive it all towards making sure people keep coming back for more. That would be fine if its productive but most of the time it isn’t…the majority is just mindless scrolling and ads.”

School Districts v. Social Media, p. 33

When Mark Zuckerberg testified before the U.S. Senate, under oath, he made two statements that are true: “Mental health is a complex issue,” and “There is a difference between correlation and causation.” But when he said, “The existing body of scientific work has not shown a causal link between using social media and young people having worse mental health outcomes,” he was saying something at odds with Meta’s own research.¹⁴

This collection of studies demonstrates that Meta used a great variety of methods to study the effects of its products, and that it found a great variety of harms, including evidence of causal impact.

The most compelling evidence of causality is found in Project Mercury, in which users were randomly assigned to stop using Facebook and Instagram for a week. The researchers found that those who stopped using these social media platforms experienced notable improvements in their mental health. Meta’s leadership discussed these findings and searched for ways to dismiss them. During the discussion, one Meta employee warned: 

“If the results are bad and we don’t publish and they leak, is it going to look like tobacco companies doing research and knowing cigs were bad and then keeping that info to themselves?”¹⁵

The results have leaked, and Meta is starting to look an awful lot like Big Tobacco.

Acknowledgements

We thank Mckenzie Love, Jacob Lebwhol, Anum Aslam, Arturo Béjar, and Casey Mock for their comments and feedback on this project.

Footnotes

1. “So when it comes to understanding what is happening to young people on social media platforms, the companies’ internal researchers have a front row seat on the action and a full transcript of the dialogue, while external academic researchers are forced to sit in the back row of the upper balcony, struggling to hear the dialogue and barely able to discern what’s happening onstage.”
Note: Academic research consistently finds both cross-sectional associations between time spent on social media and adolescent depression, and experimental effects showing that reducing social media use improves mental health. That these results emerge despite the obstacles described above is a testament to the robustness of the underlying associations and causal relationships.

2. “These studies span a range of nations and populations including adults, mixed-age users, experts, and adolescents.”
Note: While Meta has had other whistleblower leaks covering different topics (e.g., political polarization), this document focuses only on the research related to teenage mental health.

3. “The result is a detailed record of internal awareness of harm that complicates, and in some cases directly contradicts, Meta’s public narrative about uncertainty and ignorance.”
Note: There is far more material about these studies, and about additional studies, collected during the legal proceedings, but much of it has not yet been released to the public. We will update this site as new documents are unsealed.

4. “Nothing complex is black and white, so, expectedly, some of the internal studies include evidence that many adolescents point to benefits of using social media, such as feeling ‘more connected.’ We do not doubt that many young people experience these kinds of benefits.”
Note: This document focuses primarily on Meta’s internal findings related to harms associated with its products, rather than on benefits. Where relevant, findings related to benefits are noted in footnotes or referenced for context, but they are not the focus of this project.

5. “We organized the studies by research method. This Central Doc offers an overall view of the project, explaining each method and giving one or two salient examples.”
Note: In this Central Document we generally prioritized studies that rely on log-based data (behavioral user data collected by the company), have larger sample sizes and geographic diversity, and are most relevant to adolescent well-being and social media use.

6. “To compile these studies, we began by reading the collection of 99 internal exhibits released to the public in January 2024 by the Tennessee Attorney General.”
Note: Many of these exhibits originated from Frances Haugen’s and Arturo Béjar’s whistleblower disclosures.

7. “(We will refer to this brief as School Districts v. Social Media)”
Note: Though this lawsuit also involved Snap Inc., TikTok, and YouTube, we cite exclusively from the section on internal research, “Meta’s knowledge of harms,” on pages 21–33.

8. “Finally, we drew on a major lawsuit brought by U.S. school districts against Meta and other platforms (released November 21, 2025) that unearthed nine additional studies that had not been previously available to the public. (We will refer to this brief as School Districts v. Social Media throughout).”
Note: We do not have full access to the entirety of any study because Meta has chosen not to release them. Until it does so, these documents (and the way the studies are presented in the lawsuits) are our best resource for understanding what Meta did, what they learned, and when they learned it.

9. “Meta has conducted many different kinds of research — often using precise user-behavioral log data — and several of their methods offer direct or indirect evidence of causation.”
Note: Organizing these studies by methodology also allows a direct comparison between Meta’s internal research and the findings of academic researchers. We have written a review of the academic literature for the upcoming 2026 World Happiness Report, where we use almost all of the same categories as those listed above. You can find the academic essay here: Social Media is Harming Adolescents at a Scale Large Enough to Cause Changes at the Population Level.

10. “Studies designed as longitudinal research but conducted only once are classified as surveys, and experiments lacking sufficient methodological documentation are treated as descriptive studies.”
Note: These studies include: 1) Social Comparison on Instagram Wellbeing Research (November 2018), which we categorized as a cross-sectional study, though it also contains an experimental component; 2) Social Comparison on Instagram (April 2020), a longitudinal study that collected data in two waves (100,000 participants in the first wave, 15,000 in the second) but did not report any results from the second wave; and 3) the Bad Experiences and Encounters Framework (BEEF) Survey (2021), which was designed as an experiment measuring the effectiveness of safety tools over time but was halted after the initial wave of data collection, leaving insufficient information on the experimental groupings.

11. “e.g., ‘Have you ever felt worse about yourself because of other peoples’ posts on Instagram?’”
Note: Response options for all questions are: “Yes, during the last 7 days,” “Yes, but more than 7 days ago,” and “No.”

12. “We then generated a random, VPV-weighted sample of 500 pieces of content viewed by teens in each group over the 3 month period ending 5/12/24.”
Note: A “VPV-weighted sample” likely refers to a sample adjusted using Viewers Per View weights to correct for sampling bias, though Meta does not explicitly define this term in the study.

13. “A well-being subject matter expert researcher systematically labeled all available pieces of sampled content for each group,” and that labeling was compared to Meta’s “content classifiers.”
Note: Meta’s “content classifiers” are machine-learning systems that score content to identify potentially harmful material that is still permissible under Meta’s current content policy. These systems were both used and assessed for efficacy in this study.

14. But when he said, “The existing body of scientific work has not shown a causal link between using social media and young people having worse mental health outcomes,” he was not correct.
Note: For an external 2025 meta-analysis showing causal benefits of reducing social media use, see Burnell et al. (2025).

15. “If the results are bad and we don’t publish and they leak, is it going to look like tobacco companies doing research and knowing cigs were bad and then keeping that info to themselves?”
Note: From School Districts v. Social Media.

The Tech and Society Lab conducts research to help the public understand the social and psychological effects of the technological changes reshaping our lives.