Facebook Accused of Tracking Teen Girls’ Deleted Selfies to Serve Beauty Ads

Posted by Alvin Palmejar


Facebook is facing renewed scrutiny following allegations that the social media giant tracked teen girls’ emotional behavior—specifically when they deleted selfies—in order to serve them targeted beauty advertisements. The disturbing claim comes from Careless People, a new book by former Facebook executive Sarah Wynn-Williams, who worked at the company between 2011 and 2017.

Wynn-Williams alleges that Facebook (now Meta) developed tools to monitor and exploit moments of emotional vulnerability in young users. In particular, she claims the company detected when teenage girls deleted selfies they had just posted, and then used that behavioral data to trigger beauty ads aimed at their insecurities.

“This is the business, Sarah. We’re proud of this,” one Meta executive reportedly told Wynn-Williams. “This is what puts money in all our pockets. And these statements make it look like it’s something nefarious.”

The tactic, if true, underscores the dark side of what many now call “surveillance capitalism”—a system in which personal data is not just collected, but weaponized to manipulate consumer behavior, particularly among the most vulnerable populations. Teenagers, especially girls, are already prone to anxiety, self-esteem issues, and pressure from social media. Critics argue that Meta’s practices deepen those problems for profit.

Back in 2017, an internal Facebook pitch deck obtained by The Australian suggested the company had already developed techniques to analyze the emotional states of users aged 13 to 17. The deck boasted that Facebook could identify when teens felt “worthless,” “anxious,” “stressed,” or “defeated” — and that such data could be used to deliver targeted ads at just the right (or wrong) moment.

Wynn-Williams adds that internal discussions revealed the company was also developing tools to help advertisers do this themselves—bypassing any need for Meta’s direct involvement. A deputy chief privacy officer reportedly confirmed that product teams were building systems to allow outside advertisers to access this kind of behavioral data on demand.

Though Meta denies offering tools to target users based on emotional vulnerability, its history suggests otherwise. Facebook has long resisted full transparency about how its algorithms operate, often citing proprietary technology or "anonymous and aggregated data" in its defense.

In response to questions about the book, Meta pointed to a 2017 blog post that dismissed the allegations as misleading. “Facebook does not offer tools to target people based on their emotional state,” the post claimed. “The analysis done by an Australian researcher was intended to help marketers understand how people express themselves on Facebook. It was never used to target ads.”

But many experts and former employees remain unconvinced. “The targeting of emotional moments—especially among kids—isn’t just unethical, it’s dangerous,” said one former Meta engineer who spoke on condition of anonymity. “They don’t see teens as people; they see them as revenue.”

The revelations have reignited debate about how far surveillance capitalism has gone. Once limited to tracking clicks and page views, modern advertising now leans heavily on psychographics: mining data about users’ feelings, mental health, and even fleeting moments of self-doubt to influence what they buy, click, or view.

“This is beyond manipulative,” said Dr. Angela Pritchard, a child psychologist. “We’re talking about using a young person’s private emotional experience—like deleting a photo they felt insecure about—as a trigger to show them an ad that reinforces the very insecurity that made them delete it. That’s predatory.”

Meta has also been accused of racially and demographically targeting emotional states, including a so-called “Hispanic and African American Feeling Fantastic Over-index,” suggesting the company mapped emotional behavior along racial lines for advertisers.

Despite growing backlash, there’s little regulation in place to curb such practices. Data privacy laws in the U.S. remain fragmented and weak compared to the EU’s stricter GDPR rules. And while tech executives often promise reforms, critics say these are usually cosmetic at best.

“Meta’s business model thrives on this kind of surveillance,” Wynn-Williams writes. “You can’t tweak it out of existence. You have to rethink the whole system.”

As public awareness of these tactics grows, pressure is mounting for lawmakers to act. But until then, every click, every selfie, and every moment of insecurity could continue to feed a machine designed not to protect—but to profit.
