Something just shifted, but you might not have felt it yet.
Even so, big things are happening...
A recent U.S. jury verdict against Meta and YouTube marks a turning point in how we think about social media harm. In this case, a young woman argued that years of exposure to their platforms contributed to serious mental health issues. And the jury agreed, finding that the platforms' very design played a significant enough role in her harm to award damages.
This is big. Not the settlement amount, which was relatively small for the pockets of both Meta and YouTube. Rather, it was the very idea that platform design itself, not just user content, can be harmful.
Until now, social platforms have hidden behind a law (Section 230) that shielded them from liability for user-generated content. This case shows a willingness by the court and jurors to hold them accountable for their delivery system. Can we get a hallelujah!
It's important to understand, though, that this isn't over: appeals are already in motion. The legal system moves slowly, and one verdict, even a groundbreaking one, does not instantly rewrite the rules for an entire industry.
Still. I think we're headed down the right path here.
Because we've been here before.
Asbestos producers, who delayed accountability until the damage was undeniable.
Tobacco companies, once untouchable, later exposed for knowingly engineering addiction.
Opioid manufacturers, whose practices reshaped public health policy after years of denial.
In each case, the sequence was similar:
- Harm becomes visible to the public
- Institutions resist or deny
- Cultural awareness grows
- Legal frameworks begin to catch up
We are somewhere in the middle of that cycle now with tech.
The only difference is that this time, the product is not physical; it's behavioral.
So what comes next? Sure, court cases matter. Regulation matters. But what tips it all over the edge?
If we're serious about harm reduction across all of society, not just on social media but in how products are designed, marketed, and monetized, then we have to be honest about where the real pressure comes from:
Us.
Not just as users, but as consumers, creators, voters, participants in digital culture.
Companies don't change out of goodwill. They change when it becomes financially necessary, legally unavoidable, or culturally unacceptable to continue as they are.
Right now, we are in the phase where cultural tolerance is eroding.
And I have to say it. This makes me really happy.
I'm happy because the current model is optimized for engagement at all costs, and that means it is fundamentally misaligned with human well-being. Instead of fostering community and collaboration, it rewards compulsion over intention, outrage over nuance, and quantity over quality.
And it disproportionately harms children, vulnerable populations, and people already struggling with mental health issues.
This is not an accidental by-product. It is a predictable outcome of the incentives in place. And it has to stop.
At this point, I don’t think it’s controversial to say that Meta, in particular, has caused substantial societal harm. Not just at the individual level but at scale. They have amplified political instability, enabled manipulation during global conflicts, and shaped perception in ways that are neither neutral nor transparent.
They have operated for years with too much leeway, too little oversight, and too few meaningful consequences.
So yes, this is where I stand. Meta needs to be brought to its knees.
Not out of spite, but because systems this powerful require accountability.
So let's ask the following:
- Is current Meta leadership fit for the responsibility it holds?
- Are existing regulations remotely adequate?
- Should a company with this much influence remain intact as a single entity?
What if we demanded that harm reduction be built into the design? That transparent systems shaped our reality? That platforms were community-oriented rather than life-sucking machines? That profit models were benevolent rather than merciless?
What would that world look like?