Content Moderation & Trust and Safety
The trust and safety of users is foundational to the digital economy. We support policies that reflect platform diversity and protect users.
Key Takeaways
- Trust and safety are essential to the health of the digital economy.
- Content moderation is dynamic, not one-size-fits-all.
- Tools and approaches vary by platform size, function, and user base.
- Rigid policies risk harming innovation and competition.
- Middle Tech companies need flexibility to protect users responsibly.
Why Content Moderation Matters
The trust and safety of users is the foundation of the digital economy. Internet companies have a legal and ethical obligation to protect their customers, and they use a range of tools, from automated systems to human review, to keep platforms safe.
But content moderation is not one-size-fits-all. Different content types (livestreams, video, images, or text) require different approaches. What works for one platform may be ineffective or even harmful on another. That’s why Internet Works advocates for flexible, scalable policies that reflect differences in platform functionality, business models, size, and demographics.
What We’re Fighting For

- Section 230 protects free expression online and enables safety across the Internet. It is an existential protection for innovation, shielding responsible platforms from costly lawsuits while holding bad actors accountable.
- We support a national privacy law that preempts the patchwork of state rules and aligns with global standards, ensuring trust, clarity, and user protection without punishing smaller platforms.
- Middle Tech companies are integrators and deployers of AI, expanding access and boosting productivity. We support a risk-based policy approach that regulates the appropriate part of the AI supply chain.
- Today's digital markets favor incumbents. We advocate for tech policy that opens markets, right-sizes compliance, and gives Middle Tech a fair chance to compete and innovate.