Key Takeaways

  • Trust and safety are essential to the health of the digital economy.
  • Content moderation is dynamic, not one-size-fits-all.
  • Tools and approaches vary by platform size, function, and user base.
  • Rigid policies risk harming innovation and competition.
  • Middle Tech companies need flexibility to protect users responsibly.

Why Content Moderation Matters

The trust and safety of users is the foundation of the digital economy. Internet companies have a legal and ethical obligation to protect their customers, and they use a range of tools, from automated detection to human review, to keep platforms safe.

But content moderation is not one-size-fits-all. Different content types (livestreams, video, images, or text) require different approaches, and what works for one platform may be ineffective or even harmful on another. That's why Internet Works advocates for flexible, scalable policies that reflect differences in platform functionality, business models, size, and demographics.

How Platforms Moderate Responsibly

Section 230 is often misunderstood as just a shield for Big Tech, but it’s essential for platforms of all sizes. From neighborhood forums to job boards to creative marketplaces, Internet Works members rely on Section 230 to protect users and make responsible choices about content. Content moderation isn’t one-size-fits-all, and Section 230 ensures that companies can tailor their approach to fit their platforms.


What We’re Fighting For

At a time when Big Tech dominates the conversation, we’re advancing thoughtful, right-sized tech policy that promotes trust, protects users, and preserves the Internet as a place of limitless opportunity.

Section 230 protects free expression online and enables safety across the Internet. It is essential to innovation, shielding responsible platforms from costly lawsuits while holding bad actors accountable.

We support a national privacy law that preempts the patchwork of state rules and aligns with global standards, ensuring trust, clarity, and user protection without punishing smaller platforms.

Middle Tech companies are integrators and deployers of AI, expanding access and boosting productivity. We support a risk-based policy approach that places obligations on the right part of the AI supply chain.

Today’s digital markets favor incumbents. We advocate for tech policy that opens markets, right-sizes compliance, and gives Middle Tech a fair chance to compete and innovate.