Key Takeaways

  • Section 230 protects free expression and encourages responsible content moderation.
  • It enables Middle Tech companies to operate without fear of existential lawsuits.
  • One-size-fits-all reforms would hurt competition and innovation.
  • Safe harbor laws must reflect a company’s size, risk, and role.
  • The framework supports both safety and accountability online.

Why 230 Matters

Section 230 is a foundational part of Internet law. It not only protects free expression; it also gives Internet companies the legal certainty they need to moderate harmful content and protect users. For Middle Tech companies, this is existential: without Section 230, they could face costly, opportunistic lawsuits that threaten their ability to operate, innovate, and serve their users.

At Internet Works, we advocate for thoughtful policies that preserve the balance at the heart of Section 230: protecting speech while allowing platforms to take responsibility for safety and security. As policymakers consider changes, it’s essential to recognize the diversity of companies impacted and avoid reforms that reinforce Big Tech dominance by making compliance too costly or complex for smaller platforms.

Myths vs. Facts

  • Myth: Section 230 primarily helps large social media platforms.
    Fact: Section 230 protects Internet sites and users by providing a legal basis for organizations of all shapes and sizes to moderate content. It prevents internet service providers (ISPs), Internet sites of all sizes, and users from being held liable for objectionable content posted by other users. Section 230 doesn’t just apply to social media platforms. It also protects online services that rely on volunteer community moderation, such as message boards, as well as other organizations including PTAs, schools, and libraries. Without the protection Section 230 provides, many of these organizations could face crippling lawsuits over user-posted content. Only the largest corporations or organizations could withstand the wave of litigation that could follow if Section 230 were weakened or repealed.
  • Myth: Under Section 230, Internet companies don’t have an incentive to moderate user content because it gives them blanket immunity.
    Fact: Internet companies must moderate because Section 230 doesn’t provide unconditional legal immunity. Section 230 allows platforms to decide what content appears on their services and to remove “objectionable” content without fear of legal liability. It does not provide blanket protection, so companies need to remove some types of harmful content, like spam or obscene material, from their platforms to protect users. Section 230 provides limited immunity to Internet sites that allow user-generated content, and since it went into effect, courts have ruled in multiple cases that there are limits to these protections. For example, providers have no legal immunity if they materially contribute to illegal user content.
  • Myth: Section 230 has failed and needs to be reformed.
    Fact: The Internet would not be the engine of economic growth, and the force for bringing the world closer together, that it has become without the protections of Section 230. Policymaker concerns are rooted in the way some companies have moderated content under Section 230. That does not mean the law has failed; it means the Internet community must work constructively to ensure users better understand the rules of the online services they use. Section 230 allows Internet companies to innovate and moderate content, helps promote freedom of expression online, and enables online businesses to offer products, services, and features that users expect. Section 230 does not provide blanket immunity to online businesses. Some kinds of content, including material that is criminal, like child pornography, have no Section 230 protections, and law enforcement agencies can prosecute online businesses that host illegal content.
  • Myth: Changing Section 230 will only impact the big companies, like Facebook and Google.
    Fact: Some proposals to overhaul Section 230 could have a fatal impact on millions of small and medium-sized Internet-based companies by imposing costly and inflexible regulations on content moderation. This would have the unintended outcome of further entrenching the largest online platforms. Big companies can afford to design expensive moderation programs to review user-generated content; other companies rely on technologies such as filters to flag obscene words and images or content that violates their terms of service. Organizations need flexibility to ensure the content moderation policies on their sites make sense for their particular focus or service. Smaller companies cannot afford to defend themselves against lawsuits by users upset about content moderation decisions, and may instead change their sites to avoid certain types of user-generated content altogether.
  • Myth: The First Amendment gives people the right to say anything they want on the Internet.
    Fact: The First Amendment constrains the government, not private organizations or companies. In fact, it empowers private entities to make their own decisions about association. Just as a retail store, as a private business, is free to refuse service to someone speaking rudely toward staff or other customers, Internet companies are free to set and enforce standards for appropriate content and behavior on their platforms.

What We’re Fighting For

At a time when Big Tech dominates the conversation, we’re advancing thoughtful, right-sized tech policy that promotes trust, protects users, and preserves the Internet as a place of limitless opportunity.

Trust and safety are essential to the digital economy. We support flexible content moderation policies that reflect platform diversity and protect all users, not one-size-fits-all rules that burden startups.

We support a national privacy law that preempts the patchwork of state rules and aligns with global standards, ensuring trust, clarity, and user protection without punishing smaller platforms.

Middle Tech companies are integrators and deployers of AI, expanding access and boosting productivity. We support a risk-based policy approach that regulates the right part of the AI supply chain.

Today’s digital markets favor incumbents. We advocate for tech policy that opens markets, right-sizes compliance, and gives Middle Tech a fair chance to compete and innovate.