Key Takeaways

  • AI policy must reflect the role a company plays in the AI ecosystem.
  • Most Middle Tech companies deploy or integrate AI—not develop it.
  • Risk-based regulation ensures safety without stifling innovation.
  • Overly broad AI rules may entrench incumbents and harm competition.
  • Smart policy supports both trust and technological advancement.

Right-Sized Rules for a Rapidly Evolving Field

Artificial Intelligence is transforming how people connect, work, and interact online. Middle Tech companies are playing a vital role in democratizing access to these tools: boosting productivity and helping users discover, create, and grow.

But most of our members are deployers or integrators of AI tools—not developers of foundation models. That distinction matters. AI policy must be risk-based and assign obligations where they belong: with the entities designing or controlling model behavior, not the small companies putting those tools to practical use.

What We’re Fighting For

At a time when Big Tech dominates the conversation, we’re advancing thoughtful, right-sized tech policy that promotes trust, protects users, and preserves the Internet as a place of limitless opportunity.

Section 230 protects free expression online and enables safety across the Internet. It is essential to innovation, shielding responsible platforms from costly lawsuits while holding bad actors accountable.

Trust and safety are essential to the digital economy. We support flexible content moderation policies that reflect platform diversity and protect all users, not one-size-fits-all rules that burden startups.

We support a national privacy law that preempts the patchwork of state rules and aligns with global standards, ensuring trust, clarity, and user protection without punishing smaller platforms.

Today’s digital markets favor incumbents. We advocate for tech policy that opens markets, right-sizes compliance, and gives Middle Tech a fair chance to compete and innovate.