Congress on Wednesday will examine a little-known law that has made the internet the space for self-expression and connection that it is today. The law, Section 230 of the Communications Decency Act (CDA 230), is one of the most speech-protective laws Congress has ever enacted, and it is now under threat.

The internet today provides us an indispensable platform to communicate freely with others who might otherwise be beyond reach. One person with an idea or a desire to create change can reach millions. April Reign coined the hashtag #OscarsSoWhite in 2015 and spawned an online movement drawing attention to the lack of representation of people of color in the nominated films.

https://twitter.com/ReignOfApril/statuses/555725291512168448

Like Reign, people around the world are leveraging the internet to fight back against everything from systemic racism to the tactics of oppressive regimes. And the benefits can be personal too — new parents needing advice on a stroller can turn to online parent message boards, home gardeners seeking lawn care tips can turn to DIY gardening blogs, and more.

This is possible because so many online forums enable speakers to communicate freely on their platforms. Wikipedia provides a free online encyclopedia in scores of languages, thanks to volunteers around the world. Yelp lets us give recommendations on anything from restaurants to nail salons. Consumer watchdog sites encourage the public to submit reports of corporate malfeasance. Environmental activists at sites like Frack Check WV ask citizens to submit horror stories about fracking in their communities. The Bed Bug Registry asks users to report bed bug infestations. And then, of course, there are Facebook and Twitter.

CDA 230 makes communication on these platforms possible by assuring online platforms that they generally won’t be liable for user-generated content. Yelp can’t be held legally responsible every time one of its users posts a potentially false negative review. The Bed Bug Registry doesn’t have to visit every hotel with a magnifying glass to confirm the public reports. And Facebook can offer a forum for billions of users to share their thoughts, pictures, memes, and videos freely without having to approve every post before it goes up.  

If it weren’t for CDA 230, no website owner would permit public posts knowing that the site could be investigated, shut down, sued, or charged with a felony over one user’s speech. Avoiding legal risk would require even the smallest blog to hire an army of lawyers to assess in real-time all content created and uploaded by users. It’s unaffordable. Instead, sites would avoid legal liability by simply refusing to host user-generated content at all. 

Of course, users make mistakes. We get facts wrong. We can be terrible to one another in ways that break the law, offend, or hurt. Bad actors can — and do — abuse the internet for nefarious and destructive purposes. But there are already safeguards in place to address harmful content not protected under the First Amendment, and Section 230 does not shield bad actors or lawbreakers. If you use Facebook to harass someone (please, don’t do that), you remain responsible for those actions.

CDA 230 also doesn’t stop online platforms from trying to cultivate orderly, pleasant, and useful sites. While the biggest social media companies, responsible for hosting the speech of billions, should resist calls to censor lawful speech, CDA 230 allows sites to delete abusive accounts, remove content that violates the site’s terms of service, or refuse to carry pornography without risking liability for the speech that they do host.

Despite these safeguards, the obvious good CDA 230 has done in creating a free, vibrant forum for speech in the modern era, and the clear harm that would come to the speech of billions should it no longer exist, some lawmakers are considering rolling back the law's protections in ways that are poorly informed and even dangerous. One lawmaker has introduced a bill that would require a federal agency to decide whether a platform complies with a “political neutrality” requirement as a precondition for immunity. Others have proposed revoking platforms’ immunity when moderating “objectionable” content while retaining immunity for moderating “unlawful content” in good faith.

Setting aside the obvious constitutional problems with a government entity judging the political content of speech, or dictating the censorship decisions of online platforms, these proposals would make it far less palatable for online services to host others’ speech at all. If enacted, the internet’s marketplace of ideas — and our freedom to communicate online — would suffer.

The ACLU has continued to fight for Section 230 to protect people’s ability to create and communicate online. We have encouraged courts to interpret the law’s immunity provisions to enable as much free expression online as possible under U.S. law. We will remain vigilant in ensuring that the internet remains a place for self-expression and creation for all. We urge Members of Congress, as they examine CDA 230’s role in free expression, to do the same.

Kate Ruane, Senior Legislative Counsel, ACLU &
Jennifer Stisa Granick, Surveillance and Cybersecurity Counsel, ACLU

Tuesday, October 15, 2019 - 8:30pm


The state of California just made it clear: Face recognition surveillance isn’t inevitable. We can — and should — fight hard to protect our communities from this dystopian technology.

Building on San Francisco’s first-of-its-kind ban on government face recognition, California this week enacted a landmark law that blocks police from using body cameras for spying on the public. The state-wide law keeps thousands of body cameras used by police officers from being transformed into roving surveillance devices that track our faces, voices, and even the unique way we walk. Importantly, the law ensures that body cameras, which were promised to communities as a tool for officer accountability, cannot be twisted into surveillance systems to be used against communities. 

The rise of face and other biometric surveillance technologies gives governments an unprecedented power to track, classify, and discriminate against people based on their most personal, innate features. This risks forever altering the balance of power between the people and their government, and undermines bedrock democratic values of freedom and privacy.

The threat is no longer science fiction: right now, governments abroad are using this technology to target and oppress marginalized populations. Federal and local agencies in the United States are rushing to deploy these systems, too.

As police agencies and companies in the United States team up to rapidly and recklessly supercharge the surveillance state with face recognition, California is sending a powerful warning: We can — and will — defend our privacy and civil liberties.

California’s law is part of a larger and growing movement to prevent the spread of ubiquitous face surveillance. In May, San Francisco became the first city to prohibit the government acquisition and use of face recognition technology. Since then, Oakland and Berkeley, California, and Somerville and Cambridge, Massachusetts, have introduced or adopted bans of their own. And in Detroit and New York City, activists are fighting to prevent the face surveillance of Black communities, tenants, and school children.

These towns and cities are joined by legislatures in Massachusetts, Washington, New York, and Michigan that have introduced state-wide legislation strictly limiting face recognition surveillance. And in Washington D.C., members of Congress on both sides of the aisle are now considering legislation to rein in this technology and have held a series of hearings to investigate its use.

Even companies and shareholders are beginning to recognize a new responsibility to act. This summer, Axon, the country’s largest body camera supplier, announced it would ban face recognition on its products for the foreseeable future. Before that, Google announced it would press pause on face recognition products for governments.

This impressive progress to bring face surveillance technology under democratic control is no accident. The ACLU’s Community Control Over Police Surveillance (CCOPS) effort is designed to ensure residents — through their local governments and elected officials — are empowered to decide if and how surveillance technologies are used, and to promote government transparency. We’ve brought together a coalition of organizations fighting for the rights of immigrants, Black people, the unhoused, LGBTQ people, criminal defense attorneys, Muslim-Americans, and so many more. Shareholders, AI researchers, and tech employees have also joined in. These campaigns find political power in their diversity.

We’ve exposed law enforcement’s quiet expansion of face surveillance into our communities. Our team has demonstrated how the technology’s numerous flaws can lead to wrongful arrests, use of force, and grave harm. We’ve explained how even perfectly accurate face surveillance technology would remain a grave threat to civil rights, enabling the automatic and invasive tracking of our private lives and undermining First Amendment-protected activity.

Community members are directly reaching out to their legislators to share their personal experiences of police misconduct and discriminatory surveillance. They’re explaining how face recognition — with its unprecedented ability to impose official power and control — will amplify those existing harms and further undermine trust in law enforcement. And they’re demanding their local leaders step up efforts to block this technology from entering their communities.

But as people and their policymakers make progress, companies like Amazon and Microsoft continue to seek profits from face recognition sales to governments. Amazon even pitched its face recognition product — called "Rekognition" — to Immigration and Customs Enforcement. And companies like Microsoft have attempted to advance laws that they claim would protect communities, but actually entrench dangerous and discriminatory uses.

Decisions about whether the government has the immense power to identify who attends protests, political rallies, church, or simply walks down the street must be made by you and your elected leaders. They should not be made by corporate executives or by police chiefs acting alone.

Our democracy gives us the power as a society to reject surveillance that is invasive, discriminatory, and wide-reaching. We will continue to use that power to create a society free of face surveillance. We hope you’ll join us in this fight.

Matt Cagle, Technology and Civil Liberties attorney, ACLU of Northern California

Friday, October 11, 2019 - 11:45am
