
The ACLU’s first Civil Rights in the Digital Age (CRiDA) AI Summit in July brought together civil society, nonprofit, academic, and industry leaders to thoughtfully consider how to center civil rights and liberties in a constantly changing digital age.
Leaders and experts from the ACLU, Patrick J. McGovern Foundation, Hugging Face, Amnesty International, Future of Life Institute, Kapor Foundation, Mozilla Foundation, and other major organizations discussed how organizations can create an equitable and just future of AI.
Organizations Must Collaborate to Shape the Future of AI
One way organizations can develop AI responsibly, experts shared, is through partnership and collaboration. This includes working with civil rights organizations, inviting community participation, and drawing on other diverse perspectives for input on AI policies being considered for adoption.
“We need conversations about how AI supports more than just profit, but purpose. And here at the ACLU this morning, we really dug into that question,” Vilas Dhar, president and trustee of the Patrick J. McGovern Foundation, shared. “What are the institutions we have to build and support that protect all of our interests in an AI-enabled age?”
To lead by example, the ACLU convenes a cross-functional working group of experts representing many business functions within the organization, which carries out a holistic review whenever a generative AI tool is considered for adoption to determine whether the tool aligns with ACLU values. This approach ensures that innovation does not leave groups of people behind.
CRiDA experts, like Dhar, also explored why innovation should not be rushed. Technology developers and leaders must take the time to be curious, learn, and collaborate on AI systems’ design, deployment, and evaluation, and to carefully weigh critical issues such as AI’s impact on the environment and privacy. While AI is an exciting frontier, organizations can employ tools such as vendor questionnaires to identify and understand the risks of specific AI tools and to assess how well those tools align with an organization’s values around privacy, fairness, transparency, and accountability.
We Must Protect Our Privacy and Data in the Age of AI
Panelists also discussed a crucial question: What laws regulate facial recognition?
“Not enough,” Nathan Freed Wessler, deputy director of the ACLU’s Speech, Privacy, and Technology Project, replied. “In a lot of parts of the country, there's actually no law, nothing from Congress, and most states haven't acted. But there are places that are real leaders...cities have actually banned police from using face recognition technology because it's so dangerous.”
CRiDA experts also explored how today’s AI systems are trained on vast amounts of data, including personal, academic, and behavioral information, often without the consent of the individuals behind it. The threat doesn’t stop at passive data usage. AI-powered surveillance systems, whether facial recognition or predictive policing, are trained on prejudiced data and used in ways that disproportionately target communities of color, further embedding discrimination into our social reality.
AI risks not only reflecting but also intensifying the structural racism and discrimination woven into the systems around us.

ACLU CRiDA Summit panel members Ijeoma Mbamalu, Vilas Dhar, and Deborah Archer. Credit: ACLU
This only amplifies the need for transparency and accountability when adopting AI systems. In practice, transparency for organizations considering AI can mean requiring vendors to provide details on the sources of the data used to develop their systems, their measures for assessing risks of bias and discrimination, and any guardrails they have implemented to measure and address those risks. Asking these questions is not only a matter of transparency; it is also critical to maintaining equity and fairness.
When developing CRiDA, one of the ACLU’s goals was to highlight the privacy implications of modern AI systems and the vast amounts of personal data that power them. Experts and leaders across the board agreed that we all need transparency into how and when our data is used.
Ensuring AI Does Not Deepen the Digital Divide
AI has entered everyday life in a variety of ways, from the classroom to the hiring process. We understand that AI can be an asset to an organization’s work if implemented thoughtfully and responsibly. For example, if designed and governed carefully with appropriate guardrails, AI systems could support critical educational and economic opportunities. But AI systems can also have the opposite impact: when they are not designed and deployed carefully, we risk exacerbating the racial wealth gap, harming rather than helping marginalized communities.
To build a future of technology grounded in fairness and equity, CRiDA experts such as Deborah Archer, president of the ACLU, called on AI developers not only to include civil society leaders in the room when developing AI, but also to expand diversity, equity, and inclusion, invest in causes that seek to close the digital divide, and fund training that equips marginalized communities to build technology, ensuring the next generation has an equal opportunity to succeed.
“It's not just enough to say, ‘We want diverse people in the room,’ and ‘We want diverse people to do this work’ if we’re not also doing the work to make sure that all of those people have access to the education, the resources, the opportunities, and the networks that equip them to do the work and then put them in the spaces to take advantage of the opportunities,” said Archer. “So, it is connected to all the other work the ACLU is doing, and other people are doing, around diversity, equity, and inclusion.”
The ACLU has fought both in courts and in communities to address and remedy AI’s systemic harms. We must meet the moment and urge tech, political, and civil society leaders to keep civil rights at the center of AI innovation.
Policies Centering Civil Liberties and Civil Rights in the Digital Age
The future of AI depends on a commitment from our leaders to ensure that AI aligns with the core principles and liberties that our Constitution envisions. In July, Congress passed H.R. 1, the so-called One Big Beautiful Bill Act. Earlier versions of the bill included a moratorium on state and local laws regulating AI. Luckily, with the support of everyone who called on Congress, the provision was stripped before final passage.
On the heels of this vote, experts and leaders at the ACLU’s CRiDA highlighted how, as AI gains power, now is the time to push for guardrails protecting civil liberties.
“Congress is trying to stop states from passing legislation and other regulations to protect us from AI through a tool called preemption,” Cody Venzke, senior policy counsel with the ACLU National Political Advocacy Division, shared with leaders. “You can keep it out of any future legislation by reaching out to your representatives and senators and telling them: No AI moratoriums and no preemption of state laws.”
The ACLU will continue to fight for responsible AI design and deployment, both in the courtroom and in Congress. Together with peer organizations, innovators, and advocates, we will continue to use our collective power to protect the digital rights of everyone, especially people from marginalized communities. It must be a priority for all of us, including the institutions and developers building these technologies.