Daniel Kahn Gillmor, Senior Staff Technologist, ACLU Speech, Privacy, and Technology Project

Fifteen years ago, the idea that your digital messages could automatically self-destruct seemed unfathomable – and a bad idea. Once your message is on someone else’s machine, you simply cannot guarantee that it will be destroyed when you want it to be. Fooling people into thinking they have more security and privacy than they really do can put them in harm’s way.

Today, however, modern messaging apps have built exactly this feature. Signal Private Messenger, WhatsApp, SimpleX Chat, DeltaChat, and Facebook Messenger all have a disappearing messages function; Wire has self-deleting messages; Telegram’s Secret Chats offer self-destructing messages; and the list keeps growing. These features establish a time frame – from minutes to hours to weeks – after which all the messages in a conversation are supposed to disappear from the devices of every participant.

From a security point of view, it’s impossible to guarantee deletion in this way. Are these products all lying or deluded, then? No. These mechanisms are actually a great step forward for the public conversation — as long as users are aware of their limitations. Rather than providing some impossible perfection, they automate and normalize agreements about how long to keep records of your conversations with other people.

Disappearing Messages Cannot Beat Cheaters

Why are these mechanisms inherently unreliable? Digital tools fundamentally work by making copies. You don’t actually “send” an instant message from one device to another, even though that’s how we talk about it. Rather, your device copies the message into the network, and devices in the network make more copies of the message until a copy finally appears on the destination device. Modern instant messaging services encrypt the message before sending out copies, so the intermediate devices can’t see what is in the message. But the recipient’s device will decrypt the message so they can read it. This is called end-to-end encryption, and it has become a fundamental part of today’s communication systems.
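To make the mechanics a little more concrete, here is a minimal sketch of that encrypt-before-sending step in Python, using the PyNaCl library. It is only an illustration of the principle: real messengers such as Signal use far more elaborate protocols, and the key handling shown here (both parties’ keys generated in one script) is simplified for readability.

```python
# A minimal sketch of the end-to-end idea using the PyNaCl library.
# Real messengers use far more elaborate protocols; this only shows
# that relaying servers handle ciphertext, not readable messages.
from nacl.public import PrivateKey, Box

# Each party generates a keypair; only the public halves are shared.
alice_secret = PrivateKey.generate()
bob_secret = PrivateKey.generate()

# Alice encrypts to Bob's public key before any copy leaves her device.
sending_box = Box(alice_secret, bob_secret.public_key)
ciphertext = sending_box.encrypt(b"meet at 6pm")

# Every server along the way relays (and may retain) only this ciphertext.
print(ciphertext.hex())

# Bob's device holds the matching secret key, so only it can decrypt.
receiving_box = Box(bob_secret, alice_secret.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at 6pm'
```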

End-to-end encryption means that when you send a message, you don’t have to worry about anybody accessing the message between your device and your recipients’ – though your device itself could be vulnerable, which is a whole other cybersecurity issue. But if you want to block the recipient from retaining a copy of the “disappearing” message, you’re out of luck. This is simply how the universe works: the sender of a message can’t actually control what happens when the recipient views their copy of the message.

To begin with, the recipient can always take a screenshot or make a backup of their app’s data. Also, even if the recipient’s device is somehow running completely locked-down software that prohibits screenshots and backups, the recipient can always point another camera at their screen when the disappearing message is displayed, or use another microphone to record a “disappearing” voice note. This is known as the “analog hole” — meaning that eventually digital data has to be translated into sights and sounds that we humans can perceive in the non-digital world and those sights and sounds can always be recorded.

Disappearing Messages Are Flawed. We Don’t Actually Want Perfection

It’s important to remember that if the disappearing messages feature were actually perfect, we might not be too happy. Imagine if someone could send you an abusive message and know that you could never show it to someone else who might be able to help defend you.

Freedom, autonomy, and responsibility are good reasons why the recipient of any message should be in full control over their own endpoint, even if that means they might make a non-disappearing copy of an ostensibly disappearing message. In the case of an abusive disappearing message, and probably in other cases, the person “cheating” the system is actually in the right if they want to retain the message for non-nefarious reasons.

Disappearing Messages Normalize and Automate Data Destruction Policies

So if disappearing messages can’t reliably defeat someone determined to cheat the system, why are they still a great advance in public communications?

Before digital communications, messages were much less likely to stick around forever. There was often only a single copy of a message. In the past, a letter sent by the post office didn’t stay with the sender unless they deliberately made a copy first. In-person conversations also vanished as soon as they were spoken. As our society has digitized, however, more and more of our daily interactions leave a trail. Law enforcement agencies the world over seem to think that every human communication can and should now be permanently available to them whenever they’re interested. And old data left lying around on a device can also be misused by criminals, domestic abusers, or spies if they manage to get access to the device.

But if data is truly destroyed, it can’t be compromised, even if it was once available on your personal device or the device of the person you were talking with. So one way to reduce the scope of this overreach is with a data retention/destruction policy, such as the disappearing messages feature. It’s possible to carry out such a policy without a disappearing messages feature: the people involved in a conversation could discuss and agree on when messages should be deleted, and check in with each other regularly to make sure everyone remembers to go back and delete the old messages. But negotiating such a policy among all participants in a chat can be difficult work. People chat to have a conversation about some specific topic, not about the conversation itself.

Automation Helps Us Make and Keep Promises

And even if you manage to get agreement from everyone in a chat on a data-destruction policy, getting people to follow through by actually deleting messages is a serious logistical challenge. The best time to delete a conversation is when it’s no longer important, and almost by definition, at that point the busy participants are already thinking about something else. But the disappearing messages feature delegates the task of following through to machinery that doesn’t get bored or distracted. This frees up human attention and energy for current problems rather than old commitments.
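What that machinery does is not mysterious. Here is a rough Python sketch of the kind of retention sweep a messaging app can run quietly on each device. The table layout and the one-week timer are hypothetical, and real apps track per-conversation timers and differ in exactly when the countdown starts; the point is only that the cleanup happens without anyone having to remember it.

```python
# A sketch of an on-device retention sweep. The schema and timer value
# are hypothetical; real apps track per-conversation timers.
import sqlite3
import time

RETENTION_SECONDS = 7 * 24 * 60 * 60  # the agreed policy: one week

db = sqlite3.connect("messages.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS messages "
    "(id INTEGER PRIMARY KEY, body TEXT, sent_at REAL)"
)

def sweep_expired(conn, retention_seconds):
    """Delete every message older than the agreed retention window."""
    cutoff = time.time() - retention_seconds
    conn.execute("DELETE FROM messages WHERE sent_at < ?", (cutoff,))
    conn.commit()

# Run periodically (e.g. on app launch and every few minutes thereafter)
# so no participant has to remember to clean up by hand.
sweep_expired(db, RETENTION_SECONDS)
```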

A disappearing-messages feature in a messaging app serves two great purposes: 1) it normalizes and simplifies the act of agreeing on a data destruction policy; and 2) it helps honest participants keep their word. If all it did was help people negotiate an agreement on a data-destruction policy, that would be a win, but it wouldn’t be enough. Busy people need to find time to act on their agreements. Even the most well-meaning person can get distracted by other commitments and fail to follow up on what they had intended to do. But a tool with a disappearing-messages feature will follow through automatically, and the participants don’t need to think about it once the decision has been made.

These policies won’t stop someone who wants to break their promise about data deletion, and sometimes the automation may even fail inadvertently. For example, someone might create a backup of their messages in a way that accidentally retains a message set for automatic deletion. But we know what it’s like when someone reneges on a commitment, or simply fails to follow through, and we have human ways of dealing with those scenarios.

These impossible, imperfect tools provide a healthy counterbalance to the disturbing trend of ever-increasing data retention. If you haven’t tried using them yet, now is a great time to start.


Kia Hamadanchy, Senior Policy Counsel, ACLU National Political Advocacy Division

Hina Shamsi, Director, ACLU National Security Project

President-elect Donald Trump has nominated the Fox News Channel host Pete Hegseth to lead the Department of Defense (DOD). If confirmed, the military veteran will lead the nation’s armed forces in what will be his first appointment to a political office.

Hegseth was commissioned as an infantry officer in the Army National Guard and he served in Afghanistan and Iraq after 9/11. Prior to his stint as a talk show host on Fox News, he led the nonprofits Vets for Freedom and Concerned Veterans for America.

The ACLU has spent more than 100 years holding power accountable. While as a matter of policy the ACLU does not endorse or oppose nominees for cabinet-level positions, it does examine and publicize nominees’ civil liberties records. Given the power and influence defense officials have over U.S. national security policy and decision-making, a president’s choice of secretary of defense has serious consequences for civil liberties at home and abroad. Ahead of the January Senate confirmation hearings, we analyze Hegseth’s record on key civil liberties issues and urge Congress to carefully consider the impact his leadership would have on our rights.

The Department of Defense on Civil Liberties

As the largest U.S. government agency, with the largest discretionary budget, the DOD oversees all U.S. military operations. The secretary of defense is responsible for ensuring troops comply with all applicable laws, including the laws of war. Importantly, the secretary must comply with both the Constitution, which requires that Congress alone make the ultimate decision to go to war or use force, except to repel a sudden attack (and then only for a limited period), and the War Powers Resolution, which Congress intended to reflect the Constitution’s checks and balances. The ACLU has long advocated against unlawful uses of force abroad and for adherence to our system of checks and balances and to international humanitarian and human rights law.

With blatant disregard for the appropriate role of the military on American soil, Trump has stated on numerous occasions that he plans to use military troops to help carry out his mass deportation plans or to suppress protest. Deployment of troops for these purposes would be an abuse of power. As secretary of defense, Hegseth could be called upon to support or carry out these extreme and unprecedented actions.

The DOD is the largest employer in the U.S., with nearly 1 million civilian employees and more than 2 million military personnel. Whether it’s protecting the rights of LGBTQ servicemembers and their families, ensuring that immigrant service members are given the expedited citizenship they may be entitled to, or demanding that parents be allowed to enroll in and graduate from military service academies, it is vital for the DOD to protect the civil rights and liberties of its employees and comply with the rule of law in serving the American people writ large.

On the Record: Where Hegseth Stands

Hegseth has a long record of extremely concerning views on a variety of civil liberties issues related to the military and U.S. national security policy. His positions include:

  1. He has excused war crimes. Disregarding the objections of senior defense officials, he encouraged Trump to pardon three U.S. servicemen accused, or convicted, of war crimes. Trump ultimately pardoned all three men.
  2. He has supported overbroad claims of presidential authority to use lethal force without congressional authorization. He not only supported the Trump administration’s lethal strike against Qassem Soleimani, leader of Iran's Islamic Revolutionary Guard Corps-Qods Force, he also pushed for Trump to bomb cultural sites in Iran, which would have contravened the laws of war. Hegseth has also suggested that the U.S. use the military against Mexico’s drug cartels.
  3. He has supported using the military to suppress protests. In 2020, he supported sending the military to U.S. cities, like Seattle, to suppress racial justice protests.
  4. He has opposed efforts to fight discrimination in the military. Hegseth has stated that, “any general that was involved, general, admiral, whatever, that was involved in any of the DEI, woke s--t has got to go.” In reference to the current chairman of the Joint Chiefs of Staff, who is Black, Hegseth wrote, “Take it to the racist bank: black troops, at all levels, will be promoted simply based on their race. Some will be qualified; some will not be.”
  5. He recently shifted his views on women serving in combat. In November, he said he opposed women in combat, and used gender stereotypes to make his case. He stated, “I’m straight up just saying we should not have women in combat roles. It hasn’t made us more effective. Hasn’t made us more lethal. Has made fighting more complicated.” But after meeting with several women senators in December, he said “we support all women in our military today, . . . combat included.”
  6. He has also opposed medical care for transgender soldiers. He stated that transgender soldiers are “not deployable” because they are “reliant on chemicals” and referred to discussions of transgender issues in the military as “trans lunacy.”
  7. He has made virulently anti-Muslim statements. He asserted that Muslim communities in America represent “an existential threat” to the country and repeated other vitriolic and hateful stereotypes about Muslims, who already face discrimination in the U.S., especially by national security agencies.

Finally, credible allegations exist that Hegseth has engaged in sexual misconduct, and the Senate must investigate the matter further before advancing his nomination. Given longstanding concerns regarding sexual assault in the military and the statements Hegseth has made regarding the role of women in combat, these allegations are directly relevant to his nomination as secretary of defense.

Commitments the ACLU is Urging Senators to Demand at Hegseth’s Confirmation Hearing:

Based on his track record, the ACLU is concerned about how Hegseth would use the DOD’s vast power and resources, and about the impact his leadership would have on our civil liberties and civil rights. At his confirmation hearing, we’re urging senators to ask Hegseth:

  1. When the framers drafted the Constitution, they wanted to ensure the clear separation of the civilian government from a nonpolitical, nonpartisan military. The military should have no role to play in mass deportation or the suppression of protest; in fact, we condemn other countries that send in troops to break up protests or enforce civil laws. Will you pledge not to deploy the military to intimidate or use force against protesters in American cities? Will you pledge not to deploy troops to carry out civilian law enforcement functions on American soil, which could place them at risk of violating criminal law?
  2. In 2015, 78 Senators voted to ensure that this country never again engages in torture. Do you agree to support and adhere to that bipartisan pledge?
  3. Adherence to the rule of law, including the laws of war, is critical for U.S. service members who rely on the secretary of defense to ensure they are not placed at risk of committing unlawful actions. Will you ensure that the DOD conforms to the checks and balances enshrined in the Constitution and acts only as authorized by Congress, as well as by international humanitarian law?
  4. Will you support LGBTQ service members continuing to serve in the military, and also provide health care, including reproductive health care and gender-affirming care, for all eligible service members and their families?
  5. In the space of less than two months, you went from arguing, “I’m straight up just saying we should not have women in combat roles,” to later saying, “we support all women in our military today, . . . combat included.” Will you commit now to continuing all of the Department’s current policies and practices that support women serving in combat and in combat positions?


Daniel Kahn Gillmor, Senior Staff Technologist, ACLU Speech, Privacy, and Technology Project

Jay Stanley, Senior Policy Analyst, ACLU Speech, Privacy, and Technology Project

There is widespread concern today about the use of generative AI and deepfakes to create fake videos that can manipulate and deceive people. Many are asking: is there any way that technology can help solve this problem by allowing us to confidently establish whether an image or video has been altered? It is not an easy task, but a number of techniques have been proposed. They include, most prominently, a system of “content authentication” supported by a number of big tech firms and discussed by the Bipartisan House Task Force Report on AI released this month. The ACLU has doubts about whether these techniques will be effective, and serious concerns about their potential harmful effects.

There are a variety of interesting techniques for detecting altered images, including frames from videos, such as statistical analyses of discontinuities in the brightness, tone, and other elements of pixels. The problem is that any tool smart enough to identify features of a video that are characteristic of fakes can probably also be used to erase those features and make a better fake. The result is an arms race between fakers and fake detectors that makes it hard to know whether an image has been maliciously tampered with. Some have predicted that efforts to identify AI-generated material by analyzing its content are doomed. This has led to a number of efforts to prove the authenticity of digital media by another means: cryptography. In particular, many of these proposals are built on a technique called “digital signatures.”

Using Cryptography to Prove Authenticity

If you take a digital file — a photograph, video, book, or other piece of data — and digitally process or “sign” it with a secret cryptographic “key,” the output is a very large number that represents a digital signature. If you change a single bit in the file, the digital signature is invalidated. That is a powerful technique, because it lets you prove that two documents are identical — or not — down to every last one or zero, even in a file that has billions of bits, like a video.

Under what is known as public key cryptography, the secret “signing key” used to sign the file has a mathematically linked “verification key” that its owner publishes. That verification key matches only signatures that have been made with the corresponding signing key, so if the signature is valid, the verifier knows with ironclad mathematical certainty that the file was signed by whoever holds that signing key, and that not a single bit has been changed since.
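Here is a small sketch of that sign-and-verify cycle in Python, using Ed25519 signatures from the widely used cryptography library. The keys and the stand-in “video” bytes are illustrative; in the proposals discussed below, the signing key would be held inside a camera or editing tool rather than generated in a script.

```python
# A sketch of the sign-and-verify cycle described above, using Ed25519.
# Keys and data here are illustrative; a camera vendor would keep the
# signing key inside dedicated hardware.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()        # stays secret
verification_key = signing_key.public_key()       # published openly

video = b"...billions of bits of video data..."
signature = signing_key.sign(video)

# The verification key matches only signatures made with its signing key.
verification_key.verify(signature, video)         # passes silently

# Flip a single bit and the same signature no longer verifies.
tampered = bytes([video[0] ^ 0x01]) + video[1:]
try:
    verification_key.verify(signature, tampered)
except InvalidSignature:
    print("file was modified after signing")
```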

Given these techniques, many people have thought that if you can just digitally sign a photo or video when it’s taken (ideally in the camera itself) and store that digital signature somewhere where it can’t be lost or erased, like a blockchain, then later on you can prove that the imagery hasn’t been tampered with since it was created. Proponents want to extend these systems to cover editing as well as cameras, so that if someone adjusts an image using a photo or video editor the file’s provenance is retained along with a record of whatever changes were made to the original, provided “secure” software was used to make those changes.

For example, suppose you are standing on a corner and you see a police officer using force against someone. You take out your camera and begin recording. When the video is complete, the file is digitally signed using the secret signing key embedded deep within your camera’s chips by its manufacturer. You then go home and, before posting it online, use software to edit out a part of the video that identifies you. The manufacturer of the video editing software likewise has an embedded secret key that it uses to record the editing steps that you made, embed them in the file, and digitally sign the new file. Later, according to the concept, someone who sees your video online can use the manufacturers’ public verification keys to prove that your video came straight from the camera, and wasn’t altered in any way except for the editing steps you made. If the digital signatures were posted in a non-modifiable place like a blockchain, you might also be able to prove that the file was created at least as long ago as the signatures were placed in the public record.
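A stripped-down sketch of that chain of custody might look like the following. The manifest format here is invented purely for illustration; real industry proposals such as C2PA define far richer, standardized manifests, but the underlying pattern of signing the original capture and then signing a record of edits is the same.

```python
# A simplified sketch of the chained "provenance" idea described above.
# The manifest format is invented for illustration only.
import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()   # embedded by the camera vendor
editor_key = Ed25519PrivateKey.generate()   # embedded in the editing tool

original = b"raw video bytes straight from the sensor"
camera_signature = camera_key.sign(original)

# The editor trims the clip, records what it did, and signs the result
# together with the camera's original attestation.
edited = original[10:]                      # e.g. cut the identifying frames
manifest = {
    "original_sha256": hashlib.sha256(original).hexdigest(),
    "camera_signature": camera_signature.hex(),
    "edits": ["trim: removed first 10 bytes"],
    "edited_sha256": hashlib.sha256(edited).hexdigest(),
}
manifest_bytes = json.dumps(manifest, sort_keys=True).encode()
editor_signature = editor_key.sign(manifest_bytes)

# A verifier with both public keys can check the chain: the manifest is
# intact, and the edited file matches the hash the editor signed.
editor_key.public_key().verify(editor_signature, manifest_bytes)
assert hashlib.sha256(edited).hexdigest() == manifest["edited_sha256"]
```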

Content Authentication Schemes Are Flawed

The ACLU is not convinced by these “content authentication” ideas. In fact, we’re worried that such a system could have pernicious effects on freedom.

The different varieties of these schemes for content authentication share similar flaws. One is that such schemes may amount to a technically-enforced oligopoly on journalistic media. In a world where these technologies are standard and expected, any media lacking such a credential would be flagged as “untrusted.” These schemes establish a set of cryptographic authorities that get to decide what is “trustworthy” or “authentic.” Imagine that you are a media consumer or newspaper editor in such a world. You receive a piece of media that has been digitally signed by an upstart image editing program that a creative kid wrote at home. How do you know whether you can trust that kid’s signature — that they’ll only use it to sign authentic media, and that they’ll keep their private signing key secret so that others can’t digitally sign fake media with it?

The result is that you only end up trusting tightly controlled platforms operated by big vendors like Adobe, Microsoft, and Apple. If this scheme works, you’ll only get the badge of journalistic authenticity if you use Microsoft or Adobe.

Furthermore, if “trusted” editing is only doable on cloud apps, or on devices under the full control of a group like Adobe, what happens to the privacy of the photographer or editor? If you have a recording of police brutality, for example, you may want to ask the police for their story about what happened before you reveal your media, to determine whether the police will lie. But if you edit your media on a platform controlled by a company that regularly gives in to law enforcement requests, they might well get access to your media before you are willing to release it.

Locking down hardware and software chains may help authenticate some media, but would not be good for freedom. It would pose severe threats to who gets to easily share their stories and lived experiences. If you live in a developing country or a low-income neighborhood in the U.S., for example, and don’t have or can’t afford access to the latest authentication-enabled devices and editing tools, will you find that your video of the authorities carrying out abuses will be dismissed as untrusted?

It’s not even certain that these schemes would work to prevent an untrustworthy piece of media from being marked as “trusted.” Even a locked-down technology chain can fail against a dedicated adversary. For example:

  • Sensors in the camera could be tricked, for example by spoofing GPS signals to make the “secure” hardware attest that the location where photography took place was somewhere other than where it really was.
  • Secret signing keys could be extracted from “secure” camera hardware. Once the keys are extracted, they can be used to create signatures over data that did not actually originate with that camera, but can still be verified with the corresponding verification key.
  • Editing tools or cloud-based editing platforms could potentially be tricked into signing material that they didn't intend to sign, either by hacks on the services or infrastructure that support those tools, or by exploitation of vulnerabilities in the tools themselves.
  • Synthetic data could be laundered through the “analog hole.” For example, a malicious actor could generate a fake video, which would not have any provenance information, and play it back on a high-resolution monitor. They then set up an authentication-capable camera so that the monitor fills the camera’s field of view and hit “record.” The video produced by the camera will now have “authentic” provenance information, even though the scene itself did not exist outside of the screen.
  • Cryptographic signature schemes have often proven to be far less secure than people think, often because of implementation problems or because of how humans interpret the signatures.

Another commonly proposed approach to helping people identify the provenance of digital files is the opposite of the scheme described above: instead of trying to prove that authentic content is unmodified, prove that synthetic or modified content is machine-generated or altered. To do this, these schemes would require every AI image-generation tool to register each “non-authentic” photo and video it produces with a digital signature or a watermark. People could then check whether a photo was created by AI rather than a camera.

There are numerous problems with this concept. People can strip digital signatures, alter media enough to defeat comparison against a registry, or remove watermarks by changing parts of the media. They can create a fake photo manually in image editing software like Photoshop, or with their own AI, which is likely to become increasingly feasible as AI technology is democratized. It’s also unclear how you could force every large corporate AI image generator to participate in such a scheme.
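As a small illustration of how fragile label-based approaches can be, the sketch below uses the Pillow imaging library to copy only the pixels of a hypothetical labeled PNG into a fresh file, leaving behind whatever metadata tag traveled with it. Watermarks embedded in the pixels themselves are harder to remove, but cropping, rescaling, and re-compression often degrade them as well.

```python
# A sketch of how easily metadata-based labels disappear: re-encoding an
# image's pixels into a fresh file drops its embedded text chunks. The
# file names and the "ai_generated" tag are hypothetical.
from PIL import Image

original = Image.open("generated.png")
print(original.info)          # may contain e.g. {"ai_generated": "true"}

# Copy only the pixel data into a new image and save it: the label is gone.
stripped = Image.new(original.mode, original.size)
stripped.putdata(list(original.getdata()))
stripped.save("relabeled.png")

print(Image.open("relabeled.png").info)   # typically empty
```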

A Human Problem, Not a Technology Problem

Ultimately, no digital provenance mechanism will solve the problem of false and misleading content, disinformation, or the fact that a certain proportion of the population is deceived by it. Even content that has been formally authenticated under such a scheme can be used to warp perceptions of reality. No such scheme will control how people decide what is filmed or photographed, what media is released, and how it is edited and framed. Choosing focus and framing to highlight the most important parts is the ancient essence of storytelling.

The believability of digital media will most likely continue to rely on the same factors that storytelling always has: social context. What we have always done with digital media, as with so many other things, is judge the authenticity of images based on the totality of the human circumstances surrounding them. Where did the media come from? Who posted it, or is otherwise presenting it, and when? What is their credibility; do they have any incentive to falsify it? How fundamentally believable is the content of the photo or video? Is anybody disputing its authenticity? The fact that many people are bad at making such judgments is not a problem that technology can solve.

Photo-editing software, such as Photoshop, has been with us for decades, yet newspapers still print photographs on their front page, and prosecutors and defense counsel still use them in trials, largely because people rely on social factors such as these. It is far from clear that the expansion of democratized software for making fakes from still photos to videos will fundamentally change this dynamic — or that technology can replace the complex networks of social trust and judgment by which we judge the authenticity of most media.

Voters hit with deepfakes for the first time (such as a fake President Joe Biden telling people not to vote in a primary) may well fall for such a trick. But they will only encounter such a trick for the first time once. After that they will begin to adjust. Much of the hand-wringing about deepfakes fails to account for the fact that people can and will take the new reality into account in judging what they see and hear.

If many people continue to be deceived by such tricks in the future, as a certain number are now, then a better solution to such a problem would be increased investments in public education and media literacy. No technological scheme will fix the age-old problem of some people falling for propaganda and disinformation, or replace the human factors that are the real cure.
