With deepfakes, it will become difficult to prove that anyone has ever said anything, since any video evidence of it might be fake.
Deepfakes are AI-generated videos of people doing or saying something they never did. They are typically based on existing, real footage of them, with voice and motion altered to match the new “story”.
So far, so good. This technology exists, and it makes faking videos far easier than it used to be with Hollywood-grade motion graphics and animation software. I think we just need to accept that this Pandora’s box is open and generally be wary of any video evidence, since it might have been faked.
But how does that affect legal defenses? If video evidence is used in a court case, must we then assume that it, too, might be fake?
Lawyers tried to use this as a defense in a case about a fatal Tesla “Autopilot” crash. The plaintiffs claimed that the victim’s death was caused by a belief that it would be safe to play video games while driving, because Elon Musk had allegedly claimed that Teslas were effectively self-driving; the defense suggested that the recordings of those claims might be deepfakes.
I’m not arguing about whether or not these particular videos are fake, and I’m not saying the court should accept this defense.
The interesting thing here is that it only takes one precedent where a video was actually fake to open the legal Pandora’s box. If one video was faked and this was successfully proven in court, how can we be sure that any other video evidence isn’t fake?
If the video is obviously fake (as in: there are visible artifacts or unnatural motion), that’s one thing; the precedent would then only apply to other videos that can be shown to be fake in the same way.
The more consequential notion is this: what if there’s a deepfake that is, on the surface, indistinguishable from an authentic video, but that is later proven to be fake, for example through witness testimony or forensic evidence of the creation process? This would set a precedent that no video can be trusted unless proven genuine.
But how can we prove that video evidence is genuine? Maybe with camera equipment that uses hardware-based public-private key signing in the encoding process. But most cameras don’t do that, and any alteration of the footage (or even just re-encoding) would break the signature. It’s a dead end.
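To make that idea concrete, here is a minimal sketch of what such signing could look like in software, using the Python `cryptography` package and an Ed25519 key pair. The file name and workflow are my own illustration; a real camera would keep the private key in tamper-resistant hardware and sign the encoded stream as it is produced.

```python
# Minimal sketch: signing and verifying footage with an Ed25519 key pair.
# Assumes the "cryptography" package; file name and workflow are hypothetical.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In a real camera this key would live in tamper-resistant hardware.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

footage = b"...encoded video bytes..."   # in practice: open("clip.mp4", "rb").read()
signature = private_key.sign(footage)    # stored as metadata alongside the file

# Later, anyone holding the camera's public key can check integrity.
try:
    public_key.verify(signature, footage)
    print("Signature valid: footage matches what the camera produced.")
except InvalidSignature:
    print("Signature invalid: footage was altered or re-encoded.")
```

Even in this sketch, the signature covers the exact bytes the camera produced, so any edit, crop, or re-encode breaks verification, which is exactly the limitation described above.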
I think the idea that a celebrity is more likely to be the subject of a deepfake will also become less relevant. As it gets easier and easier to create convincing deepfakes (and the technology will only improve from here), it will take just a single bad actor to incriminate anyone with fake evidence, for any reason.
So there we have it. Courts will have to work with the idea that video evidence can no longer simply be accepted without a meticulous and uninterrupted chain of proof, potentially backed by digital certificates. Just some video of an alleged act is not going to be enough.
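To picture what an “uninterrupted chain of proof” could mean technically, here is a small illustration, entirely my own and not anything courts use today. It records each handling step of a piece of evidence in a hash chain, a simpler stand-in for the digital certificates mentioned above, so that tampering with any earlier step invalidates everything after it.

```python
# Sketch of a chain-of-custody log as a hash chain (illustration only).
# Each entry commits to the evidence hash, the handler, and the previous entry,
# so changing any earlier step changes every hash that follows it.
import hashlib
import json

def entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def add_entry(chain: list, evidence_sha256: str, handler: str, action: str) -> None:
    prev = chain[-1]["hash"] if chain else "genesis"
    entry = {"evidence": evidence_sha256, "handler": handler,
             "action": action, "prev": prev}
    entry["hash"] = entry_hash(entry)
    chain.append(entry)

def verify(chain: list) -> bool:
    prev = "genesis"
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev"] != prev or entry_hash(body) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain: list = []
add_entry(chain, "ab12...", "camera", "recorded")        # hypothetical evidence hash
add_entry(chain, "ab12...", "investigator", "copied")
add_entry(chain, "ab12...", "court clerk", "archived")
print(verify(chain))  # True; altering any earlier entry makes this False
```

In practice each entry would also carry a signature from the handler; the point is only that “uninterrupted” can be made verifiable rather than taken on trust.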