Episode 8

November 04, 2024

00:12:27

AI Deepfakes in the Legal Context: An Introduction

Guardify Real Evidence Podcast

Show Notes

In this episode of the Real Evidence Podcast, we explore the emerging challenges of AI deepfakes in the legal context. Drawing from insights by legal scholars, we examine how manipulated audio and video evidence is already impacting courtrooms today. From high-profile cases to everyday legal proceedings, we discuss the authentication challenges facing judges, lawyers, and jurors. 

Join us as we explore practical solutions being discussed, including emerging detection technologies, forensic collaboration, and legislative responses to this growing topic in judicial proceedings. Whether you're a legal professional or interested in the intersection of AI and justice, this episode offers crucial insights into one of today's most pressing legal technology challenges.

Don't forget to rate the podcast and subscribe to stay updated on upcoming episodes or visit our website for more content at guardify.com/real-evidence-podcast. Or watch it on YouTube: https://youtu.be/wqAUUhH8WHM

Sources referenced: 


Episode Transcript

[00:00:02] Hey, everyone. Welcome back to another episode of the Real Evidence podcast. This is a podcast where we talk about the digital evidence landscape and how people, processes, and technology work together towards restorative justice. I'm your host, Myron, and today we get to explore the topic of AI deepfakes in the legal context. This is a topic that we've done some research on and want to come back to in future episodes. There's a lot we want to cover related specifically to AI in legal tech, so we'll have some guests on to bring different perspectives in. This is more of an intro episode on AI deepfakes in particular, and then some of the challenges and implications of what that means in the legal context. Just a quick disclaimer: we're not professionals in the legal context. This is not something where we're looking to talk about how this impacts the Federal Rules of Evidence or anything like that. This is just an introduction if you're somebody who is curious about, one, what AI deepfakes are, and two, what some of the known challenges already out there are as they relate to our legal system and AI deepfakes. So with that, let's dive right in. Let's start with: what is an AI deepfake? As the name implies, it is powered by AI. A deepfake is a recording that has been convincingly altered or manipulated to misrepresent someone as doing or saying something that they didn't actually do or say. You might see these more and more. We're seeing them pop up in social media channels, and we're seeing them pop up across other media platforms. One example that I can think of recently is post-game interviews with sports players and coaches.
Again, this is stuff where they're up there speaking, and somebody has taken that, run it through some sort of deepfake technology, and come out on the other end with a video of that person saying something that they didn't say. And sometimes it is kind of comical. But as this relates to the legal context, before we dive into the challenges, I think one thing that stands out to me in doing some of this research is that it is sometimes approached as a futuristic problem. The reality is there have actually already been cases tried that are the result of issues related to AI deepfakes. [00:02:29] So that'll be something we'll talk about in the future in terms of specific examples. But there is a great paper written by Rebecca Delfino. She's an associate dean at Loyola Law School in LA. I'll cite that paper and give you the link, so if you want to read up on it a little bit more, you can. She has a great way of framing out the types of cases, threats, and challenges we're going to see as a result of AI deepfakes. So I'm just going to quote her directly, and I think this helps frame out what those challenges are. She says, quote, the challenges that deepfakes pose to legal proceedings are, one, proving whether audiovisual evidence is genuine or fake, which sounds simple, but how do you prove it? [00:03:14] Number two, responding to claims that genuine evidence is a deepfake. [00:03:19] And number three, addressing a growing distrust and doubt among jurors over the authenticity of audiovisual evidence. In addition to those three challenges that Delfino points out, I think it's important to think of other situations, things like evidence tampering and identity theft. There are areas across this space that haven't always been attributed to AI deepfakes but take on a whole new form when you're talking about something that's actually been faked. So this is something that impacts today. It's something that's out there today, and it's being thought about. Again, this is not a new topic if you're in the space. So with that, let's talk a little bit about the legal implications and concerns. These are high-level, sort of meta themes to think about. One is judicial impacts. Coming out of those challenges that Delfino talks about, one of these could be undermining the integrity of legal proceedings and creating distrust. Again, what I found interesting in doing some of this research is that the problem AI deepfakes bring to the table isn't actually a novel problem. You can go back to when video was first introduced into court as a type of evidence that was accepted, or even back to photos being a type of evidence that was accepted. And to be fair, any one of those can still be faked in some form, and that has been a problem that has existed. I think where some of this starts to shift when we get into AI deepfakes is, one, just how convincing an AI deepfake can be, to the point where any normal person looking at it really can't tell the difference. It's very convincing when a voice and an image of a specific scenario are put together. So again, not a novel problem, but something that is, in and of itself, going to be very unique and very difficult to detect. And that, in and of itself, creates this distrust in the legal system. The second is potential misuse. This would be things like the threat of false confessions, fake alibis, fabricated audio and video recordings, or wrongfully accusing or wrongfully exonerating individuals. [00:05:36] And another implication is going to be the legislative responses.
And I think this is one where, again, we're no experts, but as I've done some research on this, and I think Rebecca Delfino does a great job digging deep on this, and I'll put in a few other examples where you can dig a little deeper as well, it's really not obvious that things are settled. There has been movement, but really it's not settled. And so legislation is changing, and it will need to continue to change in order to support what we're going to see in the legal systems. One example of that might be things like the need for digital watermarking: the need to ensure that, from a data perspective on a specific file, not only can you go back to where the file originated, but there are attributes of that file that can be used to say, has it ever been tampered with? Is there a different version of this file anywhere? Again, some of that we have today, but this will, I think, take on a completely different form. And so with that, we might get into this last phase of thinking about AI deepfakes in the legal context, and that's going to be: what are some of the thoughts about combating this? What are the things being put forth by different folks in the space? Legislation is one, but what else is being considered when it comes to combating AI deepfakes in the legal context? There are really four big themes here that we'll talk about in a little more depth. The first is just education. [00:07:12] How can we ensure that people are a little bit more aware of what AI technology is? [00:07:18] AI is a broad term, so it helps to know the difference, or at least know what the different types of things are that are generated by machine learning versus large language models versus other types of AI tech stacks.
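As a concrete sketch of the file-attribute idea above: one common building block for answering "has this file ever been tampered with?" is a cryptographic hash recorded when the file is first taken in, then re-checked later. This is a minimal Python illustration of that general technique, not Guardify's actual implementation; the filename and file contents are hypothetical, and real evidence systems layer chain-of-custody metadata on top of this.

```python
# Minimal sketch: detect byte-level changes to a file by comparing its
# SHA-256 digest against the digest recorded at intake.
import hashlib
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_integrity(path: Path, recorded_digest: str) -> bool:
    """True only if the file's current digest matches the one recorded at intake."""
    return sha256_of_file(path) == recorded_digest

# Hypothetical evidence file and intake step.
evidence = Path("interview_recording.mp4")
evidence.write_bytes(b"original recording bytes")
digest_at_intake = sha256_of_file(evidence)

# Later: an unmodified file verifies; any byte-level change does not.
assert verify_integrity(evidence, digest_at_intake)
evidence.write_bytes(b"tampered recording bytes")
assert not verify_integrity(evidence, digest_at_intake)
```

Note that a hash only proves the bytes changed, not who changed them or why; that is where the provenance and watermarking ideas discussed above come in.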
And so there's going to be this education wave, and I wouldn't be surprised if it's already occurring in some areas. If you're listening to this and you're aware of it, this is a topic where we want to invite a lot of conversation in, so please let us know: throw in a comment, reach out to us on our website. With that education comes adoption. And we are actually seeing this, especially in the civil space in law, or excuse me, in prosecution offices. There are a number of technology companies entering this space with an AI-powered product or software to come alongside civil attorneys and help them leverage AI in more useful ways. But with adopting AI, think of things like ChatGPT, some of the stuff that's off the shelf and easy to use, that just makes your life a little bit easier. The more those tools are leveraged, the more those tools are implemented into your workflows, the more people begin to understand and detect what's real, what's not real, and what the technology is really capable of. So that's one big one. The second one I actually think is probably one of the most important, which is collaboration. So again, among the ways people are thinking about combating AI deepfakes in legal contexts, collaboration is huge. This means collaboration between the various stakeholders working on a case: the MDTs (multidisciplinary teams) and everybody involved, forensic experts, and expert witnesses. You're also going to start to see, I think, a lot more deepfake detection companies, companies who specialize in being able to detect if something is a deepfake, coming in as part of that collaboration. And then digital evidence management companies like Guardify.
So a lot of what we're thinking about as we look to leverage AI in the right ways is, one, that those tools are always human-centric, and two, that they're accurate, they're real, and they don't in any way detract from the actual evidentiary value of something in our platform. On that note, this ties in with what I just talked about with those companies, but the third thing is going to be AI-based detection software. Let's go a little bit deeper on that. There are a number of companies starting up in this space, and there are some who have existed for a long time, including the companies who, you might say, helped build the foundation for how deepfake technology became commercially available. But you're going to see more specialized companies in this realm popping up, and some of them are already doing this, saying, hey, we're actually specialized in being able to leverage software to detect whether a particular video or audio file is a deepfake or not. [00:10:11] Lastly, there's going to be some degree of emerging standards. Again, these four points are ones that have come out of some of this research. I'm sure there are a lot of others, so if you have some, we would love to invite that conversation in. Emerging standards is the last one. I think one way to summarize this is a shift from the problem space to a solution space, or a solution-based approach, if you will. Right when this stuff gets introduced, as we've been discussing, there is sort of a, hey, is this a futuristic thing? And then once it's clear that, nope, this is a today problem, [00:10:50] it's normal to spend a period of time in that thinking, kind of focused on the problem itself. Do we understand the problem, and is it clear?
And I think one of the things put forth in another article that I'll link here is being able to shift that mindset from the problem to what the solutions-based approach is. What can we do to actually have frameworks, to actually have models that we can reference something against, to at least get a degree of understanding of: is this a fake or is this not a fake? [00:11:21] So with that, those are going to be the four areas of what's coming. We want to invite future guests onto our podcast to come in and speak to different perspectives on AI deepfakes specifically, as well as leveraging AI tools and really anything in that space. Thanks so much for listening. If you found some value in this particular episode, as short as it was, we would really appreciate it if you could subscribe to our podcast for future episodes and consider giving us a rating. Also consider referring back to some prior episodes; we had some great guests. And keep an eye out for future episodes where we're going to talk a little bit more about this topic in particular and, again, the legal and AI space. [00:12:00] Also, if you're interested in Guardify as a solution, please don't hesitate to reach out to us. You can email us directly at sales@guardify.com, or you can go to our website, www.guardify.com, and find a pathway there to connect with us. We would love to chat with you. Thanks so much again for listening today. Hope you found some value, and we look forward to seeing you back on the podcast in a future episode.
