
I just returned from a national conference where I was shown an AI application that will make it hard to trust what you see. Imagine watching a video of a man speaking to you about his business. Nothing too shocking about that… except that it isn’t really a video of him. In the demo I saw, he never spoke those words on camera; the primary source material was an audio file.

At first, I got excited! This opens up tremendous possibilities for communicating your message without investing time and money in video shoots. Then I thought about the downside: someone could create a video of you saying things you never said in order to damage your reputation. The same is true for political candidates and government officials.

Our brains naturally want to believe what we see and hear. Now AI platforms, including deepfake technology, are blurring the line between reality and illusion. So how can we avoid the pitfalls of believing our “lying eyes”? Cultivating a culture of critical thinking will be important. Encouraging people to question the authenticity of the content they see, especially online, could help defend against misinformation. Even so, the power of a visual impression will likely sway people’s opinions. As they say in the courtroom, “You can’t unring the bell.”

What are your thoughts on AI that creates videos by sampling a person’s voice and image? How do you think it could be controlled to protect individuals from fraud and identity theft?