Parsing through synthetic media
I recently spent way too much time waiting in line to view AI-generated videos (known as synthetic media). I am currently in Milan, where the Olympic Games attractions are still up. These attractions essentially entail waiting in line to be let into a small building, where you get some free swag and a small activity. That's all relatively standard for a trade show or event booth, and I wouldn't have had any issue with it had the majority of the videos and content for the activity not been AI-generated.
I don’t know what it is about AI videos that immediately makes me question the validity and value of the content. At first, these videos were fun, quirky, and novel. I still remember seeing the horrible video of Will Smith eating spaghetti. Since then, the quality has improved significantly, and that improvement has coincided with the widespread availability of generation services. Everyone can now easily access these tools and generate videos.
I mention this anecdote to highlight that AI-generated content is no longer confined to the internet. Social media is already overflowing with AI-generated videos and pictures; now we are seeing it leak into the real world.
I get why it has spread: it is cost-effective, it democratizes access to video-generating capabilities, and it is fun to see your ideas come to life within seconds or minutes. Yet the future looks bleak if we assume that:
Content becomes increasingly lifelike and harder to distinguish from reality;
Access continues to expand; and
Agentic models become even more proficient.
Of course, none of these assumptions is guaranteed to hold, but they certainly could, and this is the trajectory we appear to be on. If we reach a world where all three are true, every part of our lives will be flooded with content we cannot tell apart from reality.
Sure, the AI-generated videos that I had to trudge through were annoying, but there are more dire consequences. Synthetic media has enabled and increased:
Misinformation and fake news;
Copyright infringement;
Political interference;
Scams;
Harassment;
Hate speech.
This list only highlights some of the consequences of increasingly realistic and pervasive synthetic media. A form of synthetic media many people have likely heard of is the deepfake: “Deepfakes are images, audio and videos that are digitally altered or generated by AI, to depict real or fictional people or things.”
Some tips to defend yourself
Refine your epistemic agency. In other words, what frameworks do you have in place to assess whether something is real, rather than relying on external verification systems? Here are a couple of examples:
Establish a code word with your family to stop voice-cloning scams; and
Use a “prove you’re live” challenge: ask the person on the other side of the screen to do something specific in real time.
Use the technical solutions available. For example, there are services that filter spam texts, and others that help you avoid clicking on scam sites or unsafe links (The Canadian Shield by CIRA is a great free option).
These tips will help you defend yourself, but they will not eradicate the problem. We will likely never eliminate it entirely, and any meaningful progress will require everyone to get on board. If you are interested in pushing people to do more, you can use your:
Attention. This is one of the most valuable resources at your disposal.
Platform. Talk to people, post on social media, or do whatever else you can think of to warn others and arm them with the knowledge they need.
Dollars. Spend them on the companies and tools that take this problem seriously.
In the meantime, I hope the tips will help you defend yourself. Also, other technical solutions are being developed, such as a watermark (e.g., a small logo at the bottom of your photo to prove it's from you), but these are not widespread yet.
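To make the watermark idea a bit more concrete: a visible logo requires an image-editing library, but the underlying principle (attaching a verifiable mark to your media so others can confirm it came from you) can be sketched with nothing but Python's standard library. This is purely my own toy illustration, not an existing product; the key and function names are invented for the example.

```python
# Toy sketch of content provenance: instead of a visible logo, we attach an
# HMAC "tag" computed over the media's raw bytes. Anyone holding the key can
# later check that the bytes were not altered. All names here are illustrative.
import hashlib
import hmac

SECRET_KEY = b"replace-with-your-private-key"  # hypothetical creator key

def sign_media(data: bytes) -> str:
    """Produce a provenance tag for the given media bytes."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify_media(data: bytes, tag: str) -> bool:
    """Check that the tag matches the media, i.e. it hasn't been altered."""
    return hmac.compare_digest(sign_media(data), tag)

if __name__ == "__main__":
    photo = b"...raw image bytes..."  # stand-in for a real file's contents
    tag = sign_media(photo)
    print(verify_media(photo, tag))         # True: untouched
    print(verify_media(photo + b"x", tag))  # False: tampered
```

Real provenance systems (such as the Content Credentials effort) are far more elaborate, but the core promise is the same: a mark that travels with the media and can be checked.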
Speaking of which, would you consider adding a watermark to everything you generate online?
Take care,
Emanuel