I first took notice of a deepfake about three years ago, when an adult video featuring the superimposed image of an Indian journalist began to circulate. She had been the subject of online harassment for some time, and the video was the lurid culmination of those efforts.
It was a crudely made video, but its impact was not. It spread on social media and ended up on a phone the journalist's father had access to. The severe mental trauma of the incident led the journalist to seek medical treatment.
Deepfakes, videos in which a person's image is superimposed onto existing footage using artificial intelligence, are here to stay. And their effects can be spectacular.
Last week Bill Posters, who curates Spectre, released two fascinating deepfakes using video of Mark Zuckerberg. In one, the Facebook founder says, "I wish I could keep telling you that our mission in life is connecting people, but it isn't. We just want to predict your future behaviours."
In another, reality star Kim Kardashian deadpans, "When there are so many haters I really don't care, because their data has made me rich beyond my wildest dreams."
Both videos look ordinary and real; only an ultra-attentive viewer will detect the small discrepancies in voice and image. Neither video is genuine; both were made using AI. Posters says his works are a cautionary tale of what AI and data mining can do: "Spectre is an immersive installation that tells a cautionary tale of technology and democracy, curated by algorithms and powered by visitors' personal data."
Scary, but this is what the future holds. Just imagine what this technology could do in a charged but connected environment, such as Sri Lanka after 21/04 or in the run-up to elections. We need to be worried, very worried, but prepared as well.
Both these videos, and others of a similar nature, have nevertheless remained on social media platforms, primarily because Posters clearly stated that they were art projects. What they demonstrated, however, was how far AI capabilities have ratcheted up.
No fake video featuring Zuckerberg or Hollywood actor Morgan Freeman would remain on social media for long. The situation changes, however, if such technology were used to create content featuring obscure figures, in a language like Sinhala, with material that is racially or ethnically charged, derogatory or abusive. Detection and takedown of such content would not be nearly as fast.
In an environment like Sri Lanka, where misinformation is already used to create a deafening cacophony, this could be even more sinister. The saga of the alleged mass sterilisations in Kurunegala is a case in point: while nothing has been proven, trial by media has already passed judgment. Imagine what could happen if the AI technology behind deepfakes were to fall into the hands of such miscreants. It is a frightening thought. Fortunately, I don't think an editorial desk that passes copy on bricks on Mars is any closer to AI technology than you and I are to collecting bricks from Mars.
The worry is guns for hire. There is nothing preventing this technology from being offered as part and parcel of commercial packages, which in turn would make it accessible to those without the technical acumen.
In Sri Lanka there is research evidence suggesting the use of Twitter farms to boost profiles. We also know that politicians and other public figures have sought and received advice on the use of social media from PR firms. The use of AI, and certainly of deepfakes, is not yet on the horizon, but it is not unfathomable.
So the next time you see that crazy video on a WhatsApp group, take a second look before you pass it on.
The author is the Asia-Pacific Coordinator for the DART Centre for Journalism and Trauma, a project of the Columbia Journalism School
Twitter - @amanthap