Deep Fake: The Ingenuity and the Implications


Under the theme of “The truth is out there, but can you find it?”, there is perhaps nothing more frighteningly ingenious than the rise of ‘Deep Fake’.

‘Deep Fake’: “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.”

See: How Machine Learning Drives the Deceptive World of Deepfakes

Aside from malicious purposes like spreading misinformation, stealing identities and pushing propaganda, in the context of science communication ‘Deep Fake’ technology opens the potential for undermining important public health science initiatives and misrepresenting scientific research findings, especially those that rely on significant levels of public/political support.

It could also pose a new threat of image fabrication in scientific publications.

One of the most famous examples of Deep Fake involved former President Nixon delivering a speech on an Apollo 11 mission failure. While a speech was written in the event the mission did fail, it was never needed, nor was it ever recorded. The deepfake video was produced at MIT for educational purposes.

And there have been many other examples of Deep Fakes involving past and present politicians.

Check it out here, along with some interesting background as to why they made it and further exploration of the deepfake phenomenon.

Former President Nixon delivering speech on Apollo 11 mission failure 

So how do you spot a ‘Deep Fake’ image or video? Badly made Deep Fake videos can be fairly easy to pick, but identifying higher-quality Deep Fakes is not so easy, and advances in technology are making it much more difficult.

Classic tips are:

  • Eye movements that don’t look natural, especially an absence of blinking.
  • Facial features that look a bit off – a lack of facial movement that mirrors the emotions of what is being said (“facial morphing” that is off).
  • Awkward-looking body, posture or movement.
  • Blurring or misalignment around the edges of the face or body.
  • Strange skin tone, discolouration, weird lighting and misplaced shadows. Hair and teeth that look too ‘perfect’ – no flyaway hairs or outlines of individual teeth.
  • Poor audio – bad lip-syncing, metallic-sounding voices, weird word pronunciation.
  • Zoom in and/or slow down the video on a large monitor. Focus on the lips to check for poor lip-syncing.
  • Reverse image searches – grab an individual frame from the video and do a reverse image search on Google (see earlier blog). This can help find similar videos online and determine whether an image, audio clip or video has been altered in any way.
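The reverse-image-search tip above works because images can be compared by content “fingerprints” rather than exact bytes, so a re-encoded or slightly edited frame still matches the original. A minimal sketch of one such technique, average (perceptual) hashing, is shown below – note this is purely illustrative: the tiny pixel grids stand in for real video frames, and actual search engines use far more sophisticated matching.

```python
def average_hash(pixels):
    """Simple perceptual hash: one bit per pixel, set when the
    pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes -
    small distance means 'probably the same image'."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy 4x4 greyscale "frames" (invented data): frame_b is frame_a
# with slight noise, frame_c is a different image entirely.
frame_a = [[10, 200, 10, 200]] * 4
frame_b = [[12, 198, 11, 205]] * 4
frame_c = [[200, 10, 200, 10]] * 4

ha, hb, hc = map(average_hash, (frame_a, frame_b, frame_c))
print(hamming(ha, hb))  # near-duplicate frames hash identically here
print(hamming(ha, hc))  # unrelated frames differ in many bits
```

Because small brightness changes rarely flip a pixel across the image mean, the noisy copy produces the same hash while the unrelated frame differs everywhere, which is the property a reverse image search exploits to match an extracted frame against videos already online.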

See also the following online tools:

If you want to test your own ability to detect Deep Fake images  check out:
Detect Fakes – Northwestern University Kellogg School of Management Project

And their research paper:
Deepfake detection by human crowds, machines, and machine-informed crowds

However, researchers believe that in the very near future Deep Fakes will be so advanced that the telltale signs mentioned above will be overcome, and that critical thinking skills like those we have explored in previous blog posts will become of paramount importance – asking ourselves questions like:

  • Is what happened in the video believable, especially in the context of the person portrayed?
  • Is what he/she is saying in line with what they have said before?
  • How reliable is the source where this video was published?
  • Has any major newspaper/reliable news source also mentioned the video?

There are of course positive applications of recent advances in Deep Fake technology:

  • Recasting movies using other actors, or younger versions of the same actor.
  • Education – bringing history to life in new ways, e.g. The Dalí Museum in St Petersburg, Florida, has used a controversial artificial intelligence technique to “bring the master of surrealism back to life”.
  • Educating people in a more interactive way by automatically generating lecture videos from text-based content or audio narration.
  • Health and disabilities – helping patients who have lost motor, speech or visual abilities through neurodegenerative disease or injury to communicate better.
  • Media and communications – better voice-transfer and lip-syncing algorithms could give media correspondents the ability to translate and dub recorded messages into foreign languages more quickly, bringing important messages to an international audience.

One thing is for sure: Deep Fake is here to stay.

So keep those critical thinking skills honed and apply them regularly while surfing the web and social media.

 

This entry was posted in For Teachers, General, Science Communication by STEPHEN BRONI. Bookmark the permalink.
