

Story | Research
3 August 2021

A picture is no longer worth 1,000 words, says AI expert at QF


Deepfake technology has become a tool that serves disinformation campaigns and is expected to have serious social and political implications.

Image source: Great Pics - Ben Heine, via Shutterstock

Dr. Husrev Taha Sencar, Principal Scientist at HBKU’s QCRI, sheds light on what deepfakes are and how they can be used to manipulate the truth

In today’s world, where digital media is becoming the primary source of information for many, can we still trust our eyes? Since the first deepfake video was released four years ago, it has become increasingly possible for anyone to be superimposed into a video seen by millions worldwide, without ever having been part of that video at all.


Deepfakes is when facial features and expressions of a person are transferred to another person

Dr. Husrev Taha Sencar, Principal Scientist at HBKU’s QCRI

Deepfake refers to any video in which faces have been either digitally altered or swapped, with the help of Artificial Intelligence (AI).

Dr. Husrev Taha Sencar, Principal Scientist at Qatar Computing Research Institute (QCRI), part of Qatar Foundation’s Hamad Bin Khalifa University (HBKU), explains how exactly deepfake technology works, whether it is detectable, what its societal implications are, and why credible sources matter in the digital sphere.

“Deepfakes is when facial features and expressions of a person are transferred to another person. To do this, facial expressions and facial landmarks of the target person like eyes, nose, lips, etc. in a video are extracted and aligned to a fixed orientation. Then, the aligned face images are cropped and fed to a deepfake generator which merges new face images that imitate facial expressions of the original face,” Dr. Sencar explained.
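The alignment-and-crop step Dr. Sencar describes can be sketched in a few lines of OpenCV. The following is a minimal illustration, assuming the eye coordinates have already been produced by a separate landmark detector; the 0.35 inter-eye spacing and 40% eye height are assumed layout conventions for this sketch, not QCRI's settings:

```python
import cv2
import numpy as np

def align_and_crop_face(frame, left_eye, right_eye, out_size=256):
    """Rotate and scale `frame` so the eyes sit on a horizontal line,
    then crop a fixed-size square patch around the face."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]

    # Angle of the eye line relative to horizontal; rotating by this
    # angle levels the eyes (the "fixed orientation" in the pipeline).
    angle = float(np.degrees(np.arctan2(dy, dx)))

    # Scale so the inter-eye distance fills a fixed fraction of the crop
    # (0.35 is an assumed convention, not a published constant).
    scale = (0.35 * out_size) / float(np.hypot(dx, dy))

    # Rotate and scale about the midpoint between the eyes...
    cx = (left_eye[0] + right_eye[0]) / 2.0
    cy = (left_eye[1] + right_eye[1]) / 2.0
    M = cv2.getRotationMatrix2D((cx, cy), angle, scale)

    # ...then translate that midpoint to a fixed spot in the output
    # (eyes at 40% of crop height, also an assumed convention).
    M[0, 2] += out_size / 2.0 - cx
    M[1, 2] += 0.40 * out_size - cy

    return cv2.warpAffine(frame, M, (out_size, out_size))
```

Crops produced this way, one per frame, are what a deepfake generator consumes and re-synthesizes before the new faces are blended back into the original video.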

While deepfake videos are difficult for the average viewer to detect, any attempt to fabricate an image or video leaves some traces that can be spotted, such as a lack of eye blinking, missing reflections in the eyes, and fuzzy details in the facial areas.

Deepfake technology can be used to weaponize content to serve disinformation campaigns, and as the realism in deepfake videos further improves, it is expected to have serious social, political, financial, and legal implications

Dr. Husrev Taha Sencar

“As they say, there is no perfect crime. Since each face must be spliced back to original pictures while making a fake video, careful observers may notice blending-related errors. And since each frame is merged individually when creating the video, unnatural facial movements in the deepfake video can be observed,” said Dr. Sencar.
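To make one of these traces concrete, here is a toy heuristic in the same spirit; it is an illustrative assumption, not QCRI's detector. Because a spliced face is blended back into each frame, the facial region often ends up smoother than its surroundings, which a simple sharpness comparison can hint at. The face bounding box is assumed to come from any off-the-shelf face detector:

```python
import cv2

def face_blur_ratio(frame, face_box):
    """Compare sharpness inside a face bounding box (x, y, w, h) with
    the whole frame, using variance of the Laplacian as a standard
    sharpness measure. A ratio well below 1.0 is one weak hint that
    the face region may have been smoothed by blending."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    x, y, w, h = face_box
    face = gray[y:y + h, x:x + w]

    face_sharpness = cv2.Laplacian(face, cv2.CV_64F).var()
    frame_sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return face_sharpness / (frame_sharpness + 1e-9)
```

On its own, a single cue like this is easily fooled; practical detectors combine many such signals across frames.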

With the wide availability of deepfake applications such as FaceApp, Reface, and Zao, and the limited control over their use, the technology is moving away from simple entertainment toward more dangerous scenarios, such as damaging the reputations of public figures, spreading fake news, manipulating election campaigns, and creating social distrust.

We need to convey the message that a video or a picture may not necessarily depict a real situation. The adage that a picture is worth a thousand words is really not valid anymore

Dr. Husrev Taha Sencar

“Deepfake technology can be used to weaponize content to serve disinformation campaigns, and as the realism in deepfake videos further improves, it is expected to have serious social, political, financial, and legal implications,” Dr. Sencar said.

So, how can people protect themselves on digital platforms? Dr. Sencar believes there are no magic solutions that will save the day; rather, people need to be better educated about how personally identifiable information can be used in adverse ways.

“We need to convey the message that a video or a picture may not necessarily depict a real situation. The adage that a picture is worth a thousand words is really not valid anymore.”

As it is extremely difficult to control the spread of this technology, AI experts are working on ways to alert users that what they are viewing may not be authentic.


Deepfakes involve transferring the facial expressions and landmarks of a target person, such as the eyes, nose, and lips, from a video and aligning them to a fixed orientation. Image source: meyer_solutions, via Shutterstock

“As a good example, think of the progress made in elimination of spam email. It used to be a big problem, but today we rarely see such e-mails finding their way into our inboxes,” said Dr. Sencar.

As the AI research community continues to develop solutions to combat the negative impacts of this trending technology, a new research direction in multimedia forensics aims to use the same technology to capture the differences between real and fake videos.
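This "fight AI with AI" direction typically means training a neural network to separate real frames from generated ones. Below is a minimal sketch in PyTorch; the architecture is an assumed toy example, and the dummy batch stands in for a labeled real/fake face dataset:

```python
import torch
import torch.nn as nn

class RealFakeClassifier(nn.Module):
    """A tiny CNN that maps a face crop to a single real-vs-fake logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to one 64-dim descriptor
        )
        self.classifier = nn.Linear(64, 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)

# Training minimizes binary cross-entropy on labeled face crops;
# the random tensor here is only a placeholder batch.
model = RealFakeClassifier()
logits = model(torch.randn(4, 3, 256, 256))
loss = nn.BCEWithLogitsLoss()(logits, torch.ones(4, 1))
```

Production forensic models are far larger and are trained on millions of real and synthesized faces, but the principle is the same: the generator's own statistical fingerprints become the features the detector learns.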
