Intel Unveils 96% Accurate Deepfake Detection Tool

A deepfake of Tom Cruise created by visual effects specialist Chris Ume has gone viral on social media.

Computer giant Intel this week presented FakeCatcher, a highly effective tool for quickly uncovering deepfakes. Its secret weapon: analyzing people's blood flow in video pixels.

Intel's detector recognizes manipulated images, video, and audio in milliseconds with 96% accuracy, a first, according to a company blog post.

A deepfake is an altered image, video, or audio sequence in which artificial intelligence (AI) is used, for example, to replace the face of a political figure with someone else's and make them say things they never said.

According to Intel, most such tools examine raw image data for signs of inauthenticity. They also often require videos to be uploaded for analysis, with results taking hours to come back, the company notes.

FakeCatcher instead looks for clues of authenticity in what makes us human: subtle signs of blood flow in the pixels of a video.

"When our heart pumps blood, our veins change color. These blood-flow signals are collected from across the face, and algorithms translate them into spatiotemporal maps. Then, thanks to deep learning, we can instantly detect whether a video is real or fake," the Intel blog post explains.
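The post only describes the idea at a high level: blood-flow signals are gathered across the face, turned into spatiotemporal maps, and classified with deep learning. As a rough illustration of that general approach, and not Intel's actual pipeline, the Python sketch below builds a toy spatiotemporal map from already-cropped face frames; the function name, the 8x8 grid, and the use of the green channel are illustrative assumptions.

```python
# Illustrative sketch only: a toy photoplethysmography (PPG) style map,
# not Intel's FakeCatcher implementation. Assumes face frames are already
# cropped and aligned; grid size and channel choice are arbitrary.
import numpy as np

def spatiotemporal_ppg_map(face_frames: np.ndarray, grid: int = 8) -> np.ndarray:
    """Build a crude spatiotemporal map from cropped face frames.

    face_frames: array of shape (T, H, W, 3) with RGB values in [0, 255].
    Returns an array of shape (grid * grid, T): one mean green-channel
    signal per spatial cell, stacked over time.
    """
    T, H, W, _ = face_frames.shape
    cell_h, cell_w = H // grid, W // grid
    signals = np.empty((grid * grid, T), dtype=np.float64)
    for i in range(grid):
        for j in range(grid):
            # Average the green channel inside each spatial cell, per frame.
            cell = face_frames[:, i * cell_h:(i + 1) * cell_h,
                               j * cell_w:(j + 1) * cell_w, 1]
            signals[i * grid + j] = cell.mean(axis=(1, 2))
    # Remove each cell's mean so only the pulse-like variation remains.
    signals -= signals.mean(axis=1, keepdims=True)
    return signals

# Usage with synthetic data: 90 frames (about 3 s at 30 fps) of a 64x64 crop.
frames = np.random.randint(0, 256, size=(90, 64, 64, 3)).astype(np.float64)
st_map = spatiotemporal_ppg_map(frames)
print(st_map.shape)  # (64, 90) -- a map like this would feed a trained classifier
```

In a real system, a trained deep-learning model would take maps like this as input and output a real-or-fake verdict; the sketch stops at the map-building step.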

Tampered content is a growing threat, according to the company.

Deception through deepfakes can cause harm and lead to negative consequences, such as reduced trust in the media, Intel believes.

FakeCatcher, designed in partnership with the State University of New York at Binghamton, could thus be useful to media outlets by allowing them to authenticate content circulating online. Social networks could also use it to flush out harmful deepfakes before they spread on their platforms.
