
Medical Market Report


Deepfake Audio Has A Tell – Researchers Use Fluid Dynamics To Spot Artificial Imposter Voices

September 23, 2022 by Deborah Bloomfield

Imagine the following scenario. A phone rings. An office worker answers it and hears his boss, in a panic, tell him that she forgot to transfer money to the new contractor before she left for the day and needs him to do it. She gives him the wire transfer information, and with the money transferred, the crisis has been averted.

The worker sits back in his chair, takes a deep breath, and watches as his boss walks in the door. The voice on the other end of the call was not his boss. In fact, it wasn’t even a human. The voice he heard was that of an audio deepfake, a machine-generated audio sample designed to sound exactly like his boss.


Attacks like this using recorded audio have already occurred, and conversational audio deepfakes might not be far off.

Deepfakes, both audio and video, have been possible only with the development of sophisticated machine learning technologies in recent years. Deepfakes have brought with them a new level of uncertainty around digital media. To detect deepfakes, many researchers have turned to analyzing visual artifacts – minute glitches and inconsistencies – found in video deepfakes.

This is not Morgan Freeman, but if you weren’t told that, how would you know?

Audio deepfakes potentially pose an even greater threat, because people often communicate verbally without video – for example, via phone calls, radio and voice recordings. These voice-only communications greatly expand the possibilities for attackers to use deepfakes.


To detect audio deepfakes, we and our research colleagues at the University of Florida have developed a technique that measures the acoustic and fluid dynamic differences between voice samples created organically by human speakers and those generated synthetically by computers.

Organic vs. synthetic voices

Humans vocalize by forcing air over the various structures of the vocal tract, including vocal folds, tongue and lips. By rearranging these structures, you alter the acoustical properties of your vocal tract, allowing you to create over 200 distinct sounds, or phonemes. However, human anatomy fundamentally limits the acoustic behavior of these different phonemes, resulting in a relatively small range of correct sounds for each.
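As a rough illustration (not taken from the researchers' work), the classic source-filter picture treats the vocal tract as a uniform tube, closed at the glottis and open at the lips. Such a tube resonates at odd quarter-wavelength frequencies, F_n = (2n − 1)·c / (4L), which for a typical 17 cm adult tract lands the first three resonances (formants) near where real vowel formants sit. The tube length and speed of sound below are textbook assumptions:

```python
def tube_formants(length_m=0.17, c=350.0, n=3):
    """Resonances (Hz) of a uniform tube closed at one end (glottis)
    and open at the other (lips): F_n = (2n - 1) * c / (4 * L)."""
    return [(2 * i - 1) * c / (4.0 * length_m) for i in range(1, n + 1)]

formants = tube_formants()
print(formants)  # roughly 515, 1544, and 2574 Hz for a 17 cm tract
```

Rearranging the tongue and lips effectively reshapes this tube, shifting the resonances, which is what distinguishes one phoneme from another.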

How your vocal organs work.

In contrast, audio deepfakes are created by first allowing a computer to listen to audio recordings of a targeted victim speaker. Depending on the exact techniques used, the computer might need to listen to as little as 10 to 20 seconds of audio. This audio is used to extract key information about the unique aspects of the victim’s voice.


The attacker selects a phrase for the deepfake to speak and then, using a modified text-to-speech algorithm, generates an audio sample that sounds like the victim saying the selected phrase. This process of creating a single deepfaked audio sample can be accomplished in a matter of seconds, potentially allowing attackers enough flexibility to use the deepfake voice in a conversation.

Detecting audio deepfakes

The first step in differentiating speech produced by humans from speech generated by deepfakes is understanding how to acoustically model the vocal tract. Luckily, scientists have techniques to estimate what someone – or some being, such as a dinosaur – would sound like based on anatomical measurements of its vocal tract.

We did the reverse. By inverting many of these same techniques, we were able to extract an approximation of a speaker’s vocal tract during a segment of speech. This allowed us to effectively peer into the anatomy of the speaker who created the audio sample.
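One standard way to perform this kind of acoustic-to-anatomy inversion is linear predictive coding (LPC): the Levinson-Durbin recursion turns a speech frame into reflection coefficients of a lossless concatenated-tube model, and those coefficients map to the relative cross-sectional areas of the tube sections (Wakita's method). The sketch below is a minimal illustration of that general technique, not the authors' exact pipeline; the model order and the synthetic "vowel" signal are invented for the example:

```python
import numpy as np

def lpc_reflection(x, order):
    """Levinson-Durbin recursion: turns a speech frame's autocorrelation
    into reflection coefficients of a concatenated-tube model."""
    r = np.array([x[:x.size - i] @ x[i:] for i in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    ks = []
    for i in range(1, order + 1):
        acc = r[i] + a[1:i] @ r[i - 1:0:-1]
        k = -acc / err
        ks.append(k)
        prev = a.copy()
        for j in range(1, i):
            a[j] = prev[j] + k * prev[i - j]
        a[i] = k
        err *= 1.0 - k * k
    return np.array(ks)

def area_function(ks, lips_area=1.0):
    """Wakita-style inversion: each reflection coefficient gives the
    area ratio between adjacent tube sections, lips toward glottis."""
    areas = [lips_area]
    for k in ks:
        areas.append(areas[-1] * (1.0 - k) / (1.0 + k))
    return np.array(areas)

# Synthetic "vowel": white noise excitation through a fixed two-pole resonator.
rng = np.random.default_rng(0)
e = rng.standard_normal(4000)
x = np.zeros_like(e)
for n in range(2, e.size):
    x[n] = e[n] + 1.2 * x[n - 1] - 0.9 * x[n - 2]

ks = lpc_reflection(x, order=8)
areas = area_function(ks)
```

The recovered `areas` are relative, not absolute, measurements, but their shape is enough to ask the key question: does this profile look like a plausible human vocal tract?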

Deepfaked audio often results in vocal tract reconstructions that resemble drinking straws rather than biological vocal tracts. Image credit: Logan Blue et al., CC BY-ND

From here, we hypothesized that deepfake audio samples would fail to be constrained by the same anatomical limitations humans have. In other words, we expected that analyzing deepfaked audio samples would yield estimated vocal tract shapes that do not exist in people.

Our testing results not only confirmed our hypothesis but revealed something interesting. When extracting vocal tract estimations from deepfake audio, we found that the estimations were often comically incorrect. For instance, it was common for deepfake audio to result in vocal tracts with the same relative diameter and consistency as a drinking straw, in contrast to human vocal tracts, which are much wider and more variable in shape.
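A toy version of this "drinking straw" tell: a straw-like tract has nearly uniform cross-sectional area along its length, while a human tract varies widely from glottis to lips, so even a simple spread statistic separates the two profiles. The numbers and the coefficient-of-variation score below are invented for illustration and are not the researchers' actual detector:

```python
import numpy as np

def straw_score(areas):
    """Coefficient of variation of an estimated area function.
    Near zero => uniform, straw-like tube; larger => human-like variation."""
    areas = np.asarray(areas, dtype=float)
    return areas.std() / areas.mean()

human_like = np.array([2.5, 1.0, 0.6, 1.8, 4.0, 5.5, 3.0])       # varied profile
straw_like = np.array([0.45, 0.5, 0.48, 0.52, 0.5, 0.47, 0.5])   # nearly uniform

print(straw_score(human_like) > straw_score(straw_like))  # True
```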

This realization demonstrates that deepfake audio, even when convincing to human listeners, is far from indistinguishable from human-generated speech. By estimating the anatomy responsible for creating the observed speech, it's possible to identify whether the audio was generated by a person or a computer.

Why this matters

Today’s world is defined by the digital exchange of media and information. Everything from news to entertainment to conversations with loved ones typically happens via digital exchanges. Even in their infancy, deepfake video and audio undermine the confidence people have in these exchanges, effectively limiting their usefulness.

If the digital world is to remain a critical resource for information in people's lives, effective and secure techniques for determining the source of an audio sample are crucial.

Logan Blue, PhD student in Computer & Information Science & Engineering, University of Florida and Patrick Traynor, Professor of Computer and Information Science and Engineering, University of Florida


This article is republished from The Conversation under a Creative Commons license. Read the original article.


