Please use this identifier to cite or link to this item: https://rda.sliit.lk/handle/123456789/3951
Full metadata record
DC Field | Value | Language
dc.contributor.author | Williams, E.L | -
dc.contributor.author | Jones, K.O | -
dc.contributor.author | Robinson, J.C | -
dc.contributor.author | Chandler-Crnigoj, S | -
dc.contributor.author | Burrell, H | -
dc.contributor.author | McColl, S | -
dc.date.accessioned | 2025-02-07T07:50:04Z | -
dc.date.available | 2025-02-07T07:50:04Z | -
dc.date.issued | 2025-01 | -
dc.identifier.issn | 2961-5410 | -
dc.identifier.uri | https://rda.sliit.lk/handle/123456789/3951 | -
dc.description.abstract | As life in the digital era becomes more complex, the scope for criminal activity within the digital realm widens. More recently, the development of deepfake media generation powered by Artificial Intelligence has pushed audio and video content into a realm of doubt, misinformation, and misrepresentation. Instances of deepfake media are numerous, with infamous cases ranging from fabricated graphic images of the musician Taylor Swift to the loss of US$25 million transferred after a faked video call. Deepfakes become especially concerning for the general public when such material is submitted as evidence in a court case, particularly a criminal trial, and current methods of authentication against such evidence threats are insufficient. Within audio forensics, there is sufficient ‘individuality’ in a person’s voice to enable comparison for identification. When authenticating audio for deepfake speech, the same comparative approach can be used to identify rogue or inconsistent harmonic and formant patterns within the speech. The presence of deepfake media within illegal activity demands appropriate legal enforcement, and therefore robust detection methods. The work presented in this paper proposes a robust technique for identifying such AI-synthesized speech using a quantifiable method that can be justified within court proceedings. Furthermore, it presents the correlation between the harmonic content of human speech and that of the AI-generated clones derived from it. The paper details the spectrographic audio characteristics that were found to be helpful towards authenticating speech for forensic purposes. The results demonstrate that comparing specific frequency ranges against a known audio sample of a person’s speech can indicate the presence of deepfake media through differing harmonic structures. | en_US
dc.language.iso | en | en_US
dc.publisher | SLIIT, Faculty of Engineering | en_US
dc.relation.ispartofseries | Journal of Advances in Engineering and Technology (JAET); Volume III, Issue I; pp. 61-70 | -
dc.subject | Artificial Intelligence | en_US
dc.subject | Digital Forensics | en_US
dc.subject | Speech Processing | en_US
dc.subject | Speech Analysis | en_US
dc.title | How Frequency and Harmonic Profiling of a ‘Voice’ Can Inform Authentication of Deepfake Audio: An Efficiency Investigation | en_US
dc.type | Article | en_US
dc.identifier.doi | https://doi.org/10.54389/HGBC7543 | en_US
Appears in Collections: Journal of Advances in Engineering and Technology (JAET) Volume III Issue I
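
The abstract above describes comparing the harmonic content of a questioned recording against a known audio sample of a speaker within specific frequency ranges. Below is a minimal Python sketch of that band-comparison idea, not the authors' method: the file names (reference.wav, questioned.wav), the STFT settings, and the band edges are illustrative assumptions, and the paper's actual frequency ranges and decision criteria are not reproduced here.

# Minimal sketch of band-wise harmonic profile comparison for two
# speech recordings. File names, STFT settings, and the band edges
# below are illustrative assumptions, not values from the paper.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

# Illustrative analysis bands (Hz) spanning typical harmonic/formant regions.
BANDS = [(80, 300), (300, 800), (800, 1800), (1800, 3500), (3500, 6000)]

def band_profile(path):
    """Return a normalized vector of mean spectral magnitude per band."""
    sr, x = wavfile.read(path)
    x = x.astype(np.float64)
    if x.ndim > 1:                      # mix down to mono if stereo
        x = x.mean(axis=1)
    f, t, Z = stft(x, fs=sr, nperseg=1024)
    mag = np.abs(Z).mean(axis=1)        # time-averaged magnitude spectrum
    prof = np.array([mag[(f >= lo) & (f < hi)].mean() for lo, hi in BANDS])
    return prof / prof.sum()            # normalize so profiles are comparable

ref = band_profile("reference.wav")     # known genuine sample of the speaker
qst = band_profile("questioned.wav")    # recording under examination

# Pearson correlation of the two band profiles: a markedly low value
# would flag a harmonic structure inconsistent with the known voice.
r = np.corrcoef(ref, qst)[0, 1]
print(f"band-profile correlation: {r:.3f}")

In practice, a single correlation over a time-averaged spectrum is a simplification: a forensic workflow would examine harmonic and formant patterns frame by frame and calibrate any decision threshold against known genuine recordings of the speaker.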
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.