Please use this identifier to cite or link to this item: https://rda.sliit.lk/handle/123456789/3792
Full metadata record
DC Field | Value | Language
dc.contributor.author | Rodgers, J | -
dc.contributor.author | Jones, K.O | -
dc.contributor.author | Robinson, C | -
dc.contributor.author | Chandler-Crnigoj, S | -
dc.contributor.author | Burrell, H | -
dc.contributor.author | McColl, S | -
dc.date.accessioned | 2024-10-21T07:36:19Z | -
dc.date.available | 2024-10-21T07:36:19Z | -
dc.date.issued | 2024-10 | -
dc.identifier.issn | 2961-5011 | -
dc.identifier.uri | https://rda.sliit.lk/handle/123456789/3792 | -
dc.description.abstract | Deepfake technology has advanced considerably in recent years, and the world has already seen cases where it has been used maliciously. After a deepfake of UK independent financial advisor and poverty champion Martin Lewis was released on social media, a theory was proposed in which the deepfake target is accompanied by additional media to increase the authenticity of the file, for instance ambient noise, or processing to match how the deepfake would sound if it were recorded on a specific device such as a cellular/mobile phone. Focussing on deepfake audio, a critical listening experiment was conducted in which participants were asked to identify the deepfake audio file within each of several sets of three files. The audio files were created in three ways: real voices with additional sounds added; volunteers' voice recordings passed through a deepfake generation system; and voices taken from publicly available podcasts, also processed by the deepfake software. The latter set mimics the use of web-accessible voice recordings of prominent or famous people, such as the Prime Minister of the UK. The results show that participants successfully detected one third of the deepfake audio files presented; however, they also incorrectly marked another one third of the real files as deepfakes, while the remaining third were missed. The results also gave no definitive confirmation that audio and/or forensic professionals had any greater ability to detect deepfake audio files than other participants. The false-positive result may also reinforce the scepticism and lack of trust created by what is known as the "Liar's Dividend". The paper details how the files were created, the testing methodology, and the experimental results. Furthermore, a discussion of future research directions and the effects that deepfakes may have on the criminal justice system is presented. | en_US
dc.language.iso | en | en_US
dc.publisher | SLIIT, Faculty of Engineering | en_US
dc.relation.ispartofseries | SICET 2024;239-248p. | -
dc.subject | Artificial Intelligence | en_US
dc.subject | Digital Forensics | en_US
dc.subject | Synthetic Media | en_US
dc.subject | Deepfake | en_US
dc.subject | Media Forensics | en_US
dc.subject | Audio Forensics | en_US
dc.title | Evaluating the Threshold of Authenticity in Deepfake Audio and Its Implications Within Criminal Justice | en_US
dc.type | Article | en_US
dc.identifier.doi | https://doi.org/10.54389/JIKU1539 | en_US
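
Note: the abstract describes augmenting deepfake voices with ambient noise and device-style processing to increase perceived authenticity. The paper's actual pipeline is in the embargoed PDF; the following is only a minimal Python sketch of that general idea. The file names, the 20 dB SNR default, and the 300-3400 Hz telephone passband are assumptions for illustration, not details from the paper.

# Illustrative sketch only (not the authors' pipeline): mix ambient noise into
# a synthesised voice and band-limit it to suggest a mobile-phone recording.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

def mix_and_bandlimit(voice_path, noise_path, out_path, snr_db=20.0):
    rate, voice = wavfile.read(voice_path)      # deepfake voice (mono assumed)
    _, noise = wavfile.read(noise_path)         # ambient room/street noise
    voice = voice.astype(np.float64)
    noise = np.resize(noise.astype(np.float64), voice.shape)  # loop/trim to length

    # Scale the noise so the mix sits at the requested signal-to-noise ratio.
    gain = np.sqrt(np.mean(voice ** 2) / (np.mean(noise ** 2) * 10 ** (snr_db / 10)))
    mix = voice + gain * noise

    # Telephone-style band-pass (roughly 300-3400 Hz) to mimic a phone capture.
    sos = butter(4, [300, 3400], btype="bandpass", fs=rate, output="sos")
    mix = sosfilt(sos, mix)

    mix = mix / np.max(np.abs(mix))             # normalise to avoid clipping
    wavfile.write(out_path, rate, (mix * 32767).astype(np.int16))

# Example use with hypothetical file names:
mix_and_bandlimit("voice.wav", "noise.wav", "voice_phone.wav")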
Appears in Collections:Proceedings of the SLIIT International Conference on Engineering and Technology, 2024

Files in This Item:
File | Description | Size | Format
25.Evaluating the Threshold of Authenticity in Deepfake Audio and Its implications.pdf | Restricted until 2050-12-31 | 360.48 kB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.