Author: Jayasingha, H M C P
Date: 2026-02-08; 2025-12
URI: https://rda.sliit.lk/handle/123456789/4557

Abstract: In agile software development, system and software functions are often discussed and informally transcribed during Agile meetings, which leads to gaps and errors in documentation. Likewise, traditional approaches that rely on voice recordings depend heavily on automated speech recognition systems to document conversations, leaving the results riddled with errors and inconsistencies. This paper offers an automated pipeline for the transcription and analysis of Agile voice conversations in which requirements are gathered. The voice conversations are transcribed using OpenAI's Whisper model, while formalized user stories are extracted through large language models (LLMs). We trained and evaluated various LLMs, ranging from T5 and BART to DeepSeek, on real and synthetic datasets for user story generation. Evaluation metrics included BLEU, ROUGE, and F1 for the generated user stories, and word error rate (WER) for transcription accuracy. Results demonstrate that the fine-tuned DeepSeek model outperformed the others in contextual accuracy, requirement completeness, and consistency. This research automates these processes, enhancing effective Agile documentation and minimizing manual effort.

Language: en
Keywords: Automating Voice-based; Voice-based Conversations; Formal User Stories; NLP; Speech Recognition
Title: Automating Voice-based Conversations into Formal User Stories using NLP and Speech Recognition
Type: Thesis
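The abstract names word error rate (WER) as the transcription metric. As a minimal illustrative sketch (not the thesis's own implementation), WER is the word-level Levenshtein edit distance between a reference transcript and a hypothesis, divided by the number of reference words:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution ("log" -> "login") and one deletion ("in") over
# an 8-word reference gives a WER of 2/8.
print(wer("as a user i want to log in", "as a user i want to login"))  # → 0.25
```

In practice, libraries such as jiwer provide this computation, but the edit-distance formulation above is the standard definition the metric reduces to.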