Varaprasath, S.
Issued: 2026-01
Accessioned: 2026-02-10
URI: https://rda.sliit.lk/handle/123456789/4581

Abstract:
This research presents the design and implementation of an AI-based music visualization system that enables Deaf and Hard-of-Hearing (DHH) individuals to experience music through dynamic visual and haptic feedback. The system integrates both classical machine learning and deep learning models to analyze key audio features, such as tempo, pitch, and spectral energy, as well as lyrical sentiment, translating them into synchronized visual elements such as color, motion, and shape. The visualization framework employs a combination of CNN–LSTM architectures for emotion detection and SVM-based genre classification, ensuring accurate mapping between musical attributes and emotional states. A Flask-based web interface was developed to deliver real-time, adaptive visualizations generated from user-uploaded songs. The study further proposes structured design guidelines and best practices for inclusive audiovisual systems, derived from cross-disciplinary literature in affective computing, accessibility design, and cognitive psychology. Experimental evaluation demonstrates that the system effectively translates musical emotion into coherent and perceptually comfortable visual patterns, achieving high usability and accessibility for DHH users. Beyond accessibility, the research contributes a conceptual framework for emotion-aware, AI-driven multimedia systems, with applications in interactive art, music education, and assistive technology. The findings highlight the potential of artificial intelligence to bridge sensory gaps and make musical experiences universally inclusive.

Language: en
Keywords: Representing Music Visually; Hearing-Impaired; using AI
Title: Representing Music Visually for the Hearing-Impaired using AI
Type: Thesis
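The abstract describes mapping detected emotional states onto visual attributes such as color and motion. As a minimal illustrative sketch (hypothetical code, not taken from the thesis), such a mapping might take a valence/arousal estimate from the emotion model and produce a display color and animation speed, with warm hues for positive valence and arousal driving brightness and motion:

```python
import colorsys

def emotion_to_visual(valence, arousal):
    """Hypothetical mapping from an emotion estimate to visual parameters.

    valence: float in [-1, 1], negative = sad, positive = happy
    arousal: float in [0, 1], low = calm, high = energetic
    Returns an (r, g, b) color (each in [0, 1]) and a motion-speed factor.
    """
    # Warm orange hue for positive valence, cool blue for negative.
    hue = 0.08 if valence >= 0 else 0.60
    # Stronger emotion -> more saturated color.
    saturation = min(1.0, abs(valence))
    # Higher arousal -> brighter color.
    value = 0.4 + 0.6 * arousal
    r, g, b = colorsys.hsv_to_rgb(hue, saturation, value)
    # Higher arousal -> faster on-screen motion.
    speed = 0.5 + 1.5 * arousal
    return (round(r, 2), round(g, 2), round(b, 2)), speed
```

For example, a happy, energetic passage (`valence=0.9, arousal=0.8`) yields a bright warm color and fast motion, while a calm, sad passage (`valence=-0.7, arousal=0.2`) yields a dim blue and slow motion. The actual thesis system derives such states from CNN–LSTM outputs; this sketch only illustrates the final attribute-mapping step.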