Browsing by Author "Rajakaruna, T"

Now showing 1 - 2 of 2
    Feature Descriptor for Sri Lankan Batik Patterns Using Hu Moment Invariants and GLCM
    (IEEE, 2021-08-11) Senarathna, B. P. H. K. M. D; Rajakaruna, T
    Batik is a traditional craft of designing patterned fabrics that holds high artistic value in Sri Lankan culture, where hand-painted wax patterns are coloured using specialist dyeing methods to create the finished product. This paper presents a study of vision-based feature extraction from Batik images, considering colour, texture and shape features to develop a comprehensive feature descriptor of Batik motifs. Wax-drawn patterns are identified from digital images of Batik motifs to retrieve an outline of patterns demarcating the differently coloured layers generated by multiple stages of dyeing. Motifs with repetitive patterns are identified using the Local Binary Pattern (LBP) as a texture feature vector. Both the RGB and L*a*b* colour schemes are studied for the representation of Batik motifs. The colour description is produced using Mini Batch K-Means clustering, which outperformed the widely used K-Means method. Hu Moment Invariants are used for shape feature extraction, and the Gray Level Co-occurrence Matrix (GLCM) for texture feature extraction. A comprehensive feature descriptor is developed to represent Batik designs, which could be used to recommend similar designs based on the shape and texture features of a user's query image.
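The shape features in this abstract are Hu's moment invariants. As an illustration of the technique only (not the authors' implementation), here is a minimal pure-Python sketch of the first four invariants, computed from normalized central moments; the invariants are unchanged when a pattern is translated within the image:

```python
# Minimal sketch of Hu moment invariants for a grayscale image given as a
# 2-D list of intensities. Function names are illustrative, not from the paper.

def raw_moment(img, p, q):
    """Raw image moment m_pq = sum over pixels of x^p * y^q * intensity."""
    return sum(x**p * y**q * img[y][x]
               for y in range(len(img)) for x in range(len(img[0])))

def hu_moments(img):
    """Return the first four of Hu's seven moment invariants."""
    m00 = raw_moment(img, 0, 0)
    xbar = raw_moment(img, 1, 0) / m00  # intensity centroid
    ybar = raw_moment(img, 0, 1) / m00

    def mu(p, q):
        """Central moment: shifting the pattern leaves this unchanged."""
        return sum((x - xbar)**p * (y - ybar)**q * img[y][x]
                   for y in range(len(img)) for x in range(len(img[0])))

    def eta(p, q):
        """Normalized central moment: also invariant to uniform scaling."""
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02)**2 + 4 * n11**2
    h3 = (n30 - 3*n12)**2 + (3*n21 - n03)**2
    h4 = (n30 + n12)**2 + (n21 + n03)**2
    return [h1, h2, h3, h4]
```

In practice these invariants are usually obtained from a library routine (e.g. OpenCV's `cv2.HuMoments`); the sketch above only makes the translation invariance explicit.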
    Sinhala Sign Language Interpreter Optimized for Real-Time Implementation on a Mobile Device
    (2021-08-11) Dhanawansa, V; Rajakaruna, T
    This paper proposes a framework for a vision-based Sinhala Sign Language interpreter targeted for implementation on a portable device and optimized for real-time use. The translator is aimed at enabling conversation between a hearing-impaired individual and a non-signing individual. The scope covers both static and dynamic signs portrayed using the right hand. Skin segmentation and contour extraction, followed by a combination of hand detection and tracking algorithms, isolate the signing hand against varied background conditions. A Convolutional Neural Network model was developed to extract and classify the features of the chosen static signs, and a standard, expandable dataset of Sinhala static signs was prepared for this task. Dynamic signs were modelled as a tree data structure over sequences of static signs. The model was optimized using motion-based temporal segmentation between consecutive signs to minimize the processing overhead. The interpreter recorded average accuracies of 99.5% and 81.2% on the static sign dataset and the combined dataset of static and dynamic signs, respectively. A response time of 333 ms between the occurrence and prediction of a sign demonstrates the effectiveness of the framework for real-time use.
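The abstract models each dynamic sign as a path of static signs in a tree. A minimal sketch of such a structure, written here as a trie keyed on static-sign labels (class and label names are illustrative assumptions, not the authors' code):

```python
class SignTrie:
    """Tree of static-sign labels; a dynamic sign is a root-to-node path."""

    def __init__(self):
        self.children = {}  # static-sign label -> SignTrie subtree
        self.word = None    # translation stored where a dynamic sign ends

    def insert(self, labels, word):
        """Register a dynamic sign given its sequence of static signs."""
        node = self
        for label in labels:
            node = node.children.setdefault(label, SignTrie())
        node.word = word

    def lookup(self, labels):
        """Return the translation for a recognised sequence, or None."""
        node = self
        for label in labels:
            node = node.children.get(label)
            if node is None:
                return None
        return node.word

# Hypothetical usage: "A", "B", "C" stand in for classified static signs.
signs = SignTrie()
signs.insert(["A", "B"], "hello")
signs.insert(["A", "C"], "thanks")
```

Walking the trie one classified static sign at a time means the interpreter can reject an impossible sequence as soon as a label has no child, which suits the real-time constraint described in the abstract.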
