Research Publications Authored by SLIIT Staff

Permanent URI for this community: https://rda.sliit.lk/handle/123456789/4195

This collection includes all SLIIT staff publications presented at external conferences and published in external journals. The materials are organized by faculty to facilitate easy retrieval.

Search Results

Now showing 1 - 3 of 3
  • Publication (Open Access)
    A novel application with explainable machine learning (SHAP and LIME) to predict soil N, P, and K nutrient content in cabbage cultivation
    (Elsevier B.V., 2025-03-06) Abekoon, T; Sajindra, H; Rathnayake, N; Ekanayake, I. U; Jayakody, A; Rathnayake, U
    Cabbage (Brassica oleracea var. capitata) is commonly cultivated at high altitudes and features dense, tightly packed leaves. The Green Coronet variety is well known for its robust growth and culinary versatility. Maximizing yield is crucial for food sustainability, and predicting the soil’s major nutrients (nitrogen, phosphorus, and potassium) is essential to that end. Artificial intelligence is widely used for non-linear prediction, increasingly paired with explainability. This research assessed the prediction of soil nitrogen, phosphorus, and potassium levels with explainable machine learning methods over an 85-day cabbage growth period. Experiments were conducted on cabbage plants grown in the central hills of Sri Lanka. SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) were used to clarify the model’s predictions. SHAP analysis showed that high feature values of the number of days and the plant’s average leaf area negatively impacted the nutrient predictions, while high feature values of leaf count and plant height had a positive effect. To validate the results, 15 greenhouse-grown cabbage plants at various growth stages were selected, and their measured nitrogen, phosphorus, and potassium levels were compared with the predicted values. These insights help refine predictive models and optimize agricultural practices. A user-friendly application was developed to improve the accessibility and interpretation of the predictions, giving end-users effective access to the model’s predictive capabilities.
  • Publication (Open Access)
    Explainable Machine Learning (XML) to predict external wind pressure of a low-rise building in urban-like settings
    (2022-07) Meddage, D. P. P; Ekanayake, I; Weerasuriya, A; Lewangamage, C. S; Ramanayaka, C. D. E; Miyanawala, T
    This study used explainable machine learning (XML), a new branch of machine learning (ML), to elucidate how ML models make predictions. Three tree-based regression models, Decision Tree (DT), Random Forest (RF), and Extreme Gradient Boost (XGB), were used to predict the normalized mean (Cp,mean), fluctuating (Cp,rms), minimum (Cp,min), and maximum (Cp,max) external wind pressure coefficients of a low-rise building with fixed dimensions in urban-like settings for several wind incidence angles. Two types of XML were used: an intrinsic explainable method, which relies on the DT structure to explain the model’s inner workings, and SHAP (SHapley Additive exPlanations), a post-hoc explanation technique applied to the structurally complex XGB. The intrinsic method proved incapable of explaining the deep tree structure of the DT, but SHAP provided valuable insights by revealing varying degrees of positive and negative contributions from certain geometric parameters, the wind incidence angle, and the density of the buildings surrounding the low-rise building. SHAP also illustrated the relationships between these factors and wind pressure, and its explanations agreed with what is generally accepted in wind engineering, supporting the causality of the ML model’s predictions.
  • Publication (Open Access)
    A novel approach to explain the black-box nature of machine learning in compressive strength predictions of concrete using Shapley additive explanations (SHAP)
    (Elsevier, 2022-04) Ekanayake, I.U; Meddage, D. P. P; Rathnayake, U
    Machine learning (ML) techniques are often employed for the accurate prediction of the compressive strength of concrete. Despite their higher accuracy, previous ML models failed to interpret the rationale behind their predictions. Model interpretability is essential to appeal to the interest of domain experts. Therefore, addressing the identified research gaps, this study proposes a way to predict the compressive strength of concrete using supervised ML algorithms (Decision Tree, Extra Trees, Adaptive Boosting (AdaBoost), Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LGBM), and Laplacian Kernel Ridge Regression (LKRR)). In addition, SHapley Additive exPlanations (SHAP), a novel black-box interpretation approach, was employed to elucidate the predictions. The comparison revealed that the tree-based algorithms and LKRR provide acceptable accuracy for compressive strength predictions. Moreover, the XGBoost and LKRR algorithms evinced superior performance (R = 0.98). According to the SHAP interpretation, XGBoost predictions capture complex relationships among the constituents. SHAP also provides unified measures of feature importance and of each variable’s impact on a prediction. Interestingly, the SHAP interpretations were in accordance with what is generally observed in the compressive behavior of concrete, thus validating the causality of the ML predictions.
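All three publications above rest on SHAP, which attributes a model’s prediction to its input features via Shapley values. As an illustrative sketch only, the exact Shapley attribution for a small model can be computed by enumerating feature coalitions; the toy model, feature names, coefficients, baseline, and instance below are invented for the sketch and are not values from any of the papers.

```python
from itertools import combinations
from math import factorial

# Toy additive model standing in for a trained nutrient predictor.
# Features, coefficients, baseline, and instance are invented for this sketch.
def predict(leaf_count, plant_height, leaf_area):
    return 2.0 * leaf_count + 1.5 * plant_height - 0.5 * leaf_area

FEATURES = ["leaf_count", "plant_height", "leaf_area"]
baseline = {"leaf_count": 0.0, "plant_height": 0.0, "leaf_area": 0.0}
instance = {"leaf_count": 4.0, "plant_height": 10.0, "leaf_area": 6.0}

def coalition_value(subset):
    # Model output with the features in `subset` taken from the instance
    # and all other features held at the baseline.
    x = {f: (instance[f] if f in subset else baseline[f]) for f in FEATURES}
    return predict(x["leaf_count"], x["plant_height"], x["leaf_area"])

def shapley(feature):
    # Exact Shapley value: weighted average of the feature's marginal
    # contribution over every coalition of the remaining features.
    n = len(FEATURES)
    others = [f for f in FEATURES if f != feature]
    total = 0.0
    for k in range(n):
        for subset in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            marginal = (coalition_value(set(subset) | {feature})
                        - coalition_value(set(subset)))
            total += weight * marginal
    return total

phi = {f: shapley(f) for f in FEATURES}
```

For this additive toy model, each feature’s attribution reduces to its coefficient times its deviation from the baseline, and the attributions sum to the prediction minus the baseline output (the efficiency property that SHAP summary plots rely on). In practice the `shap` package computes these values efficiently for tree ensembles rather than by brute-force enumeration.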
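LIME, used alongside SHAP in the first publication, explains a single prediction by fitting a weighted linear surrogate to the black-box model in the neighborhood of the instance. The sketch below illustrates that idea with a made-up two-feature black-box function, instance, sample count, and kernel width; none of these values come from the paper.

```python
import random
from math import exp

# Toy black-box model standing in for a trained regressor (invented for this sketch).
def black_box(z1, z2):
    return z1 * z1 + 3.0 * z2

x0 = (2.0, 1.0)  # the instance whose prediction we want to explain

random.seed(0)
samples = []
for _ in range(500):
    # Perturb the instance and weight each sample by its proximity to x0.
    z1 = x0[0] + random.gauss(0.0, 0.1)
    z2 = x0[1] + random.gauss(0.0, 0.1)
    dist2 = (z1 - x0[0]) ** 2 + (z2 - x0[1]) ** 2
    w = exp(-dist2 / 0.02)  # exponential proximity kernel
    samples.append((z1, z2, black_box(z1, z2), w))

def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

# Weighted least squares for the surrogate y ~ a + b1*z1 + b2*z2
# via the normal equations X^T W X beta = X^T W y.
A = [[0.0] * 3 for _ in range(3)]
bvec = [0.0] * 3
for z1, z2, y, w in samples:
    f = (1.0, z1, z2)
    for i in range(3):
        for j in range(3):
            A[i][j] += w * f[i] * f[j]
        bvec[i] += w * f[i] * y

coefs = solve3(A, bvec)  # [intercept, local slope for z1, local slope for z2]
```

The fitted slopes approximate the black box’s local gradient at the instance (here roughly 4 for z1 and 3 for z2), which is the interpretable summary LIME reports. The real `lime` package adds sparsity and handles categorical features, which this sketch omits.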