Research Publications

Permanent URI for this community: https://rda.sliit.lk/handle/123456789/4194

This main community comprises five sub-communities, each representing the academic contributions of SLIIT-affiliated personnel.

Search Results

  • Publication (Open Access)
    A novel application with explainable machine learning (SHAP and LIME) to predict soil N, P, and K nutrient content in cabbage cultivation
    (Elsevier B.V., 2025-03-06) Abekoon, T; Sajindra, H; Rathnayake, N; Ekanayake, I.U.; Jayakody, A; Rathnayake, U
    Cabbage (Brassica oleracea var. capitata) is commonly cultivated at high altitudes and features dense, tightly packed leaves. The Green Coronet variety is well-known for its robust growth and culinary versatility. Maximizing yield is crucial for food sustainability, and predicting the soil’s major nutrients (nitrogen, phosphorus, and potassium) is essential to that end. Artificial intelligence is widely used for non-linear predictions with explainability. This research assessed the predictive capabilities of soil nitrogen, phosphorus, and potassium levels with explainable machine learning methods over an 85-day cabbage growth period. Experiments were conducted on cabbage plants grown in the central hills of Sri Lanka. SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) were used to clarify the model’s predictions. SHAP analysis showed that high feature values of the number of days and plant average leaf area negatively impacted the nutrient predictions, while high feature values of leaf count and plant height had a positive effect on them. To validate the results, 15 greenhouse-grown cabbage plants at various growth stages were selected; their nitrogen, phosphorus, and potassium levels were measured and compared with the predicted values. These insights help refine predictive models and optimize agricultural practices. A user-friendly application was developed to improve the accessibility and interpretation of the predictions, giving end-users an effective platform for applying the model’s predictive capabilities.
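    The SHAP attributions described in this abstract rest on the Shapley value: a prediction is split among input features by averaging each feature's marginal contribution over all feature orderings. The sketch below illustrates that idea on an invented toy model; the feature names mirror the paper's inputs, but the coefficients, baseline, and sample values are assumptions, not the study's trained model or data.

    ```python
    from itertools import permutations

    # Hypothetical toy model relating plant traits to predicted soil N (mg/kg).
    # Coefficients are invented for illustration only.
    def predict(days, leaf_area, leaf_count, height):
        return 50.0 - 0.2 * days - 0.05 * leaf_area + 1.5 * leaf_count + 0.8 * height

    FEATURES = ["days", "leaf_area", "leaf_count", "height"]
    BASELINE = {f: 0.0 for f in FEATURES}  # reference input ("absent" features)

    def shapley_values(x):
        """Exact Shapley values by averaging marginal contributions over
        all feature orderings (feasible here: only 4 features, 24 orders)."""
        contrib = {f: 0.0 for f in FEATURES}
        orders = list(permutations(FEATURES))
        for order in orders:
            current = dict(BASELINE)
            prev = predict(**current)
            for f in order:
                current[f] = x[f]          # reveal this feature's true value
                now = predict(**current)
                contrib[f] += now - prev   # marginal contribution in this order
                prev = now
        return {f: c / len(orders) for f, c in contrib.items()}

    x = {"days": 60.0, "leaf_count": 12.0, "leaf_area": 400.0, "height": 25.0}
    phi = shapley_values(x)
    ```

    With these invented coefficients the attributions for "days" and "leaf_area" come out negative while "leaf_count" and "height" come out positive, matching the qualitative pattern the abstract reports; the attributions also sum exactly to the gap between the prediction and the baseline prediction (SHAP's efficiency property).
    
    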
  • Publication (Open Access)
    Evaluating expressway traffic crash severity by using logistic regression and explainable & supervised machine learning classifiers
    (Elsevier, 2023-07-09) Shashiprabha, M.J.P.S; Kelum, S.R.M; Meddage, D.P.P; Pasindu, H.R; Gomes, P.I.A
    The number of expressway road accidents in Sri Lanka has significantly increased (by 20%) due to the expansion of the transport network and high traffic volume. It is crucial to identify the causes of these crashes for effective road safety management. However, traditional statistical methods may be insufficient due to their inherent assumptions. This study utilized explainable machine learning to investigate the factors that affect the severity of traffic crashes on expressways. The study evaluated two groups of traffic crashes: fatal or severe crashes, and other crashes that included non-severe injuries or only property damage. Five factors that contribute to crashes were analyzed: road surface condition, road alignment, location, weather condition, and lighting effect. Four machine learning models (Random Forest (RF), Decision Tree (DT), Extreme Gradient Boosting (XGB), and K-Nearest Neighbor (KNN)) were developed and compared with Logistic Regression (LR) using 223 training and 56 testing data instances. The study revealed that the machine learning algorithms provided more accurate predictions than the LR model. To explain the machine learning models, Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) were used. These methods revealed that all five features decreased the likelihood of fatal accidents. SHAP and LIME explanations confirmed the known interactions between factors influencing crash severity in expressway operational conditions. These explanations increase the trust of end-users and domain experts in machine learning models. Furthermore, the study concluded that using explainable machine learning methods is more effective than traditional regression analysis in evaluating safety performance. Additionally, the results of the study can be utilized to improve road safety by providing accurate explanations of black-box models for decision-making processes.
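    The comparison this abstract describes, tree-based classifiers versus a logistic regression baseline on a small train/test split, can be sketched as follows. The data here is synthetic: five invented 0/1 factor flags and an invented severity rule stand in for the study's crash records, though the 223/56 split sizes echo the abstract.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 279  # 223 training + 56 testing instances, as in the study

    # Synthetic stand-ins for the five crash factors (road surface, alignment,
    # location, weather, lighting), coded as binary adverse/normal flags.
    X = rng.integers(0, 2, size=(n, 5)).astype(float)
    # Invented rule: severity risk rises with adverse surface, weather, lighting.
    logits = 1.2 * X[:, 0] + 0.8 * X[:, 3] + 1.0 * X[:, 4] - 1.5
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=56, random_state=0)

    lr = LogisticRegression().fit(X_tr, y_tr)
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

    acc_lr = lr.score(X_te, y_te)  # held-out accuracy of the LR baseline
    acc_rf = rf.score(X_te, y_te)  # held-out accuracy of the tree ensemble
    ```

    On real crash data a fitted tree ensemble could then be passed to SHAP or LIME for the per-prediction explanations the study relies on.
    
    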
  • Publication (Open Access)
    Explainable Machine Learning (XML) to predict external wind pressure of a low-rise building in urban-like settings
    (2022-07) Meddage, D. P. P; Ekanayake, I; Weerasuriya, A; Lewangamage, C. S; Ramanayaka, C. D. E; Miyanawala, T
    This study used explainable machine learning (XML), a new branch of Machine Learning (ML), to elucidate how ML models make predictions. Three tree-based regression models, Decision Tree (DT), Random Forest (RF), and Extreme Gradient Boost (XGB), were used to predict the normalized mean (Cp,mean), fluctuating (Cp,rms), minimum (Cp,min), and maximum (Cp,max) external wind pressure coefficients of a low-rise building with fixed dimensions in urban-like settings for several wind incidence angles. Two types of XML were used: first, an intrinsic explainable method, which relies on the DT structure to explain the inner workings of the model; and second, SHAP (SHapley Additive exPlanations), a post-hoc explanation technique applied to the structurally complex XGB. The intrinsic explainable method proved incapable of explaining the deep tree structure of the DT, but SHAP provided valuable insights by revealing various degrees of positive and negative contributions of certain geometric parameters, the wind incidence angle, and the density of buildings that surround a low-rise building. SHAP also illustrated the relationships between the above factors and wind pressure, and its explanations were in line with what is generally accepted in wind engineering, thus confirming the causality of the ML model’s predictions.
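    The "intrinsic" explainability this abstract contrasts with SHAP comes from reading the split rules directly off a decision tree, which stays legible only while the tree is shallow. A minimal sketch of that idea, using invented surrogate inputs (wind incidence angle and surrounding building density) and an invented toy relationship rather than the study's wind-tunnel data:

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor, export_text

    rng = np.random.default_rng(1)
    # Invented surrogate inputs: wind incidence angle (deg) and surrounding
    # building density (plan-area ratio); target is a mean pressure coefficient.
    angle = rng.uniform(0.0, 90.0, 200)
    density = rng.uniform(0.0, 0.6, 200)
    X = np.column_stack([angle, density])
    # Toy relationship (not the paper's data): suction grows with density.
    cp_mean = -0.3 - 0.5 * density + 0.002 * angle + rng.normal(0, 0.02, 200)

    # A shallow tree is intrinsically explainable: its split thresholds can be
    # read off as if-then rules, which is what the DT-based XML method does.
    tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, cp_mean)
    rules = export_text(tree, feature_names=["angle", "density"])
    print(rules)
    ```

    A deep, accurate tree turns this rule dump into hundreds of branches, which is why the study falls back on post-hoc SHAP for the complex XGB model.
    
    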
  • Publication (Open Access)
    A novel approach to explain the black-box nature of machine learning in compressive strength predictions of concrete using Shapley additive explanations (SHAP)
    (Elsevier, 2022-04) Ekanayake, I.U; Meddage, D. P. P; Rathnayake, U
    Machine learning (ML) techniques are often employed for the accurate prediction of the compressive strength of concrete. Despite higher accuracy, previous ML models failed to interpret the rationale behind predictions. Model interpretability is essential to appeal to the interest of domain experts. To address the identified research gaps, this study proposes a way to predict the compressive strength of concrete using supervised ML algorithms (Decision Tree, Extra Tree, Adaptive Boosting (AdaBoost), Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LGBM), and Laplacian Kernel Ridge Regression (LKRR)). In addition, SHapley Additive exPlanations (SHAP), a novel black-box interpretation approach, was employed to elucidate the predictions. The comparison revealed that tree-based algorithms and LKRR provide acceptable accuracy for compressive strength predictions. Moreover, the XGBoost and LKRR algorithms evinced superior performance (R = 0.98). According to the SHAP interpretation, XGBoost predictions capture complex relationships among the constituents. On the other hand, SHAP provides unified measures of feature importance and of each variable's impact on a prediction. Interestingly, the SHAP interpretations were in accordance with what is generally observed in the compressive behavior of concrete, thus validating the causality of the ML predictions.
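    Of the algorithms listed above, Laplacian Kernel Ridge Regression is the least commonly seen; it is ordinary kernel ridge regression with the Laplacian kernel exp(-γ‖x−x′‖₁). The sketch below implements it directly in NumPy on invented mix-design features (cement, water, age, all scaled) and an invented strength function, not the paper's concrete dataset or hyperparameters.

    ```python
    import numpy as np

    def laplacian_kernel(A, B, gamma=1.0):
        """K[i, j] = exp(-gamma * ||A[i] - B[j]||_1) (L1-distance kernel)."""
        d = np.abs(A[:, None, :] - B[None, :, :]).sum(axis=2)
        return np.exp(-gamma * d)

    # Invented, scaled mix-design features: cement, water, age.
    rng = np.random.default_rng(2)
    X = rng.uniform(0.0, 1.0, size=(80, 3))
    # Toy strength function (MPa), illustrative only.
    y = 30.0 + 25.0 * X[:, 0] - 15.0 * X[:, 1] + 10.0 * np.log1p(X[:, 2])

    lam = 1e-6  # ridge regularisation strength
    K = laplacian_kernel(X, X)
    # Closed-form kernel ridge fit: alpha = (K + lam * I)^(-1) y
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

    def predict(X_new):
        return laplacian_kernel(X_new, X) @ alpha

    # In-sample fit should be tight for a small regularisation term.
    r = np.corrcoef(predict(X), y)[0, 1]
    ```

    Larger `lam` trades this near-interpolating fit for smoother, better-regularised predictions; in practice `gamma` and `lam` would be tuned by cross-validation, as with the other algorithms in the comparison.
    
    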