Research Papers - Dept of Software Engineering
Permanent URI for this collection: https://rda.sliit.lk/handle/123456789/1022
2 results
Publication (Open Access): Source Code based Approaches to Automate Marking in Programming Assignments (Science and Technology Publications, 2021)
Kuruppu, T; Tharmaseelan, J; Silva, C; Samaratunge Arachchillage, U. S. S; Manathunga, K; Reyal, S; Kodagoda, N

In this technological era, demand for programming modules among university students has grown significantly. As enrollment figures grow, manual assessment and evaluation becomes a tedious and error-prone activity, so automated marking has become a fast-growing necessity. To that end, this review paper surveys the literature on automated assessment of coding exercises along four dimensions: machine learning approaches, source graph generation, domain-specific languages, and static code analysis. These approaches are reviewed on three main aspects: accuracy, efficiency, and user experience. The paper concludes with a series of recommendations for standardizing the evaluation and benchmarking of marking-automation tools, giving future researchers a strong empirical footing in the domain and thereby enabling further advancements in the field.

Publication (Embargo): Revisit of Automated Marking Techniques for Programming Assignments (IEEE, 2021-04-21)
Tharmaseelan, J; Manathunga, K; Reyal, S; Kasthurirathna, D; Thurairasa, T

Owing to the popularity of the computer science field, many students study programming. With large numbers of student enrollments in undergraduate courses, assessing programming submissions is becoming an increasingly tedious task that demands a high cognitive load and a considerable amount of time and effort. Programming assignments usually contain algorithmic implementations written in specific programming languages to assess students' logical thinking and problem-solving skills. Evaluators use either a test-case-driven or a source-code-analysis approach when evaluating programming assignments. Because many marking rubrics and evaluation criteria award partial marks to programs that are not syntactically correct, evaluators must also analyze the source code during evaluation, which adds further burden in time and effort. Hence, this research studies existing automatic source-code-analysis mechanisms, specifically the use of deep learning approaches in the domain of automatic assessment. Such knowledge may lead to novel automated marking models that use past student data and deep learning techniques to implement automatic assessment of programming assignments irrespective of the programming language or the algorithm implemented.
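The test-case-driven approach mentioned in the second abstract can be sketched as follows. This is a minimal illustration only; the names (`TestCase`, `grade_submission`) and the weighting scheme are hypothetical and do not come from either paper:

```python
# Minimal sketch of test-case-driven marking: a submitted function is run
# against weighted test cases and the mark is the weighted pass fraction.
# All names and weights here are illustrative, not from the cited papers.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TestCase:
    args: tuple       # inputs fed to the submitted function
    expected: object  # expected return value
    weight: float     # marks awarded if this case passes

def grade_submission(func: Callable, cases: list[TestCase]) -> float:
    """Return the fraction of weighted test cases the submission passes."""
    total = sum(c.weight for c in cases)
    earned = 0.0
    for case in cases:
        try:
            if func(*case.args) == case.expected:
                earned += case.weight
        except Exception:
            pass  # a crashing submission earns nothing for this case
    return earned / total if total else 0.0

# Example: grading a student's implementation of absolute value.
def student_abs(x):
    return x if x > 0 else -x

cases = [TestCase((5,), 5, 1.0), TestCase((-3,), 3, 1.0), TestCase((0,), 0, 1.0)]
print(grade_submission(student_abs, cases))  # 1.0
```

Note that a purely test-case-driven marker like this gives zero credit to code that does not run, which is exactly the limitation that motivates the source-code-analysis approaches both papers review.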
