Research Papers - Dept of Software Engineering
Permanent URI for this collection: https://rda.sliit.lk/handle/123456789/1022
Publication Embargo
Automated Programming Assignment Marking Tool (IEEE, 2022-07-18)
Vimalaraj, H; Thenuwara, T. B. K. P.; Wijekoon, V. U; Sathurjan, T; Reyal, S; Kuruppu, T. A; Tharmaseelan, J

Due to the enrolment of very large numbers of students in programming modules, marking programming assignments has become a tedious and time-consuming process. Programming assignments mainly test a student's ability to think logically and approach a solution to a problem, so simply running the script and checking the output is not sufficient to award a grade. Marking criteria for programming modules award partial marks to programs that are not syntactically correct but still show a sound approach; the code therefore has to be read line by line and the implementation checked carefully. Source code analysis has thus become mandatory, placing immense pressure and a heavy workload on the staff who mark these programs. Under these conditions, manual marking can lead to inconsistency, bias, wasted time, and reduced accuracy. The main objective of this research is to minimize these problems by implementing an automated programming module marking tool that converts source code to parse trees, extracts features, generates feature vectors, compares them, and produces a mark along with feedback and a plagiarism report. The solution focuses on automating marking through source code analysis and plagiarism checking.

Publication Open Access
Source Code based Approaches to Automate Marking in Programming Assignments (Science and Technology Publications, 2021)
Kuruppu, T; Tharmaseelan, J; Silva, C; Samaratunge Arachchillage, U. S. S; Manathunga, K; Reyal, S; Kodagoda, N; Jayalath, T

In the current technological era, demand for programming modules among university students has grown significantly. When enrolment figures grow exponentially, manual assessment and evaluation become tedious and error-prone, so marking automation has become a fast-growing necessity. In this review paper, the authors present the literature on automated assessment of coding exercises and analyse it along four dimensions: Machine Learning approaches, Source Graph Generation, Domain Specific Languages, and Static Code Analysis. These approaches are reviewed on three main aspects: accuracy, efficiency, and user experience. The paper finally describes a series of recommendations for standardizing the evaluation and benchmarking of marking automation tools, giving future researchers a strong empirical footing in the domain and thereby supporting further advancements in the field.

Publication Embargo
Revisit of Automated Marking Techniques for Programming Assignments (IEEE, 2021-04-21)
Tharmaseelan, J; Manathunga, K; Reyal, S; Kasthurirathna, D; Thurairasa, T

Due to the popularity of computer science, many students study programming. With large numbers of student enrolments in undergraduate courses, assessing programming submissions is becoming an increasingly tedious task that requires high cognitive load and a considerable amount of time and effort. Programming assignments usually contain algorithmic implementations written in specific programming languages to assess students' logical thinking and problem-solving skills. Evaluators use either a test-case-driven or a source-code-analysis approach when evaluating programming assignments. Given that many marking rubrics and evaluation criteria award partial marks to programs that are not syntactically correct, evaluators are required to analyse the source code during evaluation, an extra step that adds further burden and consumes more time and effort. Hence, this research work attempts to study existing automatic source code analysis mechanisms, specifically the use of deep learning approaches in the domain of automatic assessment. Such knowledge may lead to novel automated marking models built from past student data, applying deep learning techniques to assess programming assignments automatically irrespective of the programming language or the algorithm implemented.
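The marking pipeline described in the first abstract (source code to parse tree, parse tree to feature vector, then comparison to produce a mark) can be sketched in a few lines of Python. This is a minimal illustration only: it assumes AST node-type counts as the features and cosine similarity as the comparison, since the abstracts do not specify the actual feature set or similarity measure used by the tool.

```python
import ast
from collections import Counter
from math import sqrt

def feature_vector(source: str) -> Counter:
    """Parse source into an AST and count node types as a crude feature vector."""
    tree = ast.parse(source)
    return Counter(type(node).__name__ for node in ast.walk(tree))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def mark(student_src: str, model_src: str, max_marks: int = 100) -> int:
    """Award marks proportional to structural similarity with a model answer."""
    sim = cosine_similarity(feature_vector(student_src), feature_vector(model_src))
    return round(sim * max_marks)

# Hypothetical model answer and student submission for demonstration.
model = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s\n"
student = (
    "def total(nums):\n"
    "    result = 0\n"
    "    for n in nums:\n"
    "        result = result + n\n"
    "    return result\n"
)
print(mark(student, model))
```

Because the comparison operates on parse-tree structure rather than raw text, renamed variables do not change the score, which is also why the same representation is useful for the plagiarism check the abstract mentions; a production tool would need richer features (tree shape, control-flow context) than bare node counts.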
