Please use this identifier to cite or link to this item:
https://rda.sliit.lk/handle/123456789/1867
Title: Analysis of Human Interpretability in Document Classification
Authors: Kumari, P. K. Suriyaa
Issue Date: 2018
Abstract: With the widespread use of computers, the volume of textual data generated, exchanged, stored and accessed has grown massively, becoming one of the richest sources of data for organizations. As a result, people increasingly use natural language processing applications, built on machine learning models, to categorize this large volume of data in an efficient and accurate manner. In Natural Language Processing (NLP) applications, most of which follow supervised learning techniques, automatic document classification models are developed to perform content-based assignment, where materials are assigned to predefined categories. This makes it easier to find relevant information at the right time and to filter and route documents directly to the correct users. Mostly, these learning models operate in a black-box manner, where there is no way to interpret how the model decided which class an instance should be assigned to. Understanding the reasons behind these predictions is very important for trusting such learning models in real applications. This thesis presents experimental work carried out with a set of text classifiers to interpret their predictions, so that any classifier can be evaluated based on how well it supports the classification purpose.
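The thesis does not name its classifiers or interpretation methods here; as a minimal, hypothetical sketch of the idea the abstract describes, the snippet below trains a linear text classifier (TF-IDF features with logistic regression, via scikit-learn) whose per-term weights can be read directly. Inspecting those weights, and the per-term contributions to a single prediction, is one simple way a classifier's class assignments can be made interpretable rather than black-box. The toy corpus and category names are illustrative only.

```python
# Minimal sketch (hypothetical; the thesis does not specify its models):
# a linear text classifier whose learned weights are directly inspectable,
# illustrating one simple form of prediction interpretability.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus with two predefined categories.
docs = [
    "the team won the match after a late goal",
    "players trained hard before the championship game",
    "the stock market fell as interest rates climbed",
    "investors sold shares amid fears of inflation",
]
labels = ["sports", "sports", "finance", "finance"]

# Content-based features: TF-IDF weights over the vocabulary.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)

# A linear model assigns one weight per term; positive weights push
# toward clf.classes_[1], negative weights toward clf.classes_[0].
clf = LogisticRegression()
clf.fit(X, labels)

# Global interpretation: rank terms by their learned weight.
terms = vectorizer.get_feature_names_out()
weights = clf.coef_[0]
order = np.argsort(weights)
print(f"terms indicating '{clf.classes_[0]}':", [terms[i] for i in order[:5]])
print(f"terms indicating '{clf.classes_[1]}':", [terms[i] for i in order[-5:]])

# Local interpretation: explain one prediction by scoring the terms
# actually present in the document by their weighted contribution.
new_doc = "the club signed a new player before the game"
x = vectorizer.transform([new_doc]).toarray()[0]
print("prediction:", clf.predict(vectorizer.transform([new_doc]))[0])
contributions = x * weights
for i in sorted(np.nonzero(x)[0], key=lambda i: -abs(contributions[i])):
    print(f"  {terms[i]}: {contributions[i]:+.3f}")
```

With a linear model like this, an explanation falls out of the model itself; for the black-box models the abstract refers to, model-agnostic techniques would be needed instead, which is the kind of evaluation the thesis's experiments address.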
URI: http://rda.sliit.lk/handle/123456789/1867
Appears in Collections: 2018 MSc. in IT
Files in This Item:
File | Description | Size | Format
---|---|---|---
Analysis_merged.pdf | | 6.27 MB | Adobe PDF