Representation Models for Text Classification: a comparative analysis over three Web document types
George Giannakopoulos, Petra Mavridi, George Paliouras, George Papadakis and Konstantinos Tserpes
In: WIMS 2012, 13-15 Jun 2012, Craiova, Romania.
Text classification constitutes a popular task in Web research, with applications ranging from spam filtering to sentiment analysis. To address it, patterns of co-occurring words or characters are typically extracted from the textual content of Web documents. However, not all documents are of the same quality; for example, the curated content of news articles usually entails lower levels of noise than the user-generated content of blog posts and other social media.
In this paper, we provide some insight and a preliminary study on a tripartite categorization of Web documents, based on inherent document characteristics. We claim, and support experimentally, that each category calls for different classification settings with respect to the representation model: topic classification over these different document types yields markedly different results per type. In addition, we consider a novel approach that improves the performance of topic classification across all types of Web documents, namely the n-gram graphs.
This model goes beyond the established bag-of-words representation, modelling each document as a graph. Individual graphs can be combined into a class graph, and graph similarities are then employed to position documents in the vector space and classify them. Accuracy is increased by the contextual information encapsulated in the edges of the n-gram graphs; efficiency, on the other hand, is boosted by reducing the feature space to a limited set of dimensions that depends on the number of classes rather than the size of the vocabulary. Our experimental study over three large-scale, real-world data sets validates the higher performance of n-gram graphs in all three domains of Web documents.
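The idea described above can be illustrated with a minimal sketch. This is not the paper's exact formulation: the choice of character trigrams, a co-occurrence window of 3, averaged edge weights for the class graph, and a min/max edge-weight similarity are all illustrative assumptions, and the function names are hypothetical.

```python
from collections import defaultdict

def ngram_graph(text, n=3, window=3):
    """Build a character n-gram graph: nodes are n-grams; an edge links two
    n-grams that co-occur within `window` positions, weighted by frequency."""
    grams = [text[i:i + n] for i in range(len(text) - n + 1)]
    edges = defaultdict(float)
    for i, g in enumerate(grams):
        for j in range(i + 1, min(i + 1 + window, len(grams))):
            edges[tuple(sorted((g, grams[j])))] += 1.0  # undirected edge
    return dict(edges)

def merge_graphs(graphs):
    """Combine per-document graphs into one class graph (here: mean weights)."""
    merged = defaultdict(float)
    for g in graphs:
        for edge, w in g.items():
            merged[edge] += w
    return {edge: w / len(graphs) for edge, w in merged.items()}

def value_similarity(g1, g2):
    """Edge-overlap similarity: each common edge contributes the min/max ratio
    of its weights, normalised by the size of the larger graph (range [0, 1])."""
    if not g1 or not g2:
        return 0.0
    common = set(g1) & set(g2)
    score = sum(min(g1[e], g2[e]) / max(g1[e], g2[e]) for e in common)
    return score / max(len(g1), len(g2))

def classify(text, class_graphs):
    """Map a document to the low-dimensional space of per-class similarities
    (one feature per class) and return the nearest class plus the features."""
    doc = ngram_graph(text)
    feats = {c: value_similarity(doc, g) for c, g in class_graphs.items()}
    return max(feats, key=feats.get), feats
```

Note that the resulting feature vector has as many dimensions as there are classes, which is the efficiency argument made in the abstract: the vector space no longer grows with the vocabulary.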