Combining Wikipedia-Based Concept Models for Cross-Language Retrieval
As a low-cost, up-to-date resource, Wikipedia has recently gained attention as a means of cross-language bridging for information retrieval. Contrary to a previous study, we show that standard Latent Dirichlet Allocation (LDA) can extract cross-language information valuable for IR simply by normalizing the training data. Furthermore, we show that LDA and Explicit Semantic Analysis (ESA) complement each other, yielding significant improvements when combined. Such a combination can significantly improve retrieval based on machine translation, especially when query translations contain errors. The experiments were performed on the Multext JOC corpus and a CLEF dataset.