
Combining Concepts and Language Models for Information Access

Information retrieval has attracted increasing interest since the middle of the last century. Since its inception, much research has been devoted to finding optimal ways of representing both documents and queries, as well as to improving ways of matching one with the other. In cases where document annotations or explicit semantics are available, matching algorithms can be informed by the concept languages in which such semantics are usually defined. These algorithms are able to match queries and documents based on textual and semantic evidence.

Recent advances have enabled the use of rich query representations in the form of query language models. This, in turn, allows us to account for the language associated with concepts within the retrieval model in a principled and transparent manner. Developments in the semantic web community, such as the Linked Open Data cloud, have enabled the association of texts with concepts on a large scale. Taken together, these developments facilitate a move beyond manually assigned concepts in domain-specific contexts into the general domain.

This thesis investigates how one can improve information access by exploiting the actual use of concepts, as measured by the language that people use when they discuss them. The main contribution is a set of models and methods that enable users to retrieve and access information on a conceptual level. Through extensive evaluations, we systematically explore and thoroughly analyze the experimental results of the proposed models. Our empirical results show that a combination of top-down conceptual information and bottom-up statistical information obtains optimal performance on a variety of tasks and test collections.

See http://phdthes.is/ for more information.

  • [PDF] E. Meij, “Combining concepts and language models for information access,” PhD Thesis, 2010.
    [Bibtex]
    @phdthesis{2010:meij,
    Author = {Meij, Edgar},
    Date-Added = {2011-10-20 10:18:00 +0200},
    Date-Modified = {2011-10-22 12:23:33 +0200},
    School = {University of Amsterdam},
    Title = {Combining Concepts and Language Models for Information Access},
    Year = {2010}}

 


Conceptual language models for domain-specific retrieval

Over the years, various meta-languages have been used to manually enrich documents with conceptual knowledge of some kind. Examples include keyword assignment to citations or, more recently, tags to websites. In this paper we propose generative concept models as an extension to query modeling within the language modeling framework, which leverages these conceptual annotations to improve retrieval. By means of relevance feedback the original query is translated into a conceptual representation, which is subsequently used to update the query model.
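
To make the translation step concrete, here is a minimal Python sketch of the general idea, assuming relevance feedback has already produced a set of concept-annotated feedback documents; the data structures, function names, and the interpolation parameter lam are illustrative, not the paper's actual implementation:

    from collections import Counter, defaultdict

    def conceptual_query_model(query_terms, feedback_docs, concept_term_models, lam=0.5):
        # Step 1: estimate P(c|Q) from the concepts annotating the feedback documents.
        concept_counts = Counter()
        for doc in feedback_docs:
            concept_counts.update(doc["concepts"])
        total = sum(concept_counts.values())
        p_c = {c: n / total for c, n in concept_counts.items()}

        # Step 2: generate terms from those concepts: P_gen(t) = sum_c P(t|c) P(c|Q).
        gen = defaultdict(float)
        for c, pc in p_c.items():
            for t, pt in concept_term_models[c].items():
                gen[t] += pt * pc

        # Step 3: interpolate with the maximum-likelihood query model to
        # obtain the updated query model.
        ml = Counter(query_terms)
        p_ml = {t: n / len(query_terms) for t, n in ml.items()}
        return {t: lam * p_ml.get(t, 0.0) + (1 - lam) * gen.get(t, 0.0)
                for t in set(p_ml) | set(gen)}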

Extensive experimental work on five test collections in two domains shows that our approach gives significant improvements in terms of recall, initial precision and mean average precision with respect to a baseline without relevance feedback. On one test collection, it is also able to outperform a text-based pseudo-relevance feedback approach based on relevance models. On the other test collections it performs similarly to relevance models. Overall, conceptual language models have the added advantage of offering query and browsing suggestions in the form of conceptual annotations. In addition, the internal structure of the meta-language can be exploited to add related terms.

Our contributions are threefold. First, we conduct an extensive study of how to effectively translate a textual query into a conceptual representation. Second, we propose a method for updating a textual query model using the concepts in the conceptual representation. Finally, we provide an extensive analysis of when and how this conceptual feedback improves retrieval.

  • [PDF] [DOI] E. Meij, D. Trieschnigg, M. de Rijke, and W. Kraaij, “Conceptual language models for domain-specific retrieval,” Inf. Process. Manage., vol. 46, iss. 4, pp. 448-469, 2010.
    [Bibtex]
    @article{IPM:2010:Meij,
    Address = {Tarrytown, NY, USA},
    Author = {Meij, Edgar and Trieschnigg, Dolf and de Rijke, Maarten and Kraaij, Wessel},
    Date-Added = {2011-10-12 18:31:55 +0200},
    Date-Modified = {2011-10-12 18:31:55 +0200},
    Doi = {http://dx.doi.org/10.1016/j.ipm.2009.09.005},
    Issn = {0306-4573},
    Journal = {Inf. Process. Manage.},
    Number = {4},
    Pages = {448--469},
    Publisher = {Pergamon Press, Inc.},
    Title = {Conceptual language models for domain-specific retrieval},
    Volume = {46},
    Year = {2010},
    Bdsk-Url-1 = {http://dx.doi.org/10.1016/j.ipm.2009.09.005}}

Parsimonious Relevance Models

Relevance feedback is often applied to better capture a user’s information need. Automatically reformulating queries (or blind relevance feedback) entails looking at the terms in some set of (pseudo-)relevant documents and selecting the most informative ones with respect to the set or the collection. These terms may then be reweighted based on information pertinent to the query or the documents and—in a language modeling setting—be used to estimate a query model, P(t|θQ), i.e., a distribution over terms t for a given query Q.
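
For reference, such a query model is typically plugged into the standard KL-divergence retrieval framework (not specific to this paper), where documents are ranked by the cross entropy between the query model and a smoothed document model:

    \[
      \mathrm{score}(D;Q) \;\propto\; \sum_{t} P(t \mid \theta_Q)\, \log P(t \mid \theta_D)
    \]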

Not all of the terms obtained using blind relevance feedback are equally informative given the query, even after reweighting. Some may be common terms, whilst others may describe the general domain of interest. We hypothesize that refining the results of blind relevance feedback, using a technique called parsimonious language modeling, will improve retrieval effectiveness. Hiemstra et al. already provide a mechanism for incorporating (parsimonious) blind relevance feedback, by viewing it as a three-component mixture model of document, set of feedback documents, and collection. Our approach is more straightforward, since it considers each feedback document separately and, hence, does not require the additional mixture model parameter. To create parsimonious language models we use an EM algorithm to update the maximum-likelihood (ML) estimates. Zhai and Lafferty already proposed an approach that uses a similar EM algorithm; it differs, however, in the way the set of feedback documents is handled. Whereas we parsimonize each individual document, they apply their EM algorithm to the entire set of feedback documents.
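
A minimal sketch of the parsimonization step for a single feedback document, following the EM updates described by Hiemstra et al.; the mixing parameter lam, the iteration count, and the absence of a pruning threshold are illustrative choices, and p_collection is assumed to cover every term in the document:

    def parsimonize(doc_tf, p_collection, lam=0.1, iters=20):
        # Initialize P(t|D) with the maximum-likelihood estimate.
        total = sum(doc_tf.values())
        p_doc = {t: n / total for t, n in doc_tf.items()}
        for _ in range(iters):
            # E-step: expected term counts attributable to the document
            # model rather than the collection model.
            e = {t: doc_tf[t] * lam * p_doc[t]
                    / (lam * p_doc[t] + (1 - lam) * p_collection[t])
                 for t in doc_tf}
            # M-step: renormalize the expected counts into a distribution.
            norm = sum(e.values())
            p_doc = {t: v / norm for t, v in e.items()}
        return p_doc

In practice, terms whose probability drops below a small threshold are pruned, which is what makes the resulting models parsimonious.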

To verify our hypothesis, we use a specific instance of blind relevance feedback, namely relevance modeling (RM). We choose this particular method because it has been shown to achieve state-of-the-art retrieval performance. Relevance modeling assumes that the query and the set of documents are samples from an underlying term distribution—the relevance model. Lavrenko and Croft formulate two ways of approaching the estimation of the parameters of this model. We build upon their work and compare the results of our proposed parsimonious relevance models with RMs as well as with a query-likelihood baseline. To measure the effects in different contexts, we employ five test collections taken from the TREC-7, TREC Robust, Genomics, Blog, and Enterprise tracks and show that our proposed model improves performance in terms of mean average precision on all the topic sets over both a query-likelihood baseline and a run based on relevance models. Moreover, although blind relevance feedback is mainly a recall-enhancing technique, we observe that parsimonious relevance models (unlike their non-parsimonized counterparts) can also improve early precision and reciprocal rank of the first relevant result. Thus, our parsimonious relevance models (i) improve retrieval effectiveness in terms of MAP on all collections, (ii) significantly outperform their non-parsimonious counterparts on most measures, and (iii) have a precision-enhancing effect, unlike other blind relevance feedback methods.
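
A sketch of the RM1-style estimate of Lavrenko and Croft; plugging the parsimonized P(t|θD) from the sketch above into doc_models yields a parsimonious relevance model. The uniform document prior and the floor probability for unseen query terms are simplifying assumptions:

    import math
    from collections import defaultdict

    def relevance_model(query_terms, doc_models):
        # P(D|Q) is proportional to P(Q|theta_D) under a uniform document prior.
        w = {d: math.prod(m.get(t, 1e-9) for t in query_terms)
             for d, m in doc_models.items()}
        norm = sum(w.values())
        # P(t|R) = sum_D P(t|theta_D) P(D|Q).
        rm = defaultdict(float)
        for d, m in doc_models.items():
            for t, p in m.items():
                rm[t] += p * w[d] / norm
        return dict(rm)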

  • [PDF] E. Meij, W. Weerkamp, K. Balog, and M. de Rijke, “Parsimonious relevance models,” in Proceedings of the 31st annual international ACM SIGIR conference on research and development in information retrieval, 2008.
    [Bibtex]
    @inproceedings{SIGIR:2008:Meij-prm,
    Author = {Meij, Edgar and Weerkamp, Wouter and Balog, Krisztian and de Rijke, Maarten},
    Booktitle = {Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval},
    Date-Added = {2011-10-12 18:31:55 +0200},
    Date-Modified = {2012-10-30 08:47:44 +0000},
    Series = {SIGIR 2008},
    Title = {Parsimonious relevance models},
    Year = {2008},
    Bdsk-Url-1 = {http://doi.acm.org/10.1145/1390334.1390520}}

Parsimonious concept modeling

In many collections, documents are annotated using concepts from a structured knowledge source such as an ontology or thesaurus. Examples include the news domain, where each news item is categorized according to the nature of the event that took place, and Wikipedia, with its per-article categories. These categorizing systems originally stem from the cataloging systems used in libraries, and conceptual search is commonly used in digital library environments at the front-end to support search and navigation. In this paper we want to employ the explicit knowledge used for annotation at the back-end, not just to improve retrieval performance, but also to generate high-quality term and concept suggestions. To do so, we use the dual document representation—concepts and terms—to create a generative language model for each concept, which bridges the gap between vocabulary terms and concepts. Related work has also used textual representations to represent concepts; however, there are two important differences. First, we use statistical language modeling techniques to parametrize the concept models, by leveraging the dual representation of the documents. Second, we found that simple maximum likelihood estimation assigns too much probability mass to terms and concepts which may not be relevant to each document. Thus we apply an EM algorithm to “parsimonize” the document models.
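
A sketch of the estimation step under these assumptions: each concept model P(t|c) is built by mixing the (parsimonized) term distributions of the documents annotated with that concept. The uniform mixture over documents is a simplification, and the data structures are illustrative:

    from collections import defaultdict

    def concept_language_model(concept, annotations, doc_models):
        # Documents carrying this concept in their (manual) annotations.
        docs = [d for d, cs in annotations.items() if concept in cs]
        if not docs:
            return {}
        # Mix their parsimonized term models uniformly into P(t|c).
        model = defaultdict(float)
        for d in docs:
            for t, p in doc_models[d].items():
                model[t] += p / len(docs)
        return dict(model)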

The research questions we address are twofold: (i) how does our model perform compared to a query-likelihood baseline and to a run based on relevance models, and (ii) what is the influence of parsimonizing? To answer these questions, we use the TREC Genomics track test collections in conjunction with MEDLINE. MEDLINE contains over 16 million bibliographic records of publications from the life sciences domain, and each abstract therein has been manually indexed by trained curators, who use concepts from the MeSH (Medical Subject Headings) thesaurus. We show that our approach is able to achieve similar or better performance than relevance models, whilst at the same time providing high-quality concepts to facilitate navigation. Examples show that our parsimonious concept models generate terms that are more specific than those acquired through maximum likelihood estimates.

  • [PDF] E. Meij, D. Trieschnigg, M. de Rijke, and W. Kraaij, “Parsimonious concept modeling,” in Proceedings of the 31st annual international ACM SIGIR conference on research and development in information retrieval, 2008.
    [Bibtex]
    @inproceedings{SIGIR:2008:Meij-cm,
    Author = {Meij, Edgar and Trieschnigg, Dolf and de Rijke, Maarten and Kraaij, Wessel},
    Booktitle = {Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval},
    Date-Added = {2011-10-12 18:31:55 +0200},
    Date-Modified = {2012-10-30 08:46:38 +0000},
    Series = {SIGIR 2008},
    Title = {Parsimonious concept modeling},
    Year = {2008},
    Bdsk-Url-1 = {http://doi.acm.org/10.1145/1390334.1390519}}

Bootstrapping Language Associated with Biomedical Entities

The TREC Genomics 2007 task included recognizing topic-specific entities in the returned passages. To address this task, we have designed and implemented a novel data-driven approach by combining information extraction with language modeling techniques. Instead of using an exhaustive list of all possible instances for an entity type, we look at the language usage around each entity type and use that as a classifier to determine whether or not a piece of text discusses such an entity type. We do so by comparing it with language models of the passages. For example, given the entity type “genes”, our approach can measure the gene-iness of a piece of text.

Our algorithm works as follows. Given an entity type, it first uses Hearst patterns to extract instances of the type. To extract more instances, we look for new contextual patterns around the instances and use them as input for a bootstrapping method, in which new instances and patterns are discovered iteratively. Afterwards, all discovered instances and patterns are used to find the sentences in the collection that best match the requested entity type. A language model is then generated from these sentences and, at retrieval time, we use this model to rerank retrieved passages.
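
A rough sketch of the bootstrapping loop; the seed Hearst patterns, the one-word context windows used to induce new patterns, and the absence of any confidence filtering are all illustrative simplifications:

    import re

    def bootstrap(entity_type, sentences, rounds=3):
        # Seed Hearst patterns, e.g. "genes such as BRCA1".
        patterns = {rf"{entity_type}s such as (\w[\w-]*)",
                    rf"(\w[\w-]*) and other {entity_type}s"}
        instances = set()
        for _ in range(rounds):
            # Harvest new instances with the current pattern set.
            for s in sentences:
                for p in patterns:
                    instances.update(re.findall(p, s))
            # Induce new contextual patterns from windows around known instances.
            for s in sentences:
                for inst in list(instances):
                    m = re.search(rf"(\w+\W+){re.escape(inst)}(\W+\w+)", s)
                    if m:
                        patterns.add(re.escape(m.group(1))
                                     + r"(\w[\w-]*)" + re.escape(m.group(2)))
        return instances, patterns

The sentences matched by the final instance and pattern sets would then be pooled to estimate the entity-type language model used for reranking.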

As to the results of our submitted runs, we find that our baseline run performs well above the median of all participants’ scores. Additionally, we find that our proposed method helps most for entity types with unambiguous patterns and numerous instances.

  • [PDF] E. Meij and S. Katrenko, “Bootstrapping language associated with biomedical entities,” in The sixteenth text retrieval conference, 2008.
    [Bibtex]
    @inproceedings{TREC:2008:meij,
    Author = {Meij, E. and Katrenko, S.},
    Booktitle = {The Sixteenth Text REtrieval Conference},
    Date-Added = {2011-10-16 10:24:41 +0200},
    Date-Modified = {2012-10-30 09:23:12 +0000},
    Series = {TREC 2007},
    Title = {Bootstrapping Language Associated with Biomedical Entities},
    Year = {2008}}

Integrating Conceptual Knowledge into Relevance Models: A Model and Estimation Method

We address the issue of combining explicit background knowledge with pseudo-relevance feedback from within a document collection. To this end, we use document-level annotations in tandem with generative language models to generate terms from pseudo-relevant documents and bias the probability estimates of expansion terms in a principled manner. By applying the knowledge inherent in document annotations, we aim to control query drift and reap the benefits of automatic query expansion in terms of recall without losing precision. We consider the parameters associated with our model and describe ways of estimating these automatically. We then evaluate our model and estimation methods on two test collections, both provided by the TREC Genomics track.
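
As a minimal sketch of the biasing step, assuming a concept-based term distribution has already been estimated from the document annotations (e.g., as in the conceptual language models above); the mixing parameter beta is a stand-in for the parameters the paper estimates automatically:

    def concept_biased_query_model(rm, concept_model, beta=0.5):
        # Interpolate the pseudo-relevance-feedback term distribution (rm)
        # with the term distribution generated from document annotations,
        # anchoring expansion terms to explicit knowledge to limit query drift.
        terms = set(rm) | set(concept_model)
        return {t: beta * rm.get(t, 0.0)
                   + (1 - beta) * concept_model.get(t, 0.0)
                for t in terms}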

  • [PDF] E. Meij and M. de Rijke, “Integrating Conceptual Knowledge into Relevance Models: A Model and Estimation Method,” in Proceedings of the 1st international conference on theory of information retrieval, 2007.
    [Bibtex]
    @inproceedings{ICTIR:2007:meij,
    Author = {E. Meij and de Rijke, M.},
    Booktitle = {Proceedings of the 1st International Conference on Theory of Information Retrieval},
    Date-Added = {2011-10-12 18:31:55 +0200},
    Date-Modified = {2012-10-30 08:50:30 +0000},
    Series = {ICTIR 2007},
    Title = {{Integrating Conceptual Knowledge into Relevance Models: A Model and Estimation Method}},
    Year = {2007}}

Using Prior Information Derived from Citations in Literature Search

Researchers spend a large amount of their time searching through an ever-increasing number of scientific articles. Although users of scientific literature search engines prefer results to be ranked according to the number of citations a publication has received, it is unknown whether this notion of authoritativeness could also benefit more traditional and objective measures. Is it also an indicator of relevance, given an information need? In this paper, we examine the relationship between the citation features of a scientific article and its prior probability of actually being relevant to an information need. We propose various ways of modeling this relationship and show how this kind of contextual information can be incorporated within a language modeling framework. We experiment with three document priors, which we evaluate on three distinct sets of queries and two document collections from the TREC Genomics track. Empirical results show that two of the proposed priors can significantly improve retrieval effectiveness, measured in terms of mean average precision.
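
A sketch of how such a prior slots into query-likelihood scoring; the functional forms below are illustrative stand-ins for the three priors evaluated in the paper, and the floor probability for unseen terms is an assumption:

    import math

    def citation_prior(citations, kind="log"):
        # Hypothetical prior shapes; the paper's exact definitions differ.
        if kind == "log":
            return 1.0 + math.log(1.0 + citations)
        if kind == "linear":
            return 1.0 + citations
        return 1.0  # uniform prior

    def log_score(query_terms, doc_model, citations):
        # log P(D|Q) is rank-equivalent to log P(D) + log P(Q|theta_D).
        return math.log(citation_prior(citations)) + sum(
            math.log(doc_model.get(t, 1e-9)) for t in query_terms)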

  • [PDF] E. Meij and M. de Rijke, “Using prior information derived from citations in literature search,” in RIAO 2007, 2007.
    [Bibtex]
    @inproceedings{RIAO:2007:Meij,
    Author = {Meij, E. and de Rijke, M.},
    Booktitle = {RIAO 2007},
    Date-Added = {2011-10-13 09:05:34 +0200},
    Date-Modified = {2012-10-30 08:49:59 +0000},
    Title = {Using Prior Information Derived from Citations in Literature Search},
    Year = {2007}}

Expanding Queries Using Multiple Resources

We describe our participation in the TREC 2006 Genomics track, in which our main focus was on query expansion. We hypothesized that applying query expansion techniques would help us both to identify and retrieve synonymous terms and to cope with ambiguity. To this end, we developed several collection-specific as well as web-based strategies. We also performed post-submission experiments, in which we compare various retrieval engines, such as Lucene, Indri, and Lemur, using a simple baseline topic set. When indexing entire paragraphs as pseudo-documents, we find that Lemur is able to achieve the highest document-, passage-, and aspect-level scores, using the KL-divergence method and its default settings. Additionally, we index the collection at a lower level of granularity, by creating pseudo-documents consisting of individual sentences. When we search these instead of paragraphs in Lucene, the passage-level scores improve considerably. Finally, we note that stemming improves overall scores by at least 10%.

  • [PDF] E. Meij, M. Jansen, and M. de Rijke, “Expanding queries using multiple resources (the AID group at TREC 2006: Genomics track),” in The fifteenth text retrieval conference, 2007.
    [Bibtex]
    @inproceedings{TREC:2006:meij,
    Author = {Meij, E. and Jansen, M. and de Rijke, M.},
    Booktitle = {The Fifteenth Text REtrieval Conference},
    Date-Added = {2011-10-12 23:24:14 +0200},
    Date-Modified = {2012-10-30 09:23:12 +0000},
    Series = {TREC 2006},
    Title = {Expanding Queries Using Multiple Resources (The {AID} Group at {TREC} 2006: Genomics Track)},
    Year = {2007}}

Combining Thesauri-based Methods for Biomedical Retrieval

This paper describes our participation in the TREC 2005 Genomics track. We took part in the ad hoc retrieval task and aimed at integrating thesauri into the retrieval model. We developed three thesauri-based methods, two of which made use of the existing MeSH thesaurus. One method uses blind relevance feedback on MeSH terms; the second uses an index of the MeSH thesaurus for query expansion. The third method makes use of a dynamically generated lookup list, by which acronyms and synonyms can be inferred. We show that, despite the relatively minor improvements in retrieval performance of the individually applied methods, a combination works best and is able to deliver significant improvements over the baseline.
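
The abstract does not spell out the combination method, so the following CombSUM-style fusion over min-max-normalized run scores is purely illustrative of how the three runs could be combined; the weights and the normalization are assumptions:

    from collections import defaultdict

    def combine_runs(runs, weights=None):
        # runs: list of dicts mapping document id -> retrieval score.
        weights = weights or [1.0] * len(runs)
        combined = defaultdict(float)
        for run, w in zip(runs, weights):
            lo, hi = min(run.values()), max(run.values())
            for doc, s in run.items():
                combined[doc] += w * (s - lo) / ((hi - lo) or 1.0)
        return sorted(combined.items(), key=lambda x: -x[1])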

  • [PDF] E. Meij, L. H. L. IJzereef, L. A. Azzopardi, J. Kamps, M. de Rijke, M. Voorhees, and L. P. Buckland, “Combining thesauri-based methods for biomedical retrieval,” in The fourteenth text retrieval conference, 2006.
    [Bibtex]
    @inproceedings{TREC:2005:meij,
    Author = {Meij, E. and IJzereef, L.H.L. and Azzopardi, L.A. and Kamps, J. and de Rijke, M. and Voorhees, M. and Buckland, L.P.},
    Booktitle = {The Fourteenth Text REtrieval Conference},
    Date-Added = {2011-10-12 23:16:44 +0200},
    Date-Modified = {2012-10-30 09:23:12 +0000},
    Series = {TREC 2005},
    Title = {Combining Thesauri-based Methods for Biomedical Retrieval},
    Year = {2006}}