The University of Amsterdam (ILPS) at INEX 2008

We describe our participation in the INEX 2008 Entity Ranking and Link-the-Wiki tracks. We provide a detailed account of the ideas underlying our approaches to these tasks. For the Link-the-Wiki track, we also report on the results and findings so far.

  • [PDF] W. Weerkamp, J. He, K. Balog, and E. Meij, “The University of Amsterdam (ILPS) at INEX 2008,” in INEX 2008 Workshop Pre-Proceedings, Dagstuhl, 2008.
    [Bibtex]
    @inproceedings{INEX-WS:2008:weerkamp,
    Abstract = {We describe our participation in the INEX 2008 Entity Ranking and Link-the-Wiki tracks. We provide a detailed account of the ideas underlying our approaches to these tasks. For the Link-the-Wiki track, we also report on the results and findings so far.},
    Address = {Dagstuhl},
    Author = {Weerkamp, W. and He, J. and Balog, K. and Meij, E.},
    Booktitle = {INEX 2008 Workshop Pre-Proceedings},
    Date-Added = {2011-10-16 10:36:58 +0200},
    Date-Modified = {2012-10-28 17:30:53 +0000},
    Title = {{The University of Amsterdam (ILPS) at INEX 2008}},
    Year = {2008}}

The University of Amsterdam at the CLEF 2008 Domain Specific Track – Parsimonious Relevance and Concept Models

We describe our participation in the CLEF 2008 Domain Specific track. The research questions we address are threefold: (i) what are the effects of estimating and applying relevance models to the domain-specific collection used at CLEF 2008, (ii) what are the results of parsimonizing these relevance models, and (iii) what are the results of applying concept models for blind relevance feedback? Parsimonization is a technique by which the term probabilities in a language model may be re-estimated based on a comparison with a reference model, making the resulting model sparser and more focused. Concept models are distributions over vocabulary terms, based on the language associated with concepts in a thesaurus or ontology, and are estimated using the documents that are annotated with concepts. Concept models may be used for blind relevance feedback by first translating a query to concepts and then back to query terms. We find that applying relevance models helps significantly for the current test collection, in terms of both mean average precision and early precision. Moreover, parsimonizing the relevance models helps mean average precision on title-only queries and early precision on title+narrative queries. Our concept models are able to significantly outperform a baseline query-likelihood run, in terms of both mean average precision and early precision, on both title-only and title+narrative queries.
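
To make the feedback step concrete, the sketch below translates a query into a distribution over concepts and back into vocabulary terms. It assumes two probability tables, P(concept|term) and P(term|concept), estimated beforehand from concept-annotated documents; the names and the top-k truncation are illustrative, not the paper's exact procedure.

  from collections import defaultdict

  def concept_feedback(query_terms, p_concept_given_term, p_term_given_concept, k=10):
      # Step 1: translate the query into a distribution over concepts.
      p_c = defaultdict(float)
      for qt in query_terms:
          for c, p in p_concept_given_term.get(qt, {}).items():
              p_c[c] += p / len(query_terms)
      # Step 2: translate the concepts back into vocabulary terms.
      p_t = defaultdict(float)
      for c, pc in p_c.items():
          for t, pt in p_term_given_concept.get(c, {}).items():
              p_t[t] += pc * pt
      # Keep the k most probable expansion terms, renormalized.
      top = sorted(p_t.items(), key=lambda x: -x[1])[:k]
      z = sum(p for _, p in top) or 1.0
      return {t: p / z for t, p in top}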

  • [PDF] E. Meij and M. de Rijke, “The University of Amsterdam at the CLEF 2008 Domain Specific Track – Parsimonious Relevance and Concept Models,” in Working Notes for the CLEF 2008 Workshop, 2008.
    [Bibtex]
    @inproceedings{CLEF-WN:2008:meij,
    Author = {Edgar Meij and Maarten de Rijke},
    Booktitle = {Working Notes for the CLEF 2008 Workshop},
    Date-Added = {2011-10-12 18:31:55 +0200},
    Date-Modified = {2012-10-30 09:28:58 +0000},
    Title = {The {U}niversity of {A}msterdam at the {CLEF} 2008 {Domain Specific Track} - Parsimonious Relevance and Concept Models},
    Year = {2008}}


Towards a combined model for search and navigation of annotated documents

Documents whose textual content is complemented with annotations of one kind or another are ubiquitous. Examples include biomedical documents (annotated with MeSH terms) and news articles (annotated with IPTC terms). Such annotations—or concepts—have typically been used for query expansion, to suggest alternative or related query formulations, and to facilitate browsing of the document collection. In recent years, we have seen two important developments in this area: (i) a renewed interest in the knowledge sources underlying the annotations, mainly inspired by semantic web initiatives and (ii) the creation of social annotations, as part of web 2.0 developments. These developments motivate a renewed interest in models and methods for accessing annotated documents.

The theme of my proposed research is to capture two aspects in a single, unified model: retrieval and navigation. Given a query, this entails using both term-based and concept-based evidence to locate relevant information (retrieval) and suggesting useful browsing suggestions (navigation). I imagine this to be a “two-way” process, i.e., the user can browse the document collection using concepts and the relations between concepts, but she can also navigate the knowledge structure using the (vocabulary) terms from the documents. Such information seeking behavior is witnessed in an increasing number of applications and domains (e.g., suggesting related tags in BibSonomy or Flickr), providing a solid motivation for my research agenda. In order to accomplish this unification, I will first need to address three separate but intertwined issues. First, a way of “bridging the gap” between concepts and (vocabulary) terms is needed, since concepts are not directly observable. Second, relations between concepts need to be modeled in some way. Finally, the concepts and relations thus modeled should be integrated in the information seeking process, thereby improving both retrieval and navigation.

So far, I have formulated concept modeling as a form of text classification, by representing concepts as distributions over vocabulary terms. In a digital library setting, I have shown that integrating conceptual knowledge in this way can benefit both retrieval performance and navigation. More recently, I have taken these experiments a step further by creating parsimonious concept models. In these experiments, integrating concepts in the query model estimation delivers significantly better results, compared both to a query-likelihood run and to a run based on relevance models.
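
As a minimal illustration of this formulation, concept models can be bootstrapped by maximum-likelihood estimation over the documents annotated with each concept; the input format below is an assumption for the sketch, not a prescribed interface.

  from collections import Counter, defaultdict

  def estimate_concept_models(annotated_docs):
      # annotated_docs: iterable of (terms, concepts) pairs, where terms is
      # a token list and concepts the annotations of that document.
      counts = defaultdict(Counter)
      for terms, concepts in annotated_docs:
          for c in concepts:
              counts[c].update(terms)
      models = {}
      for c, tf in counts.items():
          total = sum(tf.values())
          models[c] = {t: n / total for t, n in tf.items()}  # P(t|c)
      return models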

To determine the strength of relations between concepts, I have looked at using the divergence between concept models. The estimations are based on differences in language use, as measured by computing the cross-entropy reduction between concept models. Experimental results show that this approach is able to outperform both path-based and information content-based methods on two separate test sets. While this approach measures the similarity between concepts, it does not explicitly take the relation type into consideration. Thus, any explicit link structure present in the knowledge structure used disappears. Whether this is a reasonable assumption for my work is still unclear and something I intend to find an answer to.

In future work, I would also like to address the question of how the retrieval-oriented models I have introduced so far may be used to further aid navigation. To some extent, I have already used the TREC Genomics test collections to evaluate navigational effectiveness, but future work—possibly observing users directly in a user study or indirectly through log analysis—should indicate what impact, if any, the model has on navigation.

  • [PDF] E. Meij, “Towards a combined model for search and navigation of annotated documents,” in Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2008.
    [Bibtex]
    @inproceedings{SIGIR:2008:meij-doctcons,
    Author = {Meij, Edgar},
    Booktitle = {Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval},
    Date-Added = {2011-10-12 18:31:55 +0200},
    Date-Modified = {2012-10-30 08:48:04 +0000},
    Series = {SIGIR 2008},
    Title = {Towards a combined model for search and navigation of annotated documents},
    Year = {2008},
    Bdsk-Url-1 = {http://dx.doi.org/10.1145/1390334.1390573}}

Measuring Concept Relatedness Using Language Models

Over the years, the notion of concept relatedness has attracted considerable attention. A variety of approaches, based on ontology structure, information content, association, or context, have been proposed to indicate the relatedness of abstract ideas. In this paper we present a novel context-based measure of concept relatedness, based on cross-entropy reduction. We propose a method based on the cross-entropy reduction between language models of concepts, which are estimated from document-concept assignments. We compare our method to earlier approaches using relatedness judgments provided by human assessors. The approach shows improved or competitive results compared to state-of-the-art methods on two test sets in the biomedical domain.
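
The sketch below gives one possible instantiation of such a relatedness score; the direction of the measure, the choice of background model, and the additive smoothing are illustrative assumptions rather than the paper's exact formulation.

  import math

  def cross_entropy_reduction(p_c1, p_c2, p_bg, eps=1e-12):
      # How much better c2's language model explains c1's model than the
      # background model does; higher scores indicate stronger relatedness.
      score = 0.0
      for t, p in p_c1.items():
          score += p * (math.log(p_c2.get(t, 0.0) + eps)
                        - math.log(p_bg.get(t, 0.0) + eps))
      return score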

  • [PDF] D. Trieschnigg, E. Meij, M. de Rijke, and W. Kraaij, “Measuring concept relatedness using language models,” in Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2008.
    [Bibtex]
    @inproceedings{SIGIR:2008:trieschnigg,
    Author = {Trieschnigg, Dolf and Meij, Edgar and de Rijke, Maarten and Kraaij, Wessel},
    Booktitle = {Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval},
    Date-Added = {2011-10-12 18:31:55 +0200},
    Date-Modified = {2012-10-30 08:45:51 +0000},
    Series = {SIGIR 2008},
    Title = {Measuring concept relatedness using language models},
    Year = {2008},
    Bdsk-Url-1 = {http://doi.acm.org/10.1145/1390334.1390523}}

Parsimonious Relevance Models

Relevance feedback is often applied to better capture a user’s information need. Automatically reformulating queries (or blind relevance feedback) entails looking at the terms in some set of (pseudo-)relevant documents and selecting the most informative ones with respect to the set or the collection. These terms may then be reweighted based on information pertinent to the query or the documents and—in a language modeling setting—be used to estimate a query model, P(t|θQ), i.e., a distribution over terms t for a given query Q.

Not all of the terms obtained using blind relevance feedback are equally informative given the query, even after reweighting. Some may be common terms, whilst others may describe the general domain of interest. We hypothesize that refining the results of blind relevance feedback, using a technique called parsimonious language modeling, will improve retrieval effectiveness. Hiemstra et al. already provide a mechanism for incorporating (parsimonious) blind relevance feedback, by viewing it as a three-component mixture model of document, set of feedback documents, and collection. Our approach is more straightforward, since it considers each feedback document separately and, hence, does not require the additional mixture model parameter. To create parsimonious language models we use an EM algorithm to update the maximum-likelihood (ML) estimates. Zhai and Lafferty proposed an approach that uses a similar EM algorithm; it differs, however, in the way the set of feedback documents is handled. Whereas we parsimonize each individual document, they apply their EM algorithm to the entire set of feedback documents.
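
The sketch below shows the per-document parsimonization step, assuming the standard EM updates for a two-component mixture of document and collection model; the values of lam, the number of iterations, and the pruning threshold are illustrative, not the paper's exact settings.

  def parsimonize(tf, p_coll, lam=0.1, iters=20, threshold=1e-4):
      # tf: raw term counts of one feedback document; p_coll: collection
      # model P(t|C); lam: mixture weight of the document model.
      total = sum(tf.values())
      p_doc = {t: n / total for t, n in tf.items()}  # ML initialization
      for _ in range(iters):
          # E-step: expected term counts attributable to the document model.
          e = {t: tf[t] * lam * p_doc[t]
                  / (lam * p_doc[t] + (1 - lam) * p_coll.get(t, 1e-12))
               for t in p_doc}
          # M-step: renormalize and prune terms with negligible mass.
          z = sum(e.values())
          p_doc = {t: v / z for t, v in e.items() if v / z >= threshold}
          z = sum(p_doc.values())
          p_doc = {t: v / z for t, v in p_doc.items()}
      return p_doc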

To verify our hypothesis, we use a specific instance of blind relevance feedback, namely relevance modeling (RM). We choose this particular method because it has been shown to achieve state-of-the-art retrieval performance. Relevance modeling assumes that the query and the set of documents are samples from an underlying term distribution—the relevance model. Lavrenko and Croft formulate two ways of estimating the parameters of this model. We build upon their work and compare the results of our proposed parsimonious relevance models with RMs as well as with a query-likelihood baseline. To measure the effects in different contexts, we employ five test collections taken from the TREC-7, TREC Robust, Genomics, Blog, and Enterprise tracks and show that our proposed model improves performance in terms of mean average precision on all the topic sets, over both a query-likelihood baseline and a run based on relevance models. Moreover, although blind relevance feedback is mainly a recall-enhancing technique, we observe that parsimonious relevance models (unlike their non-parsimonized counterparts) can also improve early precision and reciprocal rank of the first relevant result. Thus, our parsimonious relevance models (i) improve retrieval effectiveness in terms of MAP on all collections, (ii) significantly outperform their non-parsimonious counterparts on most measures, and (iii) have a precision-enhancing effect, unlike other blind relevance feedback methods.
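
For reference, the sketch below estimates an RM1-style relevance model; the Dirichlet smoothing and the uniform document prior are common defaults rather than necessarily the paper's exact choices. Replacing the smoothed document estimates with parsimonized ones, as sketched above, yields the parsimonious variant.

  import math
  from collections import defaultdict

  def relevance_model(feedback_docs, query, p_coll, mu=500):
      # feedback_docs: term-count dicts of the top-ranked documents;
      # p_coll: collection model P(t|C); mu: Dirichlet smoothing parameter.
      p_r = defaultdict(float)
      for tf in feedback_docs:
          dl = sum(tf.values())
          def p_td(t):  # Dirichlet-smoothed document model P(t|D)
              return (tf.get(t, 0) + mu * p_coll.get(t, 1e-12)) / (dl + mu)
          # Weight every document by its query likelihood P(Q|D).
          q_lik = math.exp(sum(math.log(p_td(t)) for t in query))
          for t in tf:
              p_r[t] += p_td(t) * q_lik
      z = sum(p_r.values())
      return {t: p / z for t, p in p_r.items()}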

  • [PDF] E. Meij, W. Weerkamp, K. Balog, and M. de Rijke, “Parsimonious relevance models,” in Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2008.
    [Bibtex]
    @inproceedings{SIGIR:2008:Meij-prm,
    Author = {Meij, Edgar and Weerkamp, Wouter and Balog, Krisztian and de Rijke, Maarten},
    Booktitle = {Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval},
    Date-Added = {2011-10-12 18:31:55 +0200},
    Date-Modified = {2012-10-30 08:47:44 +0000},
    Series = {SIGIR 2008},
    Title = {Parsimonious relevance models},
    Year = {2008},
    Bdsk-Url-1 = {http://doi.acm.org/10.1145/1390334.1390520}}

Parsimonious concept modeling

In many collections, documents are annotated using concepts from a structured knowledge source such as an ontology or thesaurus. Examples include the news domain, where each news item is categorized according to the nature of the event that took place, and Wikipedia, with its per-article categories. These categorizing systems originally stem from the cataloging systems used in libraries, and conceptual search is commonly used in digital library environments at the front-end to support search and navigation. In this paper we want to employ the explicit knowledge used for annotation at the back-end, not just to improve retrieval performance, but also to generate high-quality term and concept suggestions. To do so, we use the dual document representation—concepts and terms—to create a generative language model for each concept, which bridges the gap between vocabulary terms and concepts. Related work has also used textual representations to represent concepts; however, there are two important differences. First, we use statistical language modeling techniques to parametrize the concept models, by leveraging the dual representation of the documents. Second, we found that simple maximum likelihood estimation assigns too much probability mass to terms and concepts which may not be relevant to each document. Thus we apply an EM algorithm to “parsimonize” the document models.
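
The aggregation step might look as follows, assuming the document models have already been parsimonized (for instance with an EM procedure such as the one sketched under Parsimonious Relevance Models above); the input formats and the uniform averaging are illustrative assumptions.

  from collections import defaultdict

  def concept_models(doc_models, annotations):
      # doc_models: {doc_id: {term: P(t|D)}}, e.g. parsimonized document
      # models; annotations: {doc_id: [concept, ...]} (MeSH headings, say).
      agg = defaultdict(lambda: defaultdict(float))
      n_docs = defaultdict(int)
      for d, model in doc_models.items():
          for c in annotations.get(d, []):
              n_docs[c] += 1
              for t, p in model.items():
                  agg[c][t] += p
      # Average the contributions so every concept model sums to one.
      return {c: {t: p / n_docs[c] for t, p in terms.items()}
              for c, terms in agg.items()}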

The research questions we address are twofold: (i) what are the results of applying our model compared to a query-likelihood baseline as well as to a run based on relevance models, and (ii) what is the influence of parsimonization? To answer these questions, we use the TREC Genomics track test collections in conjunction with MEDLINE. MEDLINE contains over 16 million bibliographic records of publications from the life sciences domain, and each abstract therein has been manually indexed by trained curators, who use concepts from the MeSH (Medical Subject Headings) thesaurus. We show that our approach is able to achieve similar or better performance than relevance models, whilst at the same time providing high-quality concepts to facilitate navigation. Examples show that our parsimonious concept models generate terms that are more specific than those acquired through maximum-likelihood estimates.

  • [PDF] E. Meij, D. Trieschnigg, M. de Rijke, and W. Kraaij, “Parsimonious concept modeling,” in Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2008.
    [Bibtex]
    @inproceedings{SIGIR:2008:Meij-cm,
    Author = {Meij, Edgar and Trieschnigg, Dolf and de Rijke, Maarten and Kraaij, Wessel},
    Booktitle = {Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval},
    Date-Added = {2011-10-12 18:31:55 +0200},
    Date-Modified = {2012-10-30 08:46:38 +0000},
    Series = {SIGIR 2008},
    Title = {Parsimonious concept modeling},
    Year = {2008},
    Bdsk-Url-1 = {http://doi.acm.org/10.1145/1390334.1390519}}

Biological applications of Aida knowledge management components

Given the important role of knowledge in biology, knowledge in a machine-readable form can be an important asset for bioinformatics. We present two applications of AIDA (Adaptive Information Disclosure Application), a collection of knowledge management components. One is a workflow that extends a semantic model with putative relations between proteins and diseases, extracted from literature by machine learning techniques. The other extends vBrowser, a virtual resource browser tool, with the ability to find relevant biological resources (e.g., data, workflows, documents) via semantic relationships.

Central to our semantic web approach is the separation of a ‘virtual knowledge space’ from its applications. In other words, knowledge is disclosed and accessed in a knowledge space rather than being coded into the application. The workflow adds knowledge to this space through knowledge extraction, while vBrowser accesses the knowledge resources for use during search. We use RDF and OWL to represent knowledge, and Sesame to store these representations.

The workflow contains the following steps: (i) add the ontology that you want to extend to Sesame (e.g., a model that contains the protein EZH2), (ii) extract the entities of interest from the ontology (e.g., EZH2), (iii) retrieve abstracts from Medline for these entities, (iv) extract proteins and protein-protein relationships from the abstracts, (v) add a ranking score to the discoveries, (vi) query OMIM with the extracted proteins and retrieve the disease labels (a service from the National Institute of Genetics in Japan), (vii) add the discoveries and their interrelationships to the repository, and (viii) export the enriched ontology to the knowledge space, where, for instance, vBrowser can be used to explore the results. Future work includes metrics to more effectively retrieve biologically interesting suggestions from semantic data.
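
Steps (i) and (vii) both amount to adding RDF to the Sesame repository. As a minimal illustration, an RDF/XML file can be posted over Sesame's HTTP interface roughly as follows; the host and repository name are hypothetical.

  import requests

  REPO = "http://localhost:8080/openrdf-sesame/repositories/aida"  # hypothetical

  def add_rdf(path):
      # POST the statements to the repository over Sesame's HTTP API.
      with open(path, "rb") as f:
          r = requests.post(REPO + "/statements", data=f,
                            headers={"Content-Type": "application/rdf+xml"})
      r.raise_for_status()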

We show how vBrowser can be used to browse both data resources and knowledge resources from the same basic interface. We show how vBrowser uses an AIDA thesaurus service to improve finding resources such as Medline documents and workflows on myExperiment.org. We found thesaurus terms effective for search and advocate SKOS for its intuitive ‘broader/narrower-than’ relationships. We further show that the protein-disease relationships resulting from our knowledge capture workflow, as well as the documents that contained these relationships, can be accessed as knowledge resources from vBrowser. We think OWL can adequately represent the knowledge in many biological cartoon models, and we have used it to represent the workflow provenance in our knowledge capture workflow.

  • M. Roos, S. M. Marshall, P. T. de Boer, K. van den Berg, S. Katrenko, E. Meij, W. R. van Hage, and P. W. Adriaans, “Biological applications of AIDA knowledge management components,” in ISMB '08, 2008.
    [Bibtex]
    @inproceedings{ISMB:2008:roos,
    Author = {Marco Roos and M. Scott Marshall and Piter T. de Boer and Kasper van den Berg and Sophia Katrenko and Edgar Meij and Willem R. van Hage and Pieter W. Adriaans},
    Booktitle = {ISMB '08},
    Date-Added = {2011-10-16 10:45:35 +0200},
    Date-Modified = {2012-10-28 23:04:46 +0000},
    Title = {Biological applications of {AIDA} knowledge management components},
    Year = {2008}}

Enabling Data Transport between Web Services through alternative protocols and Streaming

As web services gain acceptance in the e-Science community, some of their shortcomings have begun to appear. A significant challenge is to find reliable and efficient methods to transfer large amounts of data between web services. This paper describes the problem of scalable data transport between web services and proposes a solution: the development of a modular server/client library that uses SOAP as a control channel while the actual data transport is accomplished by various protocol implementations, together with a simple API that developers can use for data-intensive applications. Apart from file transport, the proposed approach offers the facility of direct data streaming between web services, which could reduce workflow execution time by creating a data pipeline between web services. Finally, the performance and usability of this library are evaluated using the indexing application that the Adaptive Information Disclosure Application (AIDA) Toolkit offers as a web service.
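
A minimal client-side sketch of this control/data split is given below: a SOAP call negotiates where the payload lives, and the payload itself is streamed outside the SOAP envelope. The getTransferUrl operation and the suds-style client object are assumptions for illustration, not the library's published API.

  import requests

  def fetch_large_result(soap_client, resource_id, out_path, chunk=1 << 20):
      # Control channel: a (hypothetical) SOAP operation returns the
      # location of the payload instead of embedding it in the envelope.
      url = soap_client.service.getTransferUrl(resource_id)
      # Data channel: stream the payload over plain HTTP in fixed chunks.
      with requests.get(url, stream=True) as r:
          r.raise_for_status()
          with open(out_path, "wb") as f:
              for block in r.iter_content(chunk_size=chunk):
                  f.write(block)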

  • [PDF] S. Koulouzis, E. Meij, M. S. Marshall, and A. Belloum, “Enabling data transport between web services through alternative protocols and streaming,” in 4th IEEE International Conference on e-Science, 2008.
    [Bibtex]
    @inproceedings{IEEE:2008:koulouzis,
    Author = {Koulouzis, S. and Meij, E. and Marshall, M.S. and Belloum, A.},
    Booktitle = {4th IEEE International Conference on e-Science},
    Date-Added = {2011-10-16 10:35:31 +0200},
    Date-Modified = {2011-10-16 10:35:31 +0200},
    Title = {Enabling Data Transport between Web Services through alternative protocols and Streaming},
    Year = {2008}}

Bootstrapping Language Associated with Biomedical Entities

The TREC Genomics 2007 task included recognizing topic-specific entities in the returned passages. To address this task, we have designed and implemented a novel data-driven approach that combines information extraction with language modeling techniques. Instead of using an exhaustive list of all possible instances of an entity type, we look at the language usage around each entity type and use that as a classifier to determine whether or not a piece of text discusses such an entity type. We do so by comparing it with language models of the passages. For example, given the entity type “genes”, our approach can measure the gene-iness of a piece of text.

Our algorithm works as follows. Given an entity type, it first uses Hearst patterns to extract instances of the type. To extract more instances, we look for new contextual patterns around the instances and use them as input for a bootstrapping method, in which new instances and patterns are discovered iteratively. Afterwards, all discovered instances and patterns are used to find the sentences in the collection that are most consistent with the requested entity type. A language model is then generated from these sentences and, at retrieval time, we use this model to rerank retrieved passages.
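
The sketch below illustrates the two stages; the single Hearst pattern, the fixed two-word context windows, and the absence of confidence scoring are simplifications of the actual method.

  import re

  def hearst_instances(text, entity_type):
      # Seed instances from one Hearst pattern: '<type> such as X, Y and Z'.
      pat = re.compile(entity_type + r"\s+such as\s+((?:\w+(?:,\s*|\s+and\s+)?)+)",
                       re.IGNORECASE)
      seeds = set()
      for m in pat.finditer(text):
          seeds.update(s.strip() for s in re.split(r",\s*|\s+and\s+", m.group(1))
                       if s.strip())
      return seeds

  def bootstrap(corpus, seeds, rounds=3):
      # Alternate between inducing contextual patterns around known
      # instances and harvesting new instances with those patterns.
      instances, patterns = set(seeds), set()
      for _ in range(rounds):
          for inst in list(instances):
              for m in re.finditer(r"(\w+\s+\w+)\s+" + re.escape(inst), corpus):
                  patterns.add(m.group(1))
          for p in patterns:
              for m in re.finditer(re.escape(p) + r"\s+(\w+)", corpus):
                  instances.add(m.group(1))
      return instances, patterns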

As to the results of our submitted runs, we find that our baseline run performs well above the median of all participants’ scores. Additionally, we find that our proposed method helps most for entity types with unambiguous patterns and numerous instances.

  • [PDF] E. Meij and S. Katrenko, “Bootstrapping language associated with biomedical entities,” in The Sixteenth Text REtrieval Conference (TREC 2007), 2008.
    [Bibtex]
    @inproceedings{TREC:2008:meij,
    Author = {Meij, E. and Katrenko, S.},
    Booktitle = {The Sixteenth Text REtrieval Conference},
    Date-Added = {2011-10-16 10:24:41 +0200},
    Date-Modified = {2012-10-30 09:23:12 +0000},
    Series = {TREC 2007},
    Title = {Bootstrapping Language Associated with Biomedical Entities},
    Year = {2008}}