Entity Linking and Retrieval for Semantic Search (WSDM 2014)

This morning we presented the final edition of our tutorial series on Entity Linking and Retrieval, entitled “Entity Linking and Retrieval for Semantic Search” (with Krisztian Balog and Daan Odijk), at WSDM 2014! This final edition builds upon our earlier tutorials at WWW 2013 and SIGIR 2013. The focus this time is on practical applications of entity linking and retrieval, in particular for semantic search: more and more search engine users expect direct answers to their information needs, rather than just documents. Semantic search and its recent applications enable search engines to organize their wealth of information around entities. Entity linking and retrieval are at the basis of these developments, providing the building blocks for organizing the web of entities.

This tutorial aims to cover all facets of semantic search from a unified point of view and to connect real-world applications with results from scientific publications. We provide a comprehensive overview of entity linking and retrieval in the context of semantic search and thoroughly explore techniques for query understanding and for entity-based retrieval and ranking on unstructured text, structured knowledge repositories, and a mixture of these. We point out the connections between published approaches and applications, and provide hands-on examples based on real-world use cases and datasets.

As before, all our tutorial materials are freely available online; see http://ejmeij.github.io/entity-linking-and-retrieval-tutorial/.

RepLab 2014

RepLab is a competitive evaluation exercise for Online Reputation Management systems. In 2012 and 2013, RepLab focused on the problem of monitoring the reputation of (company) entities on Twitter, and dealt with the tasks of entity linking (“Is the tweet about the entity?”), reputation polarity (“Does the tweet have positive or negative implications for the entity’s reputation?”), topic detection (“What issue relating to the entity is discussed in the tweet?”), and topic ranking (“Is the topic an alert that deserves immediate attention?”).

RepLab 2014 will again focus on Reputation Management on Twitter and will address two new tasks (see below). We will use tweets in two languages: English and Spanish.

  1. The classification of tweets with respect to standard reputation dimensions such as Performance, Leadership, Innovation, etc.
  2. The classification of Twitter profiles (authors) within a certain domain as journalists, professionals, etc. This task also focuses on identifying opinion makers.

The second task is a part of the shared PAN-RepLab author profiling task. Besides the characterization of profiles from a reputation analysis perspective, participants can also attempt to classify authors by gender and age, which is the focus of PAN 2014.

Important dates:

  • March 1 – Training data released
  • March 17 – Test data released
  • May 5 – System results due

See http://nlp.uned.es/replab2014/ for more info and how to participate.

We’re now hiring next year’s interns!

I’m happy to announce that we have just opened applications for next year’s internships at Yahoo Labs in Barcelona. If you’re a PhD student in a related field, do consider applying, especially if you’re interested in spending some time in sunny Barcelona and gaining research experience along the way.

The application form can be found at http://comunicacio.barcelonamedia.org/yahoo/; the deadline is January 13, 2014.

Do reach out if you have any questions!

Time-Aware Chi-squared for Document Filtering over Time

To appear at TAIA2013 (a SIGIR 2013 workshop).

Document filtering over time is widely applied in various tasks such as tracking topics in online news or social media. We consider it a classification task, where topics of interest correspond to classes and the feature space consists of the words associated with each class. In “streaming” settings the set of words associated with a concept may change. In this paper we employ a multinomial Naive Bayes classifier and perform periodic feature selection to adapt to evolving topics. We propose two ways of employing Pearson’s χ2 test for feature selection and demonstrate their benefit on the TREC KBA 2012 data set. By incorporating a time-dependent function in our equations for χ2 we provide an elegant method for applying different weighting schemes. Experiments show improvements of our approach over a non-adaptive baseline.
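The exact time-dependent weighting functions are defined in the paper; purely as an illustration of the general idea, the Python sketch below builds the 2×2 contingency table for χ2 from time-weighted document counts instead of raw counts. The exponential decay and its half-life are assumptions made for this sketch, not the paper’s choices.

    import math

    def time_weight(age_in_days, half_life=30.0):
        """Assumed exponential decay: recent documents contribute more."""
        return 0.5 ** (age_in_days / half_life)

    def time_aware_chi2(term, docs):
        """Chi-squared score for one term, with the usual 2x2 contingency table
        built from time-weighted document counts.

        `docs` is an iterable of (terms, in_class, age_in_days) tuples: the set
        of words in the document, whether it belongs to the topic, and its age
        at scoring time.
        """
        a = b = c = d = 0.0  # term&class, term&other, no-term&class, no-term&other
        for terms, in_class, age in docs:
            w = time_weight(age)
            if term in terms:
                if in_class:
                    a += w
                else:
                    b += w
            elif in_class:
                c += w
            else:
                d += w
        n = a + b + c + d
        denom = (a + b) * (c + d) * (a + c) * (b + d)
        return 0.0 if denom == 0 else n * (a * d - b * c) ** 2 / denom

    # Periodic feature selection: keep the highest-scoring terms for the topic.
    docs = [({"earnings", "profit"}, True, 2), ({"football"}, False, 1),
            ({"profit", "quarter"}, True, 40), ({"weather"}, False, 60)]
    vocabulary = set().union(*(terms for terms, _, _ in docs))
    print(sorted(vocabulary, key=lambda t: time_aware_chi2(t, docs), reverse=True)[:3])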


Hadoop code for TREC KBA

I’ve decided to put some of the Hadoop code I developed for the TREC KBA task online. It’s available on GitHub: https://github.com/ejmeij/trec-kba. In particular, it provides classes to read/write topic files, read/write run files, and expose the documents in the Thrift files as Hadoop-readable objects (‘ThriftFileInputFormat’) to be used as input to mappers. I obviously also implemented a toy KBA system on Hadoop :-). See GitHub for more info.


OpenGeist: Insight in the Stream of Page Views on Wikipedia

We present a RESTful interface that captures insights into the zeitgeist of Wikipedia users. In recent years many so-called zeitgeist applications have been launched. Such applications are used to gain insight into the current gist of society and current affairs. Several news sources run zeitgeist applications for popular and trending news. In addition, there are zeitgeist applications that report on trending publications, such as LibraryThing, and on trending topics, such as Google Zeitgeist. There is an interesting open data source from which a stream of people’s changing interests can be observed across a very broad spectrum of areas: the Wikimedia access logs. These logs contain the number of requests made to any Wikimedia domain, sorted by subdomain and aggregated on an hourly basis. Since they are a log of the actual requests, they are noisy and can contain non-existent pages. They are also quite large, amounting to 60 GB of compressed textual data per month. Currently, we update the data on a daily basis and filter the raw source data by matching the URLs of all English Wikipedia articles and their redirects.
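For concreteness, here is a minimal Python sketch of that filtering step. It assumes the raw hourly dump files use the whitespace-separated “project page_title count bytes” line format and that a precomputed list of English article titles and redirects is available; the actual pipeline, redirect handling, and file layout may differ.

    import gzip

    def load_article_titles(path):
        """One article or redirect title per line, with underscores instead of spaces."""
        with open(path, encoding="utf-8") as f:
            return {line.strip() for line in f if line.strip()}

    def filter_hourly_counts(pagecounts_gz, titles):
        """Yield (title, view_count) pairs for known English Wikipedia articles."""
        with gzip.open(pagecounts_gz, "rt", encoding="utf-8", errors="replace") as f:
            for line in f:
                parts = line.split()
                if len(parts) < 3 or parts[0] != "en":
                    continue  # keep only the English Wikipedia subdomain
                title, count = parts[1], parts[2]
                if title in titles and count.isdigit():
                    yield title, int(count)

    # Hypothetical usage; the file names are placeholders:
    # titles = load_article_titles("enwiki-titles-and-redirects.txt")
    # hourly = dict(filter_hourly_counts("pagecounts-20120601-120000.gz", titles))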

In this paper we describe an API that facilitates easy access to these access logs. We have identified the following requirements for our system:

  • The user must have access to the raw time series data for a concept.
  • The user must be able to find the N most temporally similar concepts.
  • The user must be able to group concepts and their data, based either on Wikipedia’s category system or on similarity between concepts.
  • The system must return either a textual or a visual representation.
  • The user should be able to apply time series filters to extract trends and (recurring) events.

The API provides an interface for clustering and comparing concepts based on the time series of the number of views of their Wikipedia pages.
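As an illustration of the “N most temporally similar concepts” requirement, the Python sketch below ranks concepts by the Pearson correlation of their page-view series. The similarity measure actually used behind the API may well be different; this is only a toy example with made-up counts.

    import math

    def pearson(x, y):
        """Pearson correlation between two equal-length series."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return 0.0 if sx == 0 or sy == 0 else cov / (sx * sy)

    def most_similar(concept, series, n=5):
        """Rank all other concepts by correlation with `concept`'s page-view series.

        `series` maps concept name -> list of hourly view counts (equal lengths).
        """
        target = series[concept]
        scored = [(other, pearson(target, ts))
                  for other, ts in series.items() if other != concept]
        return sorted(scored, key=lambda kv: kv[1], reverse=True)[:n]

    # Toy example with fabricated counts, for illustration only:
    series = {
        "Olympic_Games": [10, 12, 50, 80, 75, 20],
        "London":        [11, 13, 48, 79, 70, 22],
        "Python":        [30, 29, 31, 30, 32, 28],
    }
    print(most_similar("Olympic_Games", series, n=2))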

See http://www.opengeist.org for more info and examples.

  • [PDF] M-H. Peetz, E. Meij, and M. de Rijke, “OpenGeist: Insight in the Stream of Page Views on Wikipedia,” in SIGIR 2012 Workshop on Time-Aware Information Access, 2012.
    [Bibtex]
    @inproceedings{SIGIR-WS:2012:Peetz,
    Author = {Peetz, M-H. and Meij, E. and de Rijke, M.},
    Booktitle = {SIGIR 2012 Workshop on Time-aware Information Access},
    Date-Added = {2012-10-28 16:35:47 +0000},
    Date-Modified = {2012-10-31 10:48:46 +0000},
    Title = {{OpenGeist}: Insight in the Stream of Page Views on {Wikipedia}},
    Year = {2012}}