Evaluating Elements of Web-based Data Enrichment for Pseudo-Relevance Feedback Retrieval

This repository accompanies our study in which we analyze pseudo-relevance classifiers based on the results of web search engines. By enriching topics with text data from web search engine result pages and linked contents, we train topic-specific and cost-efficient classifiers that can be used to search test collections for relevant documents. Building on the runs originally submitted to TREC Common Core 2018 by Grossman and Cormack (uwmrg and uwmrgx), we address the question of how system performance develops over time with regard to different search engines, queries, and test collections. In order to avoid re-scraping web contents, we provide the scraped artifacts in an external Zenodo archive. This archive also contains the final runs used in the later evaluation.

Data resources

| Corpus                                   | qrels    | topics   |
| ---------------------------------------- | -------- | -------- |
| TREC Washington Post Corpus              | Core18   | Core18   |
| New York Times Annotated Corpus          | Core17   | Core17   |
| The AQUAINT Corpus of English News Text  | Robust05 | Robust05 |
| TREC disks 4 and 5                       | Robust04 | Robust04 |

Setup

  • Install the requirements and compile trec_eval:
pip install -r requirements.txt
git clone https://github.com/usnistgov/trec_eval.git && make -C trec_eval
  • Configure conf/path.py to set the paths to the test collection data and make sure the directory layout complies with the expected directory tree (a hypothetical sketch of this file follows below).

  • Configure conf/settings.py to specify the run.
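As an illustration of the first configuration step, here is a minimal, hypothetical sketch of what conf/path.py could look like; the variable names and directory layout below are assumptions for illustration, not the repository's actual contents.

```python
# conf/path.py -- hypothetical sketch; variable names and layout are
# assumptions, not the repository's actual contents.
import os

DATA_DIR = os.path.expanduser('~/data')               # root of all test collection data
CORE18 = os.path.join(DATA_DIR, 'WashingtonPost.v2')  # TREC Washington Post Corpus
CORE17 = os.path.join(DATA_DIR, 'nyt_corpus')         # New York Times Annotated Corpus
ROBUST05 = os.path.join(DATA_DIR, 'aquaint')          # AQUAINT Corpus of English News Text
ROBUST04 = os.path.join(DATA_DIR, 'disk45')           # TREC disks 4 and 5
```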

Run the following scripts in advance to prepare the data from the newswire test collections. This needs to be done only once.

prep_wapo.py
Extracts documents from the Washington Post test collection (Core18).

prep_nyt.py
Extracts documents from the New York Times test collection (Core17).

prep_rob05.py
Extracts documents from the AQUAINT test collection (Robust05).

prep_rob04.py
Extracts documents from TREC Disks 4 & 5 (Robust04).

prep_topics.py
Adds closing tags to the topic files. We rely on BeautifulSoup to read the contents of the topic files; if closing tags are missing, the field values are not parsed properly.
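To illustrate why the closing tags matter, here is a small, self-contained example (not part of the repository) that parses a repaired topic with BeautifulSoup; the topic text is abbreviated from TREC topic 301.

```python
# Standalone illustration: once a topic has proper closing tags,
# BeautifulSoup can extract the field values reliably.
from bs4 import BeautifulSoup

topic = """
<top>
<num>301</num>
<title>International Organized Crime</title>
<desc>Identify organizations that participate in international criminal activity.</desc>
</top>
"""

soup = BeautifulSoup(topic, 'html.parser')
for top in soup.find_all('top'):
    print(top.find('num').get_text(strip=True),
          '-', top.find('title').get_text(strip=True))
```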

Workflow

[Figure: workflow overview]

Order of execution

Depending on the run, the following scripts have to be executed from the corresponding directory in the order given below.

  1. scrape.py
    Scrapes webpages. (Alternatively, use dump.py and parse.py.)
    cmd: python -m uwmrg.scrape

  2. vectorize.py
    Generates a TfidfVectorizer from the scraped webpages and builds the term-document matrix.
    cmd: python -m uwmrg.vectorize

  3. prep_train.py
    Prepares training data from the scraped webpages based on tf-idf features derived from the term-document matrix.
    cmd: python -m uwmrg.prep_train

  4. prep_test.py
    Prepares tf-idf features for the test corpus.
    cmd: python -m uwmrg.prep_test

  5. rank.py
    Trains the logistic regression classifiers and ranks the tf-idf features of the test corpus (see the sketch after this list).
    cmd: python -m uwmrg.rank

  6. evaluate.py
    Final evaluation with trec_eval.
    cmd: python -m uwmrg.evaluate
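The following is a minimal sketch of the core idea behind steps 2-5, assuming scikit-learn; it is an illustration only, and the repository's actual feature pipeline, preprocessing, and parameters may differ.

```python
# Minimal sketch of the pseudo-relevance feedback pipeline (steps 2-5),
# assuming scikit-learn; data and parameters are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Pseudo-relevance training data for one topic: web pages scraped for this
# topic serve as positive examples, pages scraped for other topics as negatives.
train_texts = ['web page scraped for this topic ...',
               'another result page for this topic ...',
               'page scraped for a different topic ...']
train_labels = [1, 1, 0]

# Test collection documents to be ranked for this topic.
corpus_docs = ['newswire document one ...', 'newswire document two ...']

# Steps 2/3: fit a TF-IDF vectorizer on the scraped pages (term-document matrix).
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_texts)

# Step 4: project the test corpus into the same feature space.
X_test = vectorizer.transform(corpus_docs)

# Step 5: train a topic-specific logistic regression classifier and rank
# the corpus documents by their predicted probability of relevance.
clf = LogisticRegression().fit(X_train, train_labels)
scores = clf.predict_proba(X_test)[:, 1]
ranking = sorted(zip(corpus_docs, scores), key=lambda x: x[1], reverse=True)
```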

Evaluations

The evaluation scripts underlying the tables and figures reported in our paper can be found in the directory eval/. To re-run the evaluations, the data has to be downloaded from the Zenodo archive; the extracted folders runs/, scrape/, and time-series/ should be placed in the subdirectory eval/data/. All our evaluations are conducted with repro_eval.
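As a hedged illustration, a reproducibility check with repro_eval could look like the following sketch; the constructor arguments follow repro_eval's documented RpdEvaluator interface, but the file paths are placeholders rather than actual file names from the Zenodo archive.

```python
# Sketch of a reproducibility evaluation with repro_eval; the paths below
# are placeholders, not actual file names from the Zenodo archive.
from repro_eval.Evaluator import RpdEvaluator

rpd_eval = RpdEvaluator(qrel_orig_path='eval/data/qrels/core18.txt',
                        run_b_orig_path='eval/data/runs/orig/uwmrg',
                        run_b_rep_path='eval/data/runs/rpd/uwmrg')

rpd_eval.trim()      # trim both runs to the same length
rpd_eval.evaluate()  # evaluate the original and the reproduced run

ktu = rpd_eval.ktau_union()   # Kendall's tau union between the two rankings
print(ktu.get('baseline'))
```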
