This repository accompanies our study in which we analyze pseudo-relevance classifiers based on the results of web search engines. By enriching topics with text data from web search engine result pages and linked contents, we train topic-specific and cost-efficient classifiers that can be used to search test collections for relevant documents. Building on the approaches initially made at TREC Common Core 2018 by Grossman and Cormack (`uwmrg` and `uwmrgx`), we address the question of system performance over time, considering different search engines, queries, and test collections. To avoid re-scraping web contents, we provide the scraped artifacts in an external Zenodo archive. This archive also contains the final runs used for the later evaluation.
- Install requirements: `pip install -r requirements.txt`
- Install trec_eval: `git clone https://github.com/usnistgov/trec_eval.git && make -C trec_eval`
- Configure `conf/path.py` to set the paths to the test collection data, and make sure the directory complies with the directory tree.
- Configure `conf/settings.py` to specify the run.
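As an orientation, a run configuration might look like the sketch below. The option names and values here are hypothetical placeholders; consult the actual `conf/settings.py` in the repository for the real ones.

```python
# Hypothetical sketch of conf/settings.py; the actual option names and
# values in the repository may differ.
SEARCH_ENGINE = "google"      # which search engine's result pages to use
TEST_COLLECTION = "core18"    # e.g. core17, core18, robust04, robust05
RUN_NAME = "uwmrg"            # run identifier, e.g. uwmrg or uwmrgx
```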
Run the following scripts in advance to prepare the data from the newswire test collections. This needs to be done only once.

- `prep_wapo.py`: extracts documents from the Washington Post test collection (Core18).
- `prep_nyt.py`: extracts documents from the New York Times test collection (Core17).
- `prep_rob05.py`: extracts documents from the AQUAINT test collection (Robust05).
- `prep_rob04.py`: extracts documents from TREC Disks 4 & 5 (Robust04).
- `prep_topics.py`: adds closing tags to the topic files. We rely on BeautifulSoup to read contents from the topic files; if closing tags are missing, values will not be read out properly.
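To illustrate why the closing tags matter, here is a minimal, self-contained sketch (not the repository's actual implementation) of closing the fields of a TREC-style topic with a regular expression, so that a lenient parser such as BeautifulSoup can read the field values reliably:

```python
import re

# Fields of a TREC-style topic that are opened but never closed in the
# original topic files.
FIELDS = ("num", "title", "desc", "narr")

def close_tags(topic: str) -> str:
    """Insert a closing tag for each field before the next tag or </top>."""
    alternation = "|".join(FIELDS)
    for field in FIELDS:
        pattern = rf"(<{field}>.*?)(?=<(?:{alternation})>|</top>)"
        topic = re.sub(pattern, rf"\1</{field}>\n", topic, flags=re.S)
    return topic

# A topic in the unclosed format found in older TREC topic files.
raw = """<top>
<num> Number: 301
<title> International Organized Crime
<desc> Description:
Identify organizations that participate in international criminal activity.
</top>"""

print(close_tags(raw))
```

This normalization mirrors what `prep_topics.py` does before the topics are parsed.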
Depending on the run, the following scripts have to be run from the corresponding directory in the respective order.
- `scrape.py`: scrapes webpages (alternatively, use `dump.py` and `parse.py`).
  Command: `python -m uwmrg.scrape`
- `vectorize.py`: generates a TfidfVectorizer from the scraped webpages, i.e., builds the term-document matrix.
  Command: `python -m uwmrg.vectorize`
- `prep_train.py`: prepares the training data from the scraped webpages, based on tf-idf features derived from the term-document matrix.
  Command: `python -m uwmrg.prep_train`
- `prep_test.py`: prepares the tf-idf features for the test corpus.
  Command: `python -m uwmrg.prep_test`
- `rank.py`: trains the logistic regression classifiers and ranks the tf-idf features of the test corpus.
  Command: `python -m uwmrg.rank`
- `evaluate.py`: final evaluation with `trec_eval`.
  Command: `python -m uwmrg.evaluate`
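The core of the vectorize/train/rank steps can be sketched with scikit-learn. This is a toy, self-contained illustration with made-up documents, not the repository's actual code: scraped pages act as pseudo-relevant positives, pages from other topics as negatives, and test documents are ranked by the classifier's relevance probability.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for scraped web pages: positives for the topic at hand,
# negatives drawn from unrelated topics (vectorize.py / prep_train.py).
scraped_pages = [
    "organized crime syndicates operating across national borders",
    "international criminal organizations and law enforcement efforts",
]
other_pages = [
    "weather forecast sunny skies for the coming week",
    "recipe for baking sourdough bread at home",
]

# Build the term-document matrix of tf-idf features.
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(scraped_pages + other_pages)
y_train = np.array([1, 1, 0, 0])  # 1 = pseudo-relevant, 0 = non-relevant

# Train a topic-specific logistic regression classifier (rank.py).
clf = LogisticRegression().fit(X_train, y_train)

# Vectorize test-corpus documents with the same vocabulary (prep_test.py)
# and rank them by the classifier's relevance probability.
test_docs = [
    "report on international organized crime networks",
    "sourdough bread and sunny weather",
]
X_test = vectorizer.transform(test_docs)
scores = clf.predict_proba(X_test)[:, 1]
ranking = np.argsort(-scores)  # document indices, best first
print(ranking)  # the crime-related document should be ranked first
```

In the actual pipeline, a ranked run file in TREC format is produced per topic and evaluated with `trec_eval`.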
The evaluation scripts underlying the tables and figures reported in our paper can be found in the directory `eval/`. To re-run the evaluations, the data has to be downloaded from the Zenodo archive. The extracted folders `runs/`, `scrape/`, and `time-series/` should be placed in the subdirectory `eval/data/`. All our evaluations are conducted with `repro_eval`.