Anserini: BM25 Baselines for MS MARCO Passage Ranking

This page contains instructions for running BM25 baselines on the MS MARCO passage ranking task. Note that there is a separate MS MARCO document ranking task. This exercise requires a machine with >8 GB RAM and >15 GB of free disk space. If you're using a Windows machine, equivalent commands are provided alongside the Unix-like (Linux/macOS) commands.

If you're a Waterloo student traversing the onboarding path, start here. In general, don't try to rush through this guide by just blindly copying and pasting commands into a shell; that's what I call cargo culting. Instead, really try to understand what's going on.

Learning outcomes for this guide, building on previous steps in the onboarding path:

  • Be able to use Anserini to build a Lucene inverted index on the MS MARCO passage collection.
  • Be able to use Anserini to perform a batch retrieval run on the MS MARCO passage collection with the dev queries.
  • Be able to evaluate the retrieved results above.
  • Understand the MRR metric.

What's Anserini? Well, it's the repo that you're in right now. Anserini is a toolkit (in Java) for reproducible information retrieval research built on the Lucene search library. The Lucene search library provides the core indexing and search components that underlie the popular Elasticsearch platform.

Think of it this way: Lucene provides a "kit of parts". Elasticsearch provides an assembly of those parts targeted at production search applications, with a REST-centric API. Anserini provides an alternative way of composing the same core components, targeted at information retrieval researchers. By building on Lucene, Anserini aims to bridge the gap between academic information retrieval research and the practice of building real-world search applications. That is, most things done with Anserini can be "translated" into Elasticsearch quite easily.

Data Prep

In this guide, we're just going through the mechanical steps of data prep. To better understand what you're actually doing, go through the start here guide. That guide contains exactly the same instructions, but provides more detailed explanations.

We're going to use the repository's root directory as the working directory. First, we need to download and extract the MS MARCO passage dataset:

mkdir collections/msmarco-passage

wget https://msmarco.z22.web.core.windows.net/msmarcoranking/collectionandqueries.tar.gz -P collections/msmarco-passage

# Alternative mirror:
# wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/collectionandqueries.tar.gz -P collections/msmarco-passage

tar xvfz collections/msmarco-passage/collectionandqueries.tar.gz -C collections/msmarco-passage

To confirm, collectionandqueries.tar.gz should have an MD5 checksum of 31644046b18952c1386cd4564ba2ae69.

Next, we need to convert the MS MARCO tsv collection into Anserini's jsonl files (which have one json object per line):

python tools/scripts/msmarco/convert_collection_to_jsonl.py \
  --collection-path collections/msmarco-passage/collection.tsv \
  --output-folder collections/msmarco-passage/collection_jsonl

The above script should generate 9 jsonl files in collections/msmarco-passage/collection_jsonl, each with 1M lines (except for the last one, which should have 841,823 lines).
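If you're curious, here's a minimal sketch of what the conversion amounts to (this is not the actual script, which also splits the output into the multiple 1M-line files described above): each tsv line of docid and passage text becomes one JSON object with id and contents fields.

import json

# Minimal sketch: convert docid<TAB>text lines into Anserini's jsonl format,
# one JSON object per line (output path here is illustrative).
with open('collections/msmarco-passage/collection.tsv') as inp, \
     open('docs.json', 'w') as out:
    for line in inp:
        docid, text = line.rstrip('\n').split('\t', 1)
        out.write(json.dumps({'id': docid, 'contents': text}) + '\n')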

We need to do a bit of data munging on the queries as well. There are queries in the dev set that don't have relevance judgments. Let's discard them:

python tools/scripts/msmarco/filter_queries.py \
  --qrels collections/msmarco-passage/qrels.dev.small.tsv \
  --queries collections/msmarco-passage/queries.dev.tsv \
  --output collections/msmarco-passage/queries.dev.small.tsv

The output queries file collections/msmarco-passage/queries.dev.small.tsv should contain 6980 lines.
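Conceptually, the filtering step is simple; here's a rough sketch (not the actual script): keep only the lines in queries.dev.tsv whose qid appears in the qrels file.

# Collect the qids that have relevance judgments.
with open('collections/msmarco-passage/qrels.dev.small.tsv') as f:
    judged_qids = {line.split('\t')[0] for line in f}

# Keep only the dev queries with judged qids (output path is illustrative).
with open('collections/msmarco-passage/queries.dev.tsv') as inp, \
     open('queries.dev.small.tsv', 'w') as out:
    for line in inp:
        if line.split('\t')[0] in judged_qids:
            out.write(line)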

Indexing

In building a retrieval system, there are generally two phases:

  • In the indexing phase, an indexer takes the document collection (i.e., corpus) and builds an index, which is a data structure that supports efficient retrieval.
  • In the retrieval (or search) phase, the retrieval system returns a ranked list given a query q, with the aid of the index constructed in the previous phase.

(There's also a training phase when we start to discuss models that learn from data, but we're not there yet.)

Given a (static) document collection, indexing only needs to be performed once, and hence there are fewer constraints on latency, throughput, and other aspects of performance (just needs to be "reasonable"). On the other hand, retrieval needs to be fast, i.e., low latency, high throughput, etc.

With the data prep above, we can now index the MS MARCO passage collection in collections/msmarco-passage/collection_jsonl.

If you haven't built Anserini already, build it now by following the installation instructions in the Anserini README.

We index these docs as a JsonCollection (a specification of how documents are encoded) using Anserini:

bin/run.sh io.anserini.index.IndexCollection \
  -collection JsonCollection \
  -input collections/msmarco-passage/collection_jsonl \
  -index indexes/msmarco-passage/lucene-index-msmarco \
  -generator DefaultLuceneDocumentGenerator \
  -threads 9 -storePositions -storeDocvectors -storeRaw 

For Windows:

bin\run.bat io.anserini.index.IndexCollection -collection JsonCollection -input collections\msmarco-passage\collection_jsonl -index indexes\msmarco-passage\lucene-index-msmarco -generator DefaultLuceneDocumentGenerator -threads 9 -storePositions -storeDocvectors -storeRaw

In this case, Lucene creates what is known as an inverted index.
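To build intuition for what an inverted index is, here's a toy illustration in Python (Lucene's on-disk data structures are far more sophisticated, but the core idea is the same): a mapping from each term to a postings list of the documents, and positions, where it occurs.

from collections import defaultdict

# Toy inverted index: map each term to a postings list of (docid, position) pairs.
docs = {
    'd1': 'paula deen and her brother',
    'd2': 'what is paula deen famous for',
}

index = defaultdict(list)
for docid, text in docs.items():
    for position, term in enumerate(text.split()):
        index[term].append((docid, position))

print(index['paula'])  # [('d1', 0), ('d2', 2)]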

Upon completion, we should have an index with 8,841,823 documents. Indexing speed may vary; on a modern desktop with an SSD, indexing takes a couple of minutes. On a newer MacBook Pro (e.g., an M3 model) with only 8 GB of memory, you might find that the indexing threads abort before the job finishes. This is likely caused by the JVM allocating more memory than is available on the system, leading to excessive swapping without active garbage collection. To mitigate this issue, modify run.sh to change the -Xms option to 2G and -Xmx to 6G.

Retrieval

In the above step, we've built the inverted index. Now we can perform a retrieval run using queries we've prepared:

bin/run.sh io.anserini.search.SearchCollection \
  -index indexes/msmarco-passage/lucene-index-msmarco \
  -topics collections/msmarco-passage/queries.dev.small.tsv \
  -topicReader TsvInt \
  -output runs/run.msmarco-passage.dev.small.tsv -format msmarco \
  -parallelism 4 \
  -bm25 -bm25.k1 0.82 -bm25.b 0.68 -hits 1000

For Windows:

bin\run.bat io.anserini.search.SearchCollection -index indexes\msmarco-passage\lucene-index-msmarco -topics collections\msmarco-passage\queries.dev.small.tsv -topicReader TsvInt -output runs\run.msmarco-passage.dev.small.tsv -format msmarco -parallelism 4 -bm25 -bm25.k1 0.82 -bm25.b 0.68 -hits 1000

This is the retrieval (or search) phase. We're performing retrieval in batch, on a set of queries.

Retrieval here uses a "bag-of-words" model known as BM25. A "bag of words" model just means that documents are scored based on the matching of query terms (i.e., words) that appear in the documents, without regard to the structure of the document, the order of the words, etc. BM25 is perhaps the most popular bag-of-words retrieval model; it's the default in the popular Elasticsearch platform. We'll discuss retrieval models in much more detail later.
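To make the scoring concrete, here's a sketch of one common formulation of BM25 (Lucene's implementation differs in some low-level details, such as how document lengths are encoded, so treat this as illustrative rather than as Anserini's exact scoring code):

import math

# Illustrative BM25 scoring of a single document against a query.
# df: dict mapping term -> number of documents containing it
# N: total number of documents; avg_dl: average document length in terms
def bm25_score(query_terms, doc_terms, df, N, avg_dl, k1=0.82, b=0.68):
    score = 0.0
    dl = len(doc_terms)
    for term in query_terms:
        tf = doc_terms.count(term)  # term frequency in this document
        if tf == 0 or term not in df:
            continue
        idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * dl / avg_dl))
    return score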

The above command uses BM25 with tuned parameters k1=0.82, b=0.68. The option -hits specifies the number of documents per query to be retrieved. Thus, the output file should have approximately 6980 × 1000 ≈ 7M lines.

Retrieval speed will vary by machine: On a reasonably modern desktop with an SSD, with four threads (as specified above), the run takes a couple of minutes. Adjust the parallelism by changing the -parallelism argument.

Congratulations, you've performed your first retrieval run!

Recap of what you've done: You've fed the retrieval system a bunch of queries and the retrieval run is the output. For each query, the retrieval system produced a ranked list of results (i.e., a list of hits). The retrieval run contains the ranked lists for all queries you fed to it.

Let's take a look:

$ head runs/run.msmarco-passage.dev.small.tsv
1048585	7187158	1
1048585	7187157	2
1048585	7187163	3
1048585	7546327	4
1048585	7187160	5
1048585	8227279	6
1048585	7617404	7
1048585	7187156	8
1048585	2298838	9
1048585	7187155	10

The first column is the qid (corresponding to the query). From above, we can see that qid 1048585 is the query "what is paula deen's brother". The second column is the docid of the retrieved result (i.e., the hit), and the third column is the rank position. That is, in a search interface, docid 7187158 would be shown in the top position, docid 7187157 would be shown in the second position, etc.
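For example, you could load the run file into per-query ranked lists with a few lines of Python:

from collections import defaultdict

# Map each qid to its list of docids, in rank order (the run file is already
# sorted by rank within each query).
ranked_lists = defaultdict(list)
with open('runs/run.msmarco-passage.dev.small.tsv') as f:
    for line in f:
        qid, docid, rank = line.strip().split('\t')
        ranked_lists[qid].append(docid)

print(ranked_lists['1048585'][:3])  # top three hits for this query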

You can grep through the collection to see what the actual passage is:

$ grep 7187158 collections/msmarco-passage/collection.tsv
7187158	Paula Deen and her brother Earl W. Bubba Hiers are being sued by a former general manager at Uncle Bubba's… Paula Deen and her brother Earl W. Bubba Hiers are being sued by a former general manager at Uncle Bubba's…

In this case, the document (hit) seems relevant. That is, it contains information that addresses the information need. So here, the retrieval system "did well". Remember that this document was indeed marked relevant in the qrels, as we saw in the start here guide.

As an additional sanity check, run the following:

$ cut -f 1 runs/run.msmarco-passage.dev.small.tsv | uniq | wc
    6980    6980   51039

This tells us that there are 6980 unique values in the first column of the run file. Since the first column holds the qid, this means that the file contains ranked lists for 6980 queries, which checks out.

Evaluation

Finally, we can evaluate the retrieved documents using the official MS MARCO evaluation script:

python tools/scripts/msmarco/msmarco_passage_eval.py \
 collections/msmarco-passage/qrels.dev.small.tsv runs/run.msmarco-passage.dev.small.tsv

The output should look like this:

#####################
MRR @10: 0.18741227770955546
QueriesRanked: 6980
#####################

(Yea, the number of digits of precision is a bit... excessive)

Remember from the start here guide that with relevance judgments (qrels), we can automatically evaluate the retrieval system output (i.e., the run).

The final ingredient is a metric, i.e., how to quantify the "quality" of a ranked list. Here, we're using a metric called MRR, or mean reciprocal rank. The idea is quite simple: We look at the rank position of the first relevant docid. If it appears at rank 1, the system gets a score of one. If it appears at rank 2, the system gets a score of 1/2. If it appears at rank 3, the system gets a score of 1/3. And so on. MRR@10 means that we only go down to rank 10. If the relevant docid doesn't appear in the top 10, then the system gets a score of zero.

That's the score of a query. We take the average of the scores across all queries (6980 in this case), and we arrive at the score for the entire run.
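Here's a sketch of the computation in Python (not the official evaluation script, but it should produce essentially the same MRR@10, assuming the tab-separated qrels format of qid, iteration, docid, label and the msmarco run format of qid, docid, rank shown above):

from collections import defaultdict

# Relevant docids for each qid, from the qrels file.
qrels = defaultdict(set)
with open('collections/msmarco-passage/qrels.dev.small.tsv') as f:
    for line in f:
        qid, _, docid, _ = line.strip().split('\t')
        qrels[qid].add(docid)

# Ranked list of docids for each qid, from the run file.
run = defaultdict(list)
with open('runs/run.msmarco-passage.dev.small.tsv') as f:
    for line in f:
        qid, docid, _ = line.strip().split('\t')
        run[qid].append(docid)

# Reciprocal rank of the first relevant hit in the top 10, averaged over queries.
total = 0.0
for qid, docids in run.items():
    for rank, docid in enumerate(docids[:10], start=1):
        if docid in qrels[qid]:
            total += 1.0 / rank
            break

print(f'MRR @10: {total / len(run)}')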

You can find this run on the MS MARCO Passage Ranking Leaderboard as the entry named "BM25 (Lucene8, tuned)", dated 2019/06/26. So you've just reproduced (part of) a leaderboard submission!

We can also use the official TREC evaluation tool, trec_eval, to compute other metrics than MRR@10. For that we first need to convert runs and qrels files to the TREC format:

python tools/scripts/msmarco/convert_msmarco_to_trec_run.py \
  --input runs/run.msmarco-passage.dev.small.tsv \
  --output runs/run.msmarco-passage.dev.small.trec

python tools/scripts/msmarco/convert_msmarco_to_trec_qrels.py \
  --input collections/msmarco-passage/qrels.dev.small.tsv \
  --output collections/msmarco-passage/qrels.dev.small.trec
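For reference, the TREC run format has six whitespace-separated columns per line: qid, the literal string Q0, docid, rank, score, and a run tag. The run-file conversion amounts to something like the following sketch (the actual script may fill in the score column differently; a synthetic descending score is used here so that higher scores correspond to better ranks):

# Illustrative conversion from the msmarco run format to the TREC run format.
with open('runs/run.msmarco-passage.dev.small.tsv') as inp, \
     open('runs/run.msmarco-passage.dev.small.trec', 'w') as out:
    for line in inp:
        qid, docid, rank = line.strip().split('\t')
        score = 1000 - int(rank) + 1  # synthetic score: higher = better rank
        out.write(f'{qid} Q0 {docid} {rank} {score} Anserini\n')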

And run the trec_eval tool:

bin/trec_eval -c -mrecall.1000 -mmap \
  collections/msmarco-passage/qrels.dev.small.trec \
  runs/run.msmarco-passage.dev.small.trec

The output should be:

map                   	all	0.1957
recall_1000           	all	0.8573

In many retrieval applications, average precision and recall@1000 are the two metrics we care about the most.
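As with MRR@10, recall@1000 is easy to compute by hand: for each query, it's the fraction of judged-relevant docids that appear anywhere in the top 1000 results, averaged over queries. A self-contained sketch (again, not the official tool):

from collections import defaultdict

# Load qrels and the run, as in the MRR@10 sketch above.
qrels = defaultdict(set)
with open('collections/msmarco-passage/qrels.dev.small.tsv') as f:
    for line in f:
        qid, _, docid, _ = line.strip().split('\t')
        qrels[qid].add(docid)

run = defaultdict(list)
with open('runs/run.msmarco-passage.dev.small.tsv') as f:
    for line in f:
        qid, docid, _ = line.strip().split('\t')
        run[qid].append(docid)

# Fraction of relevant docids retrieved in the top 1000, averaged over queries.
recall_sum = sum(
    len(qrels[qid] & set(docids[:1000])) / len(qrels[qid])
    for qid, docids in run.items() if qrels[qid]
)
print(f'recall_1000: {recall_sum / len(run)}')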

You can also use trec_eval to compute MRR@10, which gives a result identical to the above (just with fewer digits of precision):

bin/trec_eval -c -M 10 -m recip_rank \
  collections/msmarco-passage/qrels.dev.small.trec \
  runs/run.msmarco-passage.dev.small.trec

This is just a different command-line incantation of trec_eval for computing MRR@10. If you add -q, the tool will spit out the MRR@10 for each individual query (for all 6980 queries), in addition to the final average:

bin/trec_eval -q -c -M 10 -m recip_rank \
  collections/msmarco-passage/qrels.dev.small.trec \
  runs/run.msmarco-passage.dev.small.trec

We can find the MRR@10 for qid 1048585 above:

$ bin/trec_eval -q -c -M 10 -m recip_rank \
    collections/msmarco-passage/qrels.dev.small.trec \
    runs/run.msmarco-passage.dev.small.trec | grep 1048585

recip_rank            	1048585	1.0000

This is consistent with the example we worked through above. At this point, make sure that the connections between a query, the relevance judgments for a query, the ranked list, and the metric (MRR@10) are clear in your mind. Work through a few more examples (take another query, look at its qrels and ranked list, and compute its MRR@10 by hand) to convince yourself that you understand what's going on.

The tl;dr is that there are different formats for run files and lots of different metrics you can compute. trec_eval is a standard tool used by information retrieval researchers (which has many command-line options that you'll slowly learn over time). Researchers have been trying to answer the question "how do we know if a search result is good and how do we measure it" for over half a century... and the question still has not been fully resolved. In short, it's complicated.

At this time, look back through the learning outcomes again and make sure you're good. As a next step in the onboarding path, you basically do the same thing again in Python with Pyserini (as opposed to Java with Anserini here).

Before you move on, however, add an entry in the "Reproduction Log" at the bottom of this page, following the same format: use yyyy-mm-dd, make sure you're using a commit id that's on the main trunk of Anserini, and use its 7-hexadecimal prefix for the link anchor text. In the description of your pull request, please provide some details on your setup (e.g., operating system, environment and configuration, etc.). In addition, also provide some indication of success (e.g., everything worked) or document issues you encountered. If you think this guide can be improved in any way (e.g., you caught a typo or think a clarification is warranted), feel free to include it in the pull request.

BM25 Tuning

This section is not part of the onboarding path, so feel free to skip.

Note that this figure differs slightly from the value reported in Document Expansion by Query Prediction, which uses the Anserini (system-wide) default of k1=0.9, b=0.4.

Tuning was accomplished with tools/scripts/msmarco/tune_bm25.py, using the queries found here; the basic approach is a grid search over parameter values in tenth increments. There are five different sets of 10k sampled queries (drawn using the shuf command). We tuned on each individual set and then averaged parameter values across all five sets (this has the effect of regularization). In separate trials, we optimized for:

  • recall@1000, since Anserini output serves as input to downstream rerankers (e.g., based on BERT), and we want to maximize the number of relevant documents the rerankers have to work with;
  • MRR@10, for the case where Anserini output is directly presented to users (i.e., no downstream reranking).

It turns out that optimizing for MRR@10 and MAP yields the same settings.
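Here's a sketch of the per-set grid search (not the actual tune_bm25.py script; the evaluate helper and the parameter ranges below are illustrative stand-ins for running SearchCollection with the given parameters and scoring the resulting run with the target metric):

import itertools

def evaluate(k1, b):
    # Placeholder: in a real sweep this would launch a retrieval run with
    # these parameters and return the target metric (e.g., recall@1000 or MRR@10).
    return 0.0

best_params, best_score = None, float('-inf')
for k1, b in itertools.product(range(1, 21), range(1, 11)):
    k1, b = k1 / 10, b / 10  # illustrative ranges: k1 in 0.1..2.0, b in 0.1..1.0
    score = evaluate(k1, b)
    if score > best_score:
        best_params, best_score = (k1, b), score

print(best_params, best_score)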

Here's the comparison between the Anserini default and optimized parameters:

| Setting                                      | MRR@10 | MAP    | Recall@1000 |
|:---------------------------------------------|-------:|-------:|------------:|
| Default (k1=0.9, b=0.4)                      | 0.1840 | 0.1926 | 0.8526      |
| Optimized for recall@1000 (k1=0.82, b=0.68) | 0.1874 | 0.1957 | 0.8573      |
| Optimized for MRR@10/MAP (k1=0.60, b=0.62)  | 0.1892 | 0.1972 | 0.8555      |

As mentioned above, the BM25 run with k1=0.82, b=0.68 corresponds to the entry "BM25 (Lucene8, tuned)" dated 2019/06/26 on the MS MARCO Passage Ranking Leaderboard. The BM25 run with default parameters k1=0.9, b=0.4 roughly corresponds to the entry "BM25 (Anserini)" dated 2019/04/10 (but Anserini was using Lucene 7.6 at the time).

Reproduction Log*