Preliminary draft © 2008 Cambridge UP
4 Index construction
access list. Search results are then intersected with this list. However, such an index is difficult to maintain when access permissions change – we discussed
these difficulties in the context of incremental indexing for regular postings lists in the last section. It also requires the processing of very long postings
lists for users with access to large document subsets. User membership is therefore often verified by retrieving access information directly from the file
system at query time – even though this slows down retrieval.
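As a minimal illustration of the filtering approach just described, the following sketch intersects ranked results with a user's access list (the function and data shapes are hypothetical, not from the text):

```python
def filter_by_access(ranked_doc_ids, user_access_list):
    """Intersect ranked search results with the user's access list,
    preserving ranking order. A sketch of document-level security
    filtering; real systems often check permissions in the file
    system at query time instead, as discussed above."""
    accessible = set(user_access_list)
    return [d for d in ranked_doc_ids if d in accessible]
```

Note that for users with access to large document subsets, `user_access_list` itself becomes a very long list, which is exactly the cost the text points out.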
We discussed indexes for storing and retrieving terms as opposed to documents in Chapter 3.
Exercise 4.5
Can spelling correction compromise document-level security? Consider the case where a spelling correction is based on documents the user does not have access to.
4.7 References and further reading
Witten et al. (1999, ch. 5) present an extensive treatment of the subject of index construction and additional indexing algorithms with different tradeoffs of memory, disk space and time. In general, blocked sort-based indexing does well on all three counts. However, if conserving memory or disk space is the main criterion, then other algorithms may be a better choice. See Witten et al. (1999), Tables 5.4 and 5.5; BSBI is closest to "sort-based multiway merge", but the two algorithms differ in dictionary structure and use of compression.

Moffat and Bell (1995) show how to construct an index "in-situ", that is, with disk space usage close to what is needed for the final index and with a minimum of additional temporary files (cf. also Harman and Candela (1990)). They give Lesk (1988) and Somogyi (1990) credit for being among the first to employ sorting for index construction.

The SPIMI method in Section 4.3 is from Heinz and Zobel (2003). We have simplified several aspects of the algorithm, including compression and the fact that each term's data structure also contains, in addition to the postings list, its document frequency and housekeeping information. We recommend Heinz and Zobel (2003) and Zobel and Moffat (2006) as up-to-date in-depth treatments of index construction. Other algorithms with good scaling properties with respect to vocabulary size require several passes through the data, e.g., FAST-INV (Fox and Lee 1991, Harman et al. 1992).

The MapReduce architecture was introduced by Dean and Ghemawat (2004). An open source implementation of MapReduce is available at http://lucene.apache.org/hadoop. Ribeiro-Neto et al. (1999) and Melnik et al. (2001) describe other approaches to distributed indexing. Introductory chapters on distributed IR are Baeza-Yates and Ribeiro-Neto (1999, ch. 9) and Grossman and Frieder (2004, ch. 8). See also Callan (2000).
    step                                                time
    1     reading of collection (line 4)
    2     10 initial sorts of 10^7 records each (line 5)
    3     writing of 10 blocks (line 6)
    4     total disk transfer time for merging (line 7)
    5     time of actual merging (line 7)
          total

◮ Table 4.3  The five steps in constructing an index for Reuters-RCV1 in blocked sort-based indexing. Line numbers refer to Figure 4.2.
Lester et al. (2005) and Büttcher and Clarke (2005a) analyze the properties of logarithmic merging and compare it with other construction methods. One of the first uses of this method was in Lucene (http://lucene.apache.org). Other dynamic indexing methods are discussed by Büttcher et al. (2006) and Lester et al. (2006). The latter paper also discusses the strategy of replacing the old index by one built from scratch.

Heinz et al. (2002) compare data structures for accumulating the vocabulary in memory. Büttcher and Clarke (2005b) discuss security models for a common inverted index for multiple users. A detailed characterization of the Reuters-RCV1 collection can be found in Lewis et al. (2004). NIST distributes the collection (see http://trec.nist.gov/data/reuters/reuters.html).

Garcia-Molina et al. (1999, ch. 2) review computer hardware relevant to system design in depth.

An effective indexer for enterprise search needs to be able to communicate efficiently with a number of applications that hold text data in corporations, including Microsoft Outlook, IBM's Lotus software, databases like Oracle and MySQL, content management systems like Open Text, and enterprise resource planning software like SAP.
4.8 Exercises
Exercise 4.6
Total index construction time in blocked sort-based indexing is broken down in Table 4.3. Fill out the time column of the table for Reuters-RCV1 assuming a system with the parameters given in Table 4.1.
Exercise 4.7
Repeat Exercise 4.6 for the larger collection in Table 4.4. Choose a block size that is realistic for current technology (remember that a block should easily fit into main memory). How many blocks do you need?
Preliminary draft c 2008 Cambridge UP
    symbol   statistic             value
    N        documents             1,000,000,000
    L_ave    tokens per document   1000
    M        distinct terms        44,000,000

◮ Table 4.4  Collection statistics for a large collection.
Exercise 4.8
Assume that we have a collection of modest size whose index can be constructed with the simple in-memory indexing algorithm in Figure 1.4 (page 8). For this collection, compare memory, disk and time requirements of the simple algorithm in Figure 1.4 and blocked sort-based indexing.
Exercise 4.9
Assume that machines in MapReduce have 100 GB of disk space each. Assume further that the postings list of the term "the" has a size of 200 GB. Then the MapReduce algorithm as described cannot be run to construct the index. How would you modify MapReduce so that it can handle this case?
Exercise 4.10
For optimal load balancing, the inverters in MapReduce must get segmented postings files of similar sizes. For a new collection, the distribution of key-value pairs may not
be known in advance. How would you solve this problem?
Exercise 4.11
Apply MapReduce to the problem of counting how often each term occurs in a set of files. Specify map and reduce operations for this task. Write down an example along
the lines of Figure 4.6.
Exercise 4.12
We claimed above (page 80) that an auxiliary index can impair the quality of collection statistics. An example is the term weighting method idf, which is defined as log(N/df_i) where N is the total number of documents and df_i is the number of documents that term i occurs in (Section 6.2.1, page 117). Show that even a small auxiliary index can cause significant error in idf when it is computed on the main index only. Consider a rare term that suddenly occurs frequently (e.g., Flossie as in Tropical Storm Flossie).
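To make the size of the effect concrete, here is a small numeric sketch; the document and frequency counts below are invented for illustration, not taken from the text:

```python
import math

def idf(N, df):
    # idf as defined in the exercise: log(N / df_i), base 10 as in Section 6.2.1
    return math.log10(N / df)

# Main index: a rare term occurring in 2 of 1,000,000 documents.
main_only = idf(1_000_000, 2)

# A small auxiliary index of 10,000 new documents in which the term
# suddenly occurs in 1,000 of them (the "Tropical Storm Flossie" scenario).
combined = idf(1_000_000 + 10_000, 2 + 1_000)
```

Here `main_only` is about 5.7 while `combined` is about 3.0, so computing idf on the main index alone overstates the term's weight by a wide margin even though the auxiliary index is only 1% of the collection.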
DRAFT © July 12, 2008 Cambridge University Press. Feedback welcome.
5 Index compression
Chapter 1
introduced the dictionary and the inverted index as the central data structures in information retrieval. In this chapter, we employ a number
of compression techniques for dictionary and inverted index that are essen- tial for efficient IR systems.
One benefit of compression is immediately clear. We will need less disk space. As we will see, compression ratios of 1:4 are easy to achieve, potentially cutting the cost of storing the index by 75%. There are two more subtle benefits of compression. The first is increased
use of caching. Search systems use some parts of the dictionary and the index much more than others. For example, if we cache the postings list of a fre-
quently used query term t, then the computations necessary for responding to the one-term query t can be entirely done in memory. With compression,
we can fit a lot more information into main memory. Instead of having to expend a disk seek when processing a query with t, we instead access its
postings list in memory and decompress it. As we will see below, there are simple and efficient decompression methods, so that the penalty of having
to decompress the postings list is small. As a result, we are able to decrease the response time of the IR system substantially. Since memory is a more
expensive resource than disk space, increased speed due to caching – rather than decreased space requirements – is often the prime motivator for com-
pression.
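The caching scenario can be sketched as follows; the class is hypothetical, and `zlib` stands in for whichever postings compression scheme the index actually uses:

```python
import zlib

class PostingsCache:
    """Keep compressed postings lists of frequent terms in memory,
    so a one-term query needs no disk seek, only in-memory
    decompression."""

    def __init__(self):
        self._store = {}  # term -> compressed postings bytes

    def put(self, term, postings):
        # Store the postings list in compressed form; compression lets
        # many more lists fit into the same amount of main memory.
        data = ",".join(map(str, postings)).encode()
        self._store[term] = zlib.compress(data)

    def get(self, term):
        # Decompressing in memory is far cheaper than a disk seek.
        data = zlib.decompress(self._store[term])
        return [int(x) for x in data.split(b",")]
```

A one-term query for a cached term `t` can then be answered entirely from memory via `cache.get(t)`, which is the speedup the text describes.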
The second more subtle advantage of compression is faster transfer of data from disk to memory. Efficient decompression algorithms run so fast on
modern hardware that the total time of transferring a compressed chunk of data from disk and then decompressing it is usually less than transferring
the same chunk of data in uncompressed form. For instance, we can reduce I/O (input/output) time by loading a much smaller compressed postings
list, even when you add on the cost of decompression. So in most cases, the retrieval system will run faster on compressed postings lists than on uncom-
pressed postings lists.
If the main goal of compression is to conserve disk space, then the speed
Preliminary draft c 2008 Cambridge UP
                    (distinct) terms       non-positional postings    tokens (= number of position
                                                                      entries in postings)
                    number     ∆%   T%     number        ∆%   T%      number        ∆%   T%
    unfiltered      484,494                109,971,179                197,879,290
    no numbers      473,723    −2   −2     100,680,242   −8   −8      179,158,204   −9   −9
    case folding    391,523   −17  −19      96,969,056   −3  −12      179,158,204   −0   −9
    30 stop words   391,493    −0  −19      83,390,443  −14  −24      121,857,825  −31  −38
    150 stop words  391,373    −0  −19      67,001,847  −30  −39       94,516,599  −47  −52
    stemming        322,383   −17  −33      63,812,300   −4  −42       94,516,599   −0  −52

◮ Table 5.1  The effect of preprocessing on the number of terms, non-positional postings, and tokens for RCV1. "∆%" indicates the reduction in size from the previous line, except that "30 stop words" and "150 stop words" both use "case folding" as their reference line. "T%" is the cumulative ("total") reduction from unfiltered. We performed stemming with the Porter stemmer (Chapter 2, page 33).
of compression algorithms is of no concern. But for improved cache uti- lization and faster disk-to-memory transfer, decompression speeds must be
high. The compression algorithms we discuss in this chapter are highly effi- cient and can therefore serve all three purposes of index compression.
In this chapter, we define a posting as a docID in a postings list. For example, the postings list (6; 20, 45, 100), where 6 is the termID of the list's term, contains 3 postings. As discussed in Section 2.4.2 (page 41), postings in most search systems also contain frequency and position information; but we will only consider simple docID postings here. See Section 5.4 for references on
compressing frequencies and positions. This chapter first gives a statistical characterization of the distribution of
the entities we want to compress – terms and postings in large collections (Section 5.1). We then look at compression of the dictionary, using the dictionary-as-a-string method and blocked storage (Section 5.2). Section 5.3 describes two techniques for compressing the postings file, variable byte encoding and γ encoding.
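As a preview of that material, a common formulation of variable byte encoding can be sketched as follows: each byte carries 7 payload bits, and the high bit marks the final byte of a number. This is a sketch of the standard technique; details of the scheme in Section 5.3 may differ.

```python
def vb_encode_number(n):
    # Split n into 7-bit chunks, most significant chunk first.
    chunks = []
    while True:
        chunks.insert(0, n % 128)
        if n < 128:
            break
        n //= 128
    chunks[-1] += 128  # set the high bit on the final byte of the number
    return chunks

def vb_decode(byte_stream):
    # Accumulate 7-bit chunks until a byte with the high bit set ends a number.
    numbers, n = [], 0
    for b in byte_stream:
        if b < 128:
            n = 128 * n + b                       # continuation byte
        else:
            numbers.append(128 * n + (b - 128))   # final byte
            n = 0
    return numbers
```

For example, the docID gap 824 = 6 × 128 + 56 encodes to the two bytes (6, 184), since 184 = 56 + 128 carries the terminating high bit; small gaps such as 5 need only a single byte, which is where the space savings for postings lists come from.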
5.1 Statistical properties of terms in information retrieval