◮ Figure 7.5 A complete search system. Data paths are shown primarily for a free text query.

… tiered positional indexes, indexes for spelling correction and other tolerant retrieval, and structures for accelerating inexact top-K retrieval. A free text user query (top center) is sent down to the indexes both directly and through a module for generating spelling-correction candidates. As noted in Chapter 3, the latter may optionally be invoked only when the original query fails to retrieve enough results. Retrieved documents (dark arrow) are passed to a scoring module that computes scores based on machine-learned ranking (MLR), a technique that builds on Section 6.1.2 and is further developed in Section 15.4.1, for scoring and ranking documents. Finally, these ranked documents are rendered as a results page.

Exercise 7.9
Explain how the postings intersection algorithm first introduced in Section 1.3 can be adapted to find the width ω of the smallest window that contains all query terms.

Exercise 7.10
Adapt this procedure to work when not all query terms are present in a document.

7.3 Vector space scoring and query operator interaction
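One possible approach to the windowed intersection of Exercises 7.9 and 7.10 above can be sketched as follows. This is not the book's solution; it assumes each query term's positions within a document are available as a sorted list (as a positional index would provide), and the function name is illustrative.

```python
def smallest_window(positions_by_term):
    """Width of the smallest window containing at least one occurrence
    of every query term, or None if some term is absent (Exercise 7.10).
    positions_by_term: one sorted list of positions per query term."""
    if any(not p for p in positions_by_term):
        return None  # a missing term means no window covers all terms
    # Merge all positions, tagging each with the term it belongs to.
    merged = sorted((pos, t) for t, plist in enumerate(positions_by_term)
                    for pos in plist)
    need = len(positions_by_term)
    count = {}          # term -> occurrences inside the current window
    covered = 0         # number of distinct terms inside the window
    best = None
    left = 0
    for right in range(len(merged)):
        t = merged[right][1]
        count[t] = count.get(t, 0) + 1
        if count[t] == 1:
            covered += 1
        # Shrink from the left while the window still covers every term.
        while covered == need:
            width = merged[right][0] - merged[left][0] + 1
            if best is None or width < best:
                best = width
            lt = merged[left][1]
            count[lt] -= 1
            if count[lt] == 0:
                covered -= 1
            left += 1
    return best

print(smallest_window([[1, 10], [3, 20]]))  # 3
```

The sliding-window pass over the merged position list visits each posting at most twice, so the adaptation keeps the linear-merge character of the Section 1.3 intersection.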
We introduced the vector space model as a paradigm for free text queries. We conclude this chapter by discussing how the vector space scoring model relates to the query operators we have studied in earlier chapters. The relationship should be viewed at two levels: in terms of the expressiveness of queries that a sophisticated user may pose, and in terms of the index that supports the evaluation of the various retrieval methods. In building a search engine, we may opt to support multiple query operators for an end user. In doing so we need to understand what components of the index can be shared for executing various query operators, as well as how to handle user queries that mix various query operators.

Vector space scoring supports so-called free text retrieval, in which a query is specified as a set of words without any query operators connecting them. It allows documents matching the query to be scored and thus ranked, unlike the Boolean, wildcard and phrase queries studied earlier. Classically, the interpretation of such free text queries was that at least one of the query terms be present in any retrieved document. More recently, however, web search engines such as Google have popularized the notion that a set of terms typed into their query boxes (thus, on the face of it, a free text query) carries the semantics of a conjunctive query that only retrieves documents containing all or most query terms.

Boolean retrieval

Clearly a vector space index can be used to answer Boolean queries, as long as the weight of a term t in the document vector for d is non-zero whenever t occurs in d. The reverse is not true, since a Boolean index does not by default maintain term weight information.
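The observation that a vector space index subsumes a Boolean one can be made concrete with a small sketch. The toy index below (term → {doc id: tf-idf weight}) and the function name are illustrative, not from the book: a conjunctive Boolean query is answered by intersecting the sets of documents in which every query term has a non-zero weight.

```python
# Illustrative vector space index: term -> {doc_id: tf-idf weight}.
index = {
    "brutus":    {1: 0.41, 2: 0.12},
    "caesar":    {1: 0.30, 2: 0.25, 3: 0.17},
    "calpurnia": {2: 0.33},
}

def boolean_and(terms, index):
    """Answer a Boolean AND query from a vector space index: a document
    qualifies iff every query term has a non-zero weight in it."""
    postings = [set(index.get(t, {})) for t in terms]
    if not postings:
        return set()
    result = postings[0]
    for p in postings[1:]:
        result &= p
    return result

print(sorted(boolean_and(["brutus", "caesar"], index)))  # [1, 2]
```

The reverse direction fails exactly as the text says: from the bare document sets of a Boolean index there is no way to recover the weights needed for ranked retrieval.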
There is no easy way of combining vector space and Boolean queries from a user’s standpoint: vector space queries are fundamentally a form of evidence accumulation, where the presence of more query terms in a document adds to the score of a document. Boolean retrieval, on the other hand, requires a user to specify a formula for selecting documents through the presence (or absence) of specific combinations of keywords, without inducing any relative ordering among them. Mathematically, it is in fact possible to invoke so-called p-norms to combine Boolean and vector space queries, but we know of no system that makes use of this fact.

Wildcard queries

Wildcard and vector space queries require different indexes, except at the basic level that both can be implemented using postings and a dictionary (e.g., a dictionary of trigrams for wildcard queries). If a search engine allows a user to specify a wildcard operator as part of a free text query (for instance, the query rom* restaurant), we may interpret the wildcard component of the query as spawning multiple terms in the vector space (in this example, rome, …)
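The idea of a wildcard component spawning multiple vector space terms can be sketched as below. The linear scan of the dictionary is illustrative only; a real engine would match the prefix against a search tree or a k-gram index (Chapter 3), and the dictionary contents here are assumed for the example.

```python
# Illustrative dictionary of index terms.
dictionary = ["restaurant", "roman", "rome", "romantic", "root"]

def expand_wildcard(wildcard, dictionary):
    """Expand a trailing-* wildcard into all matching dictionary terms.
    (A real engine would use a B-tree or k-gram index, not a scan.)"""
    prefix = wildcard.rstrip("*")
    return [t for t in dictionary if t.startswith(prefix)]

# The free text query "rom* restaurant" becomes a bag of vector space
# terms: the literal term plus every expansion of the wildcard.
query_terms = ["restaurant"] + expand_wildcard("rom*", dictionary)
print(query_terms)  # ['restaurant', 'roman', 'rome', 'romantic']
```

Each spawned term then participates in scoring like any other query term, so documents matching any expansion of rom* can accumulate evidence alongside restaurant.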