questionable whether there is enough training data to do more. Losses from data sparseness (see the discussion on page 260) tend to outweigh any gains from richer models. This is an example of the bias-variance tradeoff (cf. Section 14.6, page 308): with limited training data, a more constrained model tends to perform better. In addition, unigram models are more efficient to estimate and apply than higher-order models. Nevertheless, the importance of phrase and proximity queries in IR in general suggests that future work should make use of more sophisticated language models, and some has begun to (see Section 12.5, page 252). Indeed, making this move parallels the model of van Rijsbergen in Chapter 11 (page 231).

12.1.3 Multinomial distributions over words

Under the unigram language model the order of words is irrelevant, and so such models are often called "bag of words" models, as discussed in Chapter 6 (page 117). Even though there is no conditioning on preceding context, this model nevertheless still gives the probability of a particular ordering of terms. However, any other ordering of this bag of terms will have the same probability. So, really, we have a multinomial distribution over words. So long as we stick to unigram models, the language model name and motivation could be viewed as historical rather than necessary. We could instead just refer to the model as a multinomial model. From this perspective, the equations presented above do not present the multinomial probability of a bag of words, since they do not sum over all possible orderings of those words, as is done by the multinomial coefficient (the first term on the right-hand side) in the standard presentation of a multinomial model:

(12.7)    P(d) = \binom{L_d}{tf_{t_1,d}\ tf_{t_2,d}\ \cdots\ tf_{t_M,d}} \, P(t_1)^{tf_{t_1,d}} \, P(t_2)^{tf_{t_2,d}} \cdots P(t_M)^{tf_{t_M,d}}

Here, L_d = \sum_{1 \le i \le M} tf_{t_i,d} is the length of document d, M is the size of the term vocabulary, and the products are now over the terms in the vocabulary, not the positions in the document. However, just as with STOP probabilities, in practice we can also leave out the multinomial coefficient in our calculations, since, for a particular bag of words, it will be a constant, and so it has no effect on the likelihood ratio of two different models generating a particular bag of words. Multinomial distributions also appear in Section 13.2 (page 258).
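To make the cancellation concrete, here is a minimal Python sketch (not from the book; the bag of words and the two unigram models M1 and M2 are made up for illustration). It computes P(d) from Equation (12.7) with and without the multinomial coefficient and shows that the likelihood ratio of the two models is the same either way.

```python
from math import factorial, prod
from collections import Counter

def multinomial_coefficient(tf):
    # L_d! / (tf_{t1,d}! * tf_{t2,d}! * ... * tf_{tM,d}!), the first term in (12.7)
    length = sum(tf.values())
    return factorial(length) // prod(factorial(c) for c in tf.values())

def bag_probability(tf, model, with_coefficient=True):
    # P(d) under a unigram model given as a dict term -> P(term)
    p = prod(model[t] ** c for t, c in tf.items())
    return multinomial_coefficient(tf) * p if with_coefficient else p

# Toy bag of words and two made-up unigram models over the same vocabulary.
tf = Counter("the martian has landed on the red planet".split())
M1 = {"the": 0.20, "martian": 0.01, "has": 0.05, "landed": 0.02,
      "on": 0.10, "red": 0.02, "planet": 0.01}
M2 = {"the": 0.15, "martian": 0.02, "has": 0.04, "landed": 0.03,
      "on": 0.09, "red": 0.01, "planet": 0.02}

# The coefficient is the same constant for both models, so the likelihood
# ratio of M1 to M2 is unchanged whether or not we include it.
for with_coef in (True, False):
    ratio = bag_probability(tf, M1, with_coef) / bag_probability(tf, M2, with_coef)
    print(with_coef, ratio)
```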
The fundamental problem in designing language models is that we do not know what exactly we should use as the model M_d. However, we do generally have a sample of text that is representative of that model. This problem makes a lot of sense in the original, primary uses of language models. For example, in speech recognition, we have a training sample of spoken text. But we have to expect that, in the future, users will use different words and in different sequences, which we have never observed before, and so the model has to generalize beyond the observed data to allow unknown words and sequences. This interpretation is not so clear in the IR case, where a document is finite and usually fixed. The strategy we adopt in IR is as follows. We pretend that the document d is only a representative sample of text drawn from a model distribution, treating it like a fine-grained topic. We then estimate a language model from this sample, use that model to calculate the probability of observing any word sequence, and, finally, rank documents according to their probability of generating the query.
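As a rough sketch of that strategy (an illustration under the simplest possible assumptions, not the book's implementation): estimate a maximum-likelihood unigram model from each document and rank documents by the probability that their model generates the query. The two documents below are toy examples; note that without smoothing, a query term absent from a document drives its score to zero, a limitation that smoothing (discussed later in the chapter) addresses.

```python
from collections import Counter

def unigram_mle(doc_tokens):
    # Maximum-likelihood unigram model: P(t | M_d) = tf_{t,d} / L_d
    counts = Counter(doc_tokens)
    length = len(doc_tokens)
    return {t: c / length for t, c in counts.items()}

def query_likelihood(query_tokens, model):
    # P(q | M_d) under the unigram model; terms absent from d get probability 0
    p = 1.0
    for t in query_tokens:
        p *= model.get(t, 0.0)
    return p

# Two toy documents; rank them by the probability of generating the query.
docs = {
    "d1": "xyzzy reports a profit but revenue is down".split(),
    "d2": "quorus narrows quarter loss but revenue decreases further".split(),
}
query = "revenue down".split()

scores = {d: query_likelihood(query, unigram_mle(tokens)) for d, tokens in docs.items()}
for d, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(d, score)
```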
Exercise 12.1 [⋆]
Including stop probabilities in the calculation, what will the sum of the probability estimates of all strings in the language of length 1 be? Assume that you generate a word and then decide whether to stop or not (i.e., the null string is not part of the language).

Exercise 12.2 [⋆]
If the stop probability is omitted from calculations, what will the sum of the scores assigned to strings in the language of length 1 be?

Exercise 12.3 [⋆]
What is the likelihood ratio of the document according to M_1 and M_2 in Example 12.2?

Exercise 12.4 [⋆]
No explicit STOP probability appeared in Example 12.2. Assuming that the STOP probability of each model is 0.1, does this change the likelihood ratio of a document according to the two models?

Exercise 12.5 [⋆⋆]
How might a language model be used in a spelling correction system? In particular, consider the case of context-sensitive spelling correction, and correcting incorrect usages of words, such as "their" in "Are you their?" (See Section 3.5, page 65, for pointers to some literature on this topic.)

12.2 The query likelihood model