20 Web crawling and indexes
20.1 Overview
Web crawling is the process by which we gather pages from the Web, in order to index them and support a search engine. The objective of crawling is to quickly and efficiently gather as many useful web pages as possible, together with the link structure that interconnects them. In Chapter 19 we studied the complexities of the Web stemming from its creation by millions of uncoordinated individuals. In this chapter we study the resulting difficulties for crawling the Web. The focus of this chapter is the component shown in Figure 19.7 as the web crawler; it is sometimes referred to as a spider.

The goal of this chapter is not to describe how to build the crawler for a full-scale commercial web search engine. We focus instead on a range of issues that are generic to crawling, from the student project scale to substantial research projects. We begin in Section 20.1.1 by listing the desiderata for web crawlers, and then discuss in Section 20.2 how each of these issues is addressed. The remainder of this chapter describes the architecture and some implementation details for a distributed web crawler that satisfies these features. Section 20.3 discusses distributing indexes across many machines for a web-scale implementation.
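To make the process just described concrete, here is a minimal sketch of the basic crawl loop: maintain a frontier of URLs waiting to be fetched, download each page, and add newly discovered links back to the frontier. This is an illustration only, not the distributed architecture developed later in the chapter; the seed URL, page limit, and all names in it are assumptions for this sketch.

```python
# A minimal single-machine crawl loop (illustrative sketch, not the
# chapter's architecture). Uses only the Python standard library.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collect the href targets of <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_pages=100):
    frontier = deque([seed])              # URLs waiting to be fetched
    seen = {seed}                         # never enqueue the same URL twice
    fetched = 0
    while frontier and fetched < max_pages:
        url = frontier.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except Exception:
            continue                      # skip unreachable pages
        fetched += 1
        parser = LinkExtractor()
        parser.feed(html)
        yield url, html                   # hand the page to the indexer
        for href in parser.links:
            absolute = urljoin(url, href) # resolve relative links
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)
```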
20.1.1 Features a crawler must provide

We list the desiderata for web crawlers in two categories: features that web crawlers must provide, followed by features they should provide.

Robustness: The Web contains servers that create spider traps, which are generators of web pages that mislead crawlers into getting stuck fetching an infinite number of pages in a particular domain. Crawlers must be designed to be resilient to such traps. Not all such traps are malicious; some are the inadvertent side-effect of faulty website development.

Politeness: Web servers have both implicit and explicit policies regulating the rate at which a crawler can visit them. These politeness policies must be respected.
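As a sketch of how both mandatory properties might be enforced, the following gatekeeper imposes a per-domain page cap (a crude spider-trap guard) and a per-host minimum delay between requests (one simple politeness policy). The class, the delay of 2 seconds, and the page cap are assumptions for illustration, not prescriptions from the text.

```python
# Illustrative sketch: combine a spider-trap guard (robustness) with a
# per-host request delay (politeness). All constants are assumptions.
import time
from urllib.parse import urlsplit

class PolitenessGate:
    def __init__(self, min_delay=2.0, max_pages_per_host=10_000):
        self.min_delay = min_delay           # seconds between hits to one host
        self.max_pages = max_pages_per_host  # cap to escape infinite page generators
        self.last_hit = {}                   # host -> time of last fetch
        self.count = {}                      # host -> pages fetched so far

    def allow(self, url):
        """Return False if the host looks like a trap; otherwise wait
        out the politeness delay and permit the fetch."""
        host = urlsplit(url).netloc
        if self.count.get(host, 0) >= self.max_pages:
            return False                     # likely a spider trap; give up on host
        wait = self.min_delay - (time.monotonic() - self.last_hit.get(host, 0.0))
        if wait > 0:
            time.sleep(wait)                 # respect the inter-request delay
        self.last_hit[host] = time.monotonic()
        self.count[host] = self.count.get(host, 0) + 1
        return True
```

A real crawler would additionally honor each server's explicit policy, for example by consulting its robots.txt file before fetching.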
20.1.2 Features a crawler should provide

Distributed: The crawler should have the ability to execute in a distributed fashion across multiple machines.

Scalable: The crawler architecture should permit scaling up the crawl rate by adding extra machines and bandwidth.

Performance and efficiency: The crawl system should make efficient use of various system resources including processor, storage and network bandwidth.

Quality: Given that a significant fraction of all web pages are of poor utility for serving user query needs, the crawler should be biased towards fetching “useful” pages first.

Freshness: In many applications, the crawler should operate in continuous mode: it should obtain fresh copies of previously fetched pages. A search engine crawler, for instance, can thus ensure that the search engine’s index contains a fairly current representation of each indexed web page. For such continuous crawling, a crawler should be able to crawl a page with a frequency that approximates the rate of change of that page.

Extensible: Crawlers should be designed to be extensible in many ways – to cope with new data formats, new fetch protocols, and so on. This demands that the crawler architecture be modular.
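The freshness item above calls for recrawling each page at a frequency approximating its rate of change. One simple way to approximate that rate, sketched below under assumptions not in the text, is a multiplicative scheme: hash each fetched copy, double the recrawl interval when the page is unchanged, and halve it when the page has changed. The update factors and interval bounds are illustrative.

```python
# Illustrative sketch of adaptive recrawl scheduling for freshness.
# Intervals back off for static pages and shrink for changing ones.
import hashlib

class RecrawlScheduler:
    def __init__(self, base=3600.0, lo=600.0, hi=30 * 86400.0):
        self.interval = {}   # url -> current recrawl interval, seconds
        self.digest = {}     # url -> hash of content at last fetch
        self.base, self.lo, self.hi = base, lo, hi

    def record_fetch(self, url, content):
        """Update the page's recrawl interval from whether it changed."""
        h = hashlib.sha1(content.encode()).hexdigest()
        interval = self.interval.get(url, self.base)
        if self.digest.get(url) == h:
            interval = min(interval * 2, self.hi)   # unchanged: back off
        else:
            interval = max(interval / 2, self.lo)   # changed: revisit sooner
        self.digest[url] = h
        self.interval[url] = interval
        return interval      # schedule the next fetch this far ahead
```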
20.2 Crawling