Data Mining: Practical Machine Learning Tools and Techniques
Slides for Chapter 5 of Data Mining by I. H. Witten and E. Frank

Credibility: Evaluating what's been learned

  • Issues: training, testing, tuning
  • Predicting performance: confidence limits
  • Holdout, cross-validation, bootstrap
  • Comparing schemes: the t-test
  • Predicting probabilities: loss functions
  • Cost-sensitive measures
  • Evaluating numeric prediction
  • The Minimum Description Length principle


Evaluation: the key to success

  • How predictive is the model we learned?
  • Error on the training data is not a good indicator of performance on future data
    ♦ Otherwise 1-NN would be the optimum classifier!
  • Simple solution that can be used if lots of (labeled) data is available:
    ♦ Split data into training and test set
  • However: (labeled) data is usually limited
    ♦ More sophisticated techniques need to be used

Issues in evaluation

  • Statistical reliability of estimated differences in performance (→ significance tests)
  • Choice of performance measure:
    ♦ Number of correct classifications
    ♦ Accuracy of probability estimates
    ♦ Error in numeric predictions
  • Costs assigned to different types of errors
    ♦ Many practical applications involve costs

Training and testing I

  • Natural performance measure for classification problems: error rate
    ♦ Success: instance's class is predicted correctly
    ♦ Error: instance's class is predicted incorrectly
    ♦ Error rate: proportion of errors made over the whole set of instances
  • Resubstitution error: error rate obtained from training data
  • Resubstitution error is (hopelessly) optimistic!

Training and testing II

  • Test set: independent instances that have played no part in formation of classifier
    ♦ Assumption: both training data and test data are representative samples of the underlying problem
  • Test and training data may differ in nature
    ♦ Example: classifiers built using customer data from two different towns A and B
      – To estimate performance of classifier from town A in a completely new town, test it on data from B
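To make the optimism of the resubstitution error concrete, here is a minimal sketch (my own illustration, not part of the original slides) using a toy 1-NN classifier on synthetic one-dimensional data; the class centres, sample sizes and split are arbitrary choices for the example.

```python
# Sketch: resubstitution error vs. error on an independent test set,
# using a toy 1-NN classifier on synthetic data (all names are illustrative).
import random

def one_nn_predict(train, x):
    # Return the class of the closest training instance (1-nearest neighbour).
    nearest = min(train, key=lambda inst: (inst[0] - x) ** 2)
    return nearest[1]

def error_rate(train, test):
    wrong = sum(1 for x, y in test if one_nn_predict(train, x) != y)
    return wrong / len(test)

random.seed(1)
# Two overlapping classes: class 0 centred at 0.0, class 1 centred at 1.0.
data = [(random.gauss(c, 1.0), c) for c in (0, 1) for _ in range(100)]
random.shuffle(data)
train, test = data[:130], data[130:]

print("Resubstitution error:", error_rate(train, train))  # 0.0 -- hopelessly optimistic
print("Test-set error:      ", error_rate(train, test))   # substantially higher
```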

Note on parameter tuning

  • It is important that the test data is not used in any way to create the classifier
  • Some learning schemes operate in two stages:
    ♦ Stage 1: build the basic structure
    ♦ Stage 2: optimize parameter settings
  • The test data can't be used for parameter tuning!
  • Proper procedure uses three sets: training data, validation data, and test data
    ♦ Validation data is used to optimize parameters

Making the most of the data

  • Once evaluation is complete, all the data can be used to build the final classifier
  • Generally, the larger the training data the better the classifier (but returns diminish)
  • The larger the test data, the more accurate the error estimate
  • Holdout procedure: method of splitting original data into training and test set
    ♦ Dilemma: ideally both training set and test set should be large!
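A minimal sketch of the three-set procedure, assuming a simple k-NN classifier whose parameter k is tuned on the validation set only; the data, set sizes and candidate values of k are made up for illustration.

```python
# Sketch (illustrative only): training / validation / test sets, with the
# parameter optimised on the validation set and the test set used exactly once.
import random
from collections import Counter

def knn_predict(train, x, k):
    neighbours = sorted(train, key=lambda inst: (inst[0] - x) ** 2)[:k]
    return Counter(y for _, y in neighbours).most_common(1)[0][0]

def error_rate(train, subset, k):
    return sum(1 for x, y in subset if knn_predict(train, x, k) != y) / len(subset)

random.seed(1)
data = [(random.gauss(c, 1.0), c) for c in (0, 1) for _ in range(150)]
random.shuffle(data)
train, validation, test = data[:180], data[180:240], data[240:]

# Stage 2: optimise the parameter on the validation set only.
best_k = min((1, 3, 5, 7, 9), key=lambda k: error_rate(train, validation, k))

# Final error estimate on data that played no part in building or tuning the model.
print("chosen k:", best_k, " test error:", error_rate(train, test, best_k))
```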

Predicting performance

  • Assume the estimated error rate is 25%. How close is this to the true error rate?
    ♦ Depends on the amount of test data
  • Prediction is just like tossing a (biased!) coin
    ♦ "Head" is a "success", "tail" is an "error"
  • In statistics, a succession of independent events like this is called a Bernoulli process
    ♦ Statistical theory provides us with confidence intervals for the true underlying proportion

Confidence intervals

  • We can say: p lies within a certain specified interval with a certain specified confidence
  • Example: S = 750 successes in N = 1000 trials
    ♦ Estimated success rate: 75%
    ♦ How close is this to the true success rate p?
    ♦ Answer: with 80% confidence, p ∈ [73.2%, 76.7%]
  • Another example: S = 75 and N = 100
    ♦ Estimated success rate: 75%
    ♦ With 80% confidence, p ∈ [69.1%, 80.1%]

Mean and variance

  • Mean and variance for a Bernoulli trial: p, p(1−p)
  • Expected success rate f = S/N
  • Mean and variance for f: p, p(1−p)/N
  • For large enough N, f follows a normal distribution
  • c% confidence interval [−z ≤ X ≤ z] for a random variable X with 0 mean is given by
      Pr[−z ≤ X ≤ z] = c
  • With a symmetric distribution:
      Pr[−z ≤ X ≤ z] = 1 − 2 · Pr[X ≥ z]

Confidence limits

  • Confidence limits for the normal distribution with 0 mean and a variance of 1:

      Pr[X ≥ z]    z
      0.1%         3.09
      0.5%         2.58
      1%           2.33
      5%           1.65
      10%          1.28
      20%          0.84
      40%          0.25

  • Thus: Pr[−1.65 ≤ X ≤ 1.65] = 90%
  • To use this we have to reduce our random variable f to have 0 mean and unit variance

Transforming f

  • Transformed value for f:
      (f − p) / √( p(1−p)/N )
    (i.e. subtract the mean and divide by the standard deviation)
  • Resulting equation:
      Pr[ −z ≤ (f − p) / √( p(1−p)/N ) ≤ z ] = c
  • Solving for p:
      p = ( f + z²/2N ± z · √( f/N − f²/N + z²/4N² ) ) / ( 1 + z²/N )

Examples

  • f = 75%, N = 1000, c = 80% (so that z = 1.28): p ∈ [0.732, 0.767]
  • f = 75%, N = 100, c = 80% (so that z = 1.28): p ∈ [0.691, 0.801]
  • Note that the normal distribution assumption is only valid for large N (i.e. N > 100)
  • f = 75%, N = 10, c = 80% (so that z = 1.28): p ∈ [0.549, 0.881]
    (should be taken with a grain of salt)
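The formula for p above can be turned into a small helper; the sketch below (not from the slides) reproduces the three examples.

```python
# Sketch: confidence interval for the true success rate p, computed from the
# observed success rate f on N test instances using the formula above.
import math

def success_rate_interval(f, n, z):
    """Interval for the true success rate, given observed rate f on n trials.

    z is the normal-distribution limit for the chosen confidence
    (e.g. 1.28 for 80%, since Pr[X >= 1.28] = 10% and the interval is two-sided).
    """
    centre = f + z * z / (2 * n)
    spread = z * math.sqrt(f / n - f * f / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (centre - spread) / denom, (centre + spread) / denom

for n in (1000, 100, 10):
    lo, hi = success_rate_interval(0.75, n, 1.28)
    print(f"f = 75%, N = {n:4d}: p in [{lo:.3f}, {hi:.3f}]")
```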

Holdout estimation

  • What to do if the amount of data is limited?
  • The holdout method reserves a certain amount for testing and uses the remainder for training
    ♦ Usually: one third for testing, the rest for training
  • Problem: the samples might not be representative
    ♦ Example: class might be missing in the test data
  • Advanced version uses stratification
    ♦ Ensures that each class is represented with approximately equal proportions in both subsets

Repeated holdout method

  • Holdout estimate can be made more reliable by repeating the process with different subsamples
    ♦ In each iteration, a certain proportion is randomly selected for training (possibly with stratification)
    ♦ The error rates on the different iterations are averaged to yield an overall error rate
  • This is called the repeated holdout method
  • Still not optimum: the different test sets overlap
    ♦ Can we prevent overlapping?

Cross-validation

  • Cross-validation avoids overlapping test sets
    ♦ First step: split data into k subsets of equal size
    ♦ Second step: use each subset in turn for testing, the remainder for training
  • Called k-fold cross-validation
  • Often the subsets are stratified before the cross-validation is performed
  • The error estimates are averaged to yield an overall error estimate

More on cross-validation

  • Standard method for evaluation: stratified ten-fold cross-validation
  • Why ten?
    ♦ Extensive experiments have shown that this is the best choice to get an accurate estimate
    ♦ There is also some theoretical evidence for this
  • Stratification reduces the estimate's variance
  • Even better: repeated stratified cross-validation
    ♦ E.g. ten-fold cross-validation is repeated ten times and results are averaged (reduces the variance)
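A sketch of stratified k-fold cross-validation, assuming the learning scheme is supplied as a pair of placeholder functions train_fn and predict_fn; this is an illustration, not WEKA's implementation.

```python
# Sketch: stratified k-fold cross-validation for an arbitrary classifier.
# Instances are (x, y) pairs; train_fn(instances) returns a model and
# predict_fn(model, x) returns a class label (both are assumed placeholders).
import random
from collections import defaultdict

def stratified_folds(instances, k, seed=1):
    """Split instances into k folds with roughly equal class proportions."""
    by_class = defaultdict(list)
    for inst in instances:
        by_class[inst[1]].append(inst)
    rng = random.Random(seed)
    folds = [[] for _ in range(k)]
    for members in by_class.values():
        rng.shuffle(members)
        for i, inst in enumerate(members):
            folds[i % k].append(inst)      # deal class members round-robin over folds
    return folds

def cross_validation_error(instances, k, train_fn, predict_fn):
    folds = stratified_folds(instances, k)
    errors = []
    for i in range(k):
        test = folds[i]
        train = [inst for j, fold in enumerate(folds) if j != i for inst in fold]
        model = train_fn(train)
        wrong = sum(1 for x, y in test if predict_fn(model, x) != y)
        errors.append(wrong / len(test))
    return sum(errors) / k                 # average the k per-fold error rates
```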

Leave-One-Out cross-validation

  • Leave-One-Out: a particular form of cross-validation
    ♦ Set number of folds to number of training instances
    ♦ I.e., for n training instances, build classifier n times
  • Makes best use of the data
  • Involves no random subsampling
  • Very computationally expensive (exception: NN)

Leave-One-Out-CV and stratification

  • Disadvantage of Leave-One-Out-CV: stratification is not possible
    ♦ It guarantees a non-stratified sample because there is only one instance in the test set!
  • Extreme example: random dataset split equally into two classes
    ♦ Best inducer predicts majority class
    ♦ 50% accuracy on fresh data
    ♦ Leave-One-Out-CV estimate is 100% error!

The bootstrap

  • CV uses sampling without replacement
    ♦ The same instance, once selected, cannot be selected again for a particular training/test set
  • The bootstrap uses sampling with replacement to form the training set
    ♦ Sample a dataset of n instances n times with replacement to form a new dataset of n instances
    ♦ Use this data as the training set
    ♦ Use the instances from the original dataset that don't occur in the new training set for testing

The 0.632 bootstrap

  • Also called the 0.632 bootstrap
    ♦ A particular instance has a probability of 1 − 1/n of not being picked for the training set
    ♦ Thus its probability of ending up in the test data is:
        (1 − 1/n)ⁿ ≈ e⁻¹ ≈ 0.368
    ♦ This means the training data will contain approximately 63.2% of the instances

Estimating error with the bootstrap

  • The error estimate on the test data will be very pessimistic
    ♦ Trained on just ~63% of the instances
  • Therefore, combine it with the resubstitution error:
      err = 0.632 × e_test + 0.368 × e_training
  • The resubstitution error gets less weight than the error on the test data
  • Repeat process several times with different replacement samples; average the results

More on the bootstrap

  • Probably the best way of estimating performance for very small datasets
  • However, it has some problems
    ♦ Consider the random dataset from above
    ♦ A perfect memorizer will achieve 0% resubstitution error and ~50% error on test data
    ♦ Bootstrap estimate for this classifier:
        err = 0.632 × 50% + 0.368 × 0% = 31.6%
    ♦ True expected error: 50%
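A sketch of the 0.632 bootstrap estimator along the lines described above; train_fn and predict_fn are again assumed placeholders for the learning scheme, and the number of repetitions is arbitrary.

```python
# Sketch: the 0.632 bootstrap error estimate (illustrative, not WEKA's code).
import random

def bootstrap_632_error(instances, train_fn, predict_fn, repeats=50, seed=1):
    rng = random.Random(seed)
    n = len(instances)
    estimates = []
    for _ in range(repeats):
        train = [instances[rng.randrange(n)] for _ in range(n)]   # sample with replacement
        in_train = {id(inst) for inst in train}
        test = [inst for inst in instances if id(inst) not in in_train]
        if not test:                        # vanishingly rare, but avoid dividing by zero
            continue
        model = train_fn(train)
        def error(subset):
            return sum(1 for x, y in subset if predict_fn(model, x) != y) / len(subset)
        # Weight the pessimistic test-set error against the optimistic resubstitution error.
        estimates.append(0.632 * error(test) + 0.368 * error(train))
    return sum(estimates) / len(estimates)
```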

Comparing data mining schemes

  • Frequent question: which of two learning schemes performs better?
  • Note: this is domain dependent!
  • Obvious way: compare 10-fold CV estimates
  • Generally sufficient in applications (we don't lose anything if the chosen method is not truly better)
  • However, what about machine learning research?
    ♦ Need to show convincingly that a particular method works better

Comparing schemes II

  • Want to show that scheme A is better than scheme B in a particular domain
    ♦ For a given amount of training data
    ♦ On average, across all possible training sets
  • Let's assume we have an infinite amount of data from the domain:
    ♦ Sample infinitely many datasets of specified size
    ♦ Obtain cross-validation estimate on each dataset for each scheme
    ♦ Check if mean accuracy for scheme A is better than mean accuracy for scheme B

Paired t-test

  • In practice we have limited data and a limited number of estimates for computing the mean
  • Student's t-test tells whether the means of two samples are significantly different
  • In our case the samples are cross-validation estimates for different datasets from the domain
  • Use a paired t-test because the individual samples are paired
    ♦ The same CV is applied twice
  • William Gosset (born 1876 in Canterbury, died 1937 in Beaconsfield, England) obtained a post as a chemist in the Guinness brewery in Dublin in 1899, invented the t-test to handle small samples for quality control in brewing, and wrote under the name "Student"

Distribution of the means

  • x_1, x_2, ..., x_k and y_1, y_2, ..., y_k are the 2k samples for the k different datasets
  • m_x and m_y are the means
  • With enough samples, the mean of a set of independent samples is normally distributed
  • Estimated variances of the means are σ_x²/k and σ_y²/k
  • If μ_x and μ_y are the true means, then
      (m_x − μ_x) / √(σ_x²/k)   and   (m_y − μ_y) / √(σ_y²/k)
    are approximately normally distributed with mean 0, variance 1

Student's distribution

  • With small samples (k < 100) the mean follows Student's distribution with k−1 degrees of freedom
  • Confidence limits (assuming we have 10 estimates, i.e. 9 degrees of freedom):

      Pr[X ≥ z]   9 degrees of freedom   normal distribution
      0.1%        4.30                   3.09
      0.5%        3.25                   2.58
      1%          2.82                   2.33
      5%          1.83                   1.65
      10%         1.38                   1.28
      20%         0.88                   0.84

Distribution of the differences

  • Let m_d = m_x − m_y
  • The difference of the means (m_d) also has a Student's distribution with k−1 degrees of freedom
  • Let σ_d² be the variance of the difference
  • The standardized version of m_d is called the t-statistic:
      t = m_d / √(σ_d²/k)
  • We use t to perform the t-test

Performing the test

  • Fix a significance level α
    ♦ If a difference is significant at the α% level, there is a (100−α)% chance that the true means differ
  • Divide the significance level by two because the test is two-tailed
    ♦ I.e. the true difference can be +ve or −ve
  • Look up the value for z that corresponds to α/2
  • If t ≤ −z or t ≥ z then the difference is significant
    ♦ I.e. the null hypothesis (that the difference is zero) can be rejected
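A small sketch of the paired t-test as described above; the ten accuracy estimates are made-up numbers, and the critical value is taken from the 9-degrees-of-freedom column of the table.

```python
# Sketch: paired t-test on k matched accuracy estimates for two schemes.
import math

def paired_t_statistic(xs, ys):
    """t = m_d / sqrt(sigma_d^2 / k) for paired samples xs, ys."""
    k = len(xs)
    d = [x - y for x, y in zip(xs, ys)]
    m_d = sum(d) / k
    var_d = sum((di - m_d) ** 2 for di in d) / (k - 1)   # sample variance of differences
    return m_d / math.sqrt(var_d / k)

# Ten paired cross-validation accuracy estimates for schemes A and B (made-up numbers).
a = [0.82, 0.79, 0.85, 0.80, 0.83, 0.78, 0.81, 0.84, 0.80, 0.82]
b = [0.80, 0.77, 0.84, 0.78, 0.80, 0.77, 0.80, 0.82, 0.79, 0.80]

t = paired_t_statistic(a, b)
z = 2.82   # 9 degrees of freedom, 1% in each tail (from the table above)
print("t =", round(t, 2), "-", "significant" if abs(t) >= z else "not significant")
```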

Unpaired observations

  • If the CV estimates are from different datasets, they are no longer paired
    (or maybe we used k-fold CV for one scheme, and j-fold CV for the other one)
  • Then we have to use an unpaired t-test with min(k, j) − 1 degrees of freedom
  • The t-statistic becomes:
      t = (m_x − m_y) / √( σ_x²/k + σ_y²/j )

Dependent estimates

  • We assumed that we have enough data to create several datasets of the desired size
  • Need to re-use data if that's not the case
    ♦ E.g. running cross-validations with different randomizations on the same data
  • Samples become dependent ⇒ insignificant differences can become significant
  • A heuristic test is the corrected resampled t-test:
    ♦ Assume we use the repeated hold-out method, with n1 instances for training and n2 for testing
    ♦ New test statistic is:
        t = m_d / √( (1/k + n2/n1) · σ_d² )
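The corrected resampled t-test statistic can be computed directly from the per-repetition differences; the helper below is a sketch of the formula above, nothing more.

```python
# Sketch: corrected resampled t-test statistic for repeated hold-out estimates.
# diffs holds the per-repetition accuracy differences between the two schemes;
# n1 and n2 are the training and test set sizes used in each repetition.
import math

def corrected_resampled_t(diffs, n1, n2):
    k = len(diffs)
    m_d = sum(diffs) / k
    var_d = sum((d - m_d) ** 2 for d in diffs) / (k - 1)
    return m_d / math.sqrt((1.0 / k + n2 / n1) * var_d)
```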


Predicting probabilities

  • Performance measure so far: success rate
  • Also called 0-1 loss function:
      Σ over instances of (0 if prediction is correct, 1 if prediction is incorrect)
  • Most classifiers produce class probabilities
  • Depending on the application, we might want to check the accuracy of the probability estimates
  • 0-1 loss is not the right thing to use in those cases

Quadratic loss function

  • p_1 ... p_k are probability estimates for an instance
  • c is the index of the instance's actual class
  • a_1 ... a_k = 0, except for a_c, which is 1
  • Quadratic loss is:
      Σ_j (p_j − a_j)² = Σ_{j≠c} p_j² + (1 − p_c)²
  • Want to minimize Σ_j (p_j − a_j)²
  • Can show that this is minimized when p_j = p_j*, the true probabilities

Informational loss function

  • The informational loss function is −log(p_c), where c is the index of the instance's actual class
  • Number of bits required to communicate the actual class
  • Let p_1* ... p_k* be the true class probabilities
  • Then the expected value for the loss function is:
      −p_1* log₂ p_1 − ... − p_k* log₂ p_k
  • Justification: minimized when p_j = p_j*
  • Difficulty: zero-frequency problem
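Both loss functions are easy to compute for a single instance; the sketch below uses an arbitrary three-class probability vector as an example.

```python
# Sketch: quadratic and informational loss for one instance.
# probs is the classifier's probability vector; actual is the index of the true class.
import math

def quadratic_loss(probs, actual):
    # sum_j (p_j - a_j)^2 where a_j = 1 for the actual class and 0 otherwise
    return sum((p - (1.0 if j == actual else 0.0)) ** 2 for j, p in enumerate(probs))

def informational_loss(probs, actual):
    # -log2(p_c); infinite if the actual class was given probability zero
    return -math.log2(probs[actual]) if probs[actual] > 0 else float("inf")

print(quadratic_loss([0.7, 0.2, 0.1], actual=0))      # 0.09 + 0.04 + 0.01 = 0.14
print(informational_loss([0.7, 0.2, 0.1], actual=0))  # about 0.515 bits
```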


Discussion

  • Which loss function to choose?
    ♦ Both encourage honesty
    ♦ Quadratic loss function takes into account all class probability estimates for an instance
    ♦ Informational loss focuses only on the probability estimate for the actual class
    ♦ Quadratic loss is bounded (it can never exceed 2)
    ♦ Informational loss can be infinite
  • Informational loss is related to the MDL principle [later]

Counting the cost

  • In practice, different types of classification errors often incur different costs
  • Examples:
    ♦ Terrorist profiling ("Not a terrorist" correct 99.99% of the time)
    ♦ Loan decisions
    ♦ Oil-slick detection
    ♦ Fault diagnosis
    ♦ Promotional mailing

Counting the cost

  • The confusion matrix:

                                Predicted class
                                Yes               No
      Actual class   Yes        True positive     False negative
                     No         False positive    True negative

  • There are many other types of cost!
    ♦ E.g.: cost of collecting training data

Aside: the kappa statistic

  • Two confusion matrices for a 3-class problem: actual predictor (left) vs. random predictor (right)
    (the matrices appear as tables on the original slide)
  • Number of successes: sum of entries in diagonal (D)
  • Kappa statistic:
      κ = (D_observed − D_random) / (D_perfect − D_random)
    measures relative improvement over random predictor
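A sketch of the kappa computation from a single confusion matrix, where the "random predictor" baseline is derived from the matrix's own row and column totals; the example counts are made up.

```python
# Sketch: kappa statistic from a confusion matrix (rows = actual, columns = predicted).
def kappa(matrix):
    total = sum(sum(row) for row in matrix)
    observed = sum(matrix[i][i] for i in range(len(matrix)))   # D_observed: diagonal sum
    # Expected successes of a random predictor with the same row and column totals.
    expected = sum(sum(matrix[i]) * sum(row[i] for row in matrix)
                   for i in range(len(matrix))) / total
    return (observed - expected) / (total - expected)          # D_perfect is the total

# Three-class example with made-up counts: 140 correct out of 200 predictions.
m = [[88, 10, 2],
     [14, 40, 6],
     [18, 10, 12]]
print(round(kappa(m), 3))
```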

Classification with costs

  • Two cost matrices (the matrices appear on the original slide)
  • Success rate is replaced by average cost per prediction
    ♦ Cost is given by appropriate entry in the cost matrix

Cost-sensitive classification

  • Can take costs into account when making predictions
    ♦ Basic idea: only predict high-cost class when very confident about prediction
  • Given: predicted class probabilities
    ♦ Normally we just predict the most likely class
    ♦ Here, we should make the prediction that minimizes the expected cost
      – Expected cost: dot product of vector of class probabilities and appropriate column in cost matrix
      – Choose column (class) that minimizes expected cost
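A sketch of minimum-expected-cost prediction; the cost matrix and probability vector are assumed values chosen to show the prediction moving away from the most likely class.

```python
# Sketch: choose the prediction that minimises expected cost, given class
# probabilities and a cost matrix where cost[actual][predicted] is the cost of
# predicting class `predicted` when the true class is `actual`.
def min_expected_cost_class(probs, cost):
    n_classes = len(cost[0])
    expected = [sum(p * cost[actual][predicted] for actual, p in enumerate(probs))
                for predicted in range(n_classes)]     # dot product per cost-matrix column
    return min(range(n_classes), key=lambda j: expected[j])

# Two-class example (assumed numbers): class 0 = "no", class 1 = "yes".
cost = [[0, 1],     # actual "no":  predicting no costs 0, predicting yes costs 1
        [10, 0]]    # actual "yes": predicting no costs 10, predicting yes costs 0
probs = [0.8, 0.2]  # the classifier thinks "no" is most likely
print(min_expected_cost_class(probs, cost))  # 1 -- "yes" is cheaper in expectation (0.8 vs 2.0)
```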

Cost-sensitive learning

  • So far we haven't taken costs into account at training time
  • Most learning schemes do not perform cost-sensitive learning
    ♦ They generate the same classifier no matter what costs are assigned to the different classes
    ♦ Example: standard decision tree learner
  • Simple methods for cost-sensitive learning:
    ♦ Resampling of instances according to costs
    ♦ Weighting of instances according to costs
  • Some schemes can take costs into account by varying a parameter, e.g. naïve Bayes

Lift charts

  • In practice, costs are rarely known
  • Decisions are usually made by comparing possible scenarios
  • Example: promotional mailout to 1,000,000 households
    ♦ Mail to all; 0.1% respond (1000)
    ♦ Data mining tool identifies subset of 100,000 most promising; 0.4% of these respond (400)
      – 40% of responses for 10% of cost may pay off
    ♦ Identify subset of 400,000 most promising; 0.2% respond (800)
  • A lift chart allows a visual comparison

Generating a lift chart

  • Sort instances according to predicted probability of being positive:

      Rank   Predicted probability   Actual class
      1      0.95                    Yes
      2      0.93                    Yes
      3      0.93                    No
      4      0.88                    Yes
      ...    ...                     ...

  • x axis is sample size; y axis is number of true positives

A hypothetical lift chart

  • 40% of responses for 10% of cost
  • 80% of responses for 40% of cost
    (the chart itself appears as a figure on the original slide)
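A sketch of how the points of a lift chart could be computed from ranked predictions; the instance list extends the table above with a couple of made-up rows.

```python
# Sketch: computing lift-chart points from (predicted probability, actual class) pairs.
def lift_chart_points(scored_instances):
    ranked = sorted(scored_instances, key=lambda t: t[0], reverse=True)
    points, true_positives = [(0, 0)], 0
    for i, (_, actual) in enumerate(ranked, start=1):
        true_positives += 1 if actual == "yes" else 0
        points.append((i, true_positives))   # (sample size, number of true positives)
    return points

data = [(0.95, "yes"), (0.93, "yes"), (0.93, "no"),
        (0.88, "yes"), (0.80, "no"), (0.75, "yes")]
for size, tp in lift_chart_points(data):
    print(size, tp)
```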

ROC curves

  • ROC curves are similar to lift charts
    ♦ Stands for "receiver operating characteristic"
    ♦ Used in signal detection to show tradeoff between hit rate and false alarm rate over a noisy channel
  • Differences to lift chart:
    ♦ y axis shows percentage of true positives in sample rather than absolute number
    ♦ x axis shows percentage of false positives in sample rather than sample size

A sample ROC curve

  • Jagged curve: one set of test data
  • Smooth curve: use cross-validation
    (the curve itself appears as a figure on the original slide)

Cross-validation and ROC curves

  • Simple method of getting a ROC curve using cross-validation:
    ♦ Collect probabilities for instances in test folds
    ♦ Sort instances according to probabilities
  • This method is implemented in WEKA
  • However, this is just one possibility
    ♦ Another possibility is to generate an ROC curve for each fold and average them

ROC curves for two schemes

  • For a small, focused sample, use method A
  • For a larger one, use method B
  • In between, choose between A and B with appropriate probabilities

The convex hull

  • Given two learning schemes we can achieve any point on the convex hull!
  • TP and FP rates for scheme 1: t_1 and f_1
  • TP and FP rates for scheme 2: t_2 and f_2
  • If scheme 1 is used to predict 100·q% of the cases and scheme 2 for the rest, then
    ♦ TP rate for combined scheme: q × t_1 + (1−q) × t_2
    ♦ FP rate for combined scheme: q × f_1 + (1−q) × f_2

More measures...

  • Percentage of retrieved documents that are relevant: precision = TP/(TP+FP)
  • Percentage of relevant documents that are returned: recall = TP/(TP+FN)
  • Precision/recall curves have hyperbolic shape
  • Summary measures: average precision at 20%, 50% and 80% recall (three-point average recall)
  • F-measure = (2 × recall × precision) / (recall + precision)
  • sensitivity × specificity = (TP/(TP+FN)) × (TN/(FP+TN))
  • Area under the ROC curve (AUC): probability that a randomly chosen positive instance is ranked above a randomly chosen negative one

Summary of some measures

      Chart                     Domain                  y axis                  x axis
      Lift chart                Marketing               TP                      Subset size = (TP+FP)/(TP+FP+TN+FN)
      ROC curve                 Communications          TP rate = TP/(TP+FN)    FP rate = FP/(FP+TN)
      Recall-precision curve    Information retrieval   Recall = TP/(TP+FN)     Precision = TP/(TP+FP)
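The measures above computed from raw counts, plus AUC via its ranking definition; this is an illustrative sketch with made-up counts and scores.

```python
# Sketch: precision, recall, F-measure from counts, and AUC from ranked scores.
def precision(tp, fp): return tp / (tp + fp)
def recall(tp, fn): return tp / (tp + fn)

def f_measure(tp, fp, fn):
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * r * p / (r + p)

def auc(scored):
    """scored: (score, is_positive) pairs; fraction of positive/negative pairs
    in which the positive instance is ranked above the negative one."""
    positives = [s for s, pos in scored if pos]
    negatives = [s for s, pos in scored if not pos]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in positives for n in negatives)
    return wins / (len(positives) * len(negatives))

print(f_measure(tp=40, fp=10, fn=20))                               # about 0.727
print(auc([(0.9, True), (0.8, False), (0.7, True), (0.3, False)]))  # 0.75
```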

Cost curves

  • Cost curves plot expected costs directly
  • Example for the case with uniform costs (i.e. error):
    (plot shown as a figure on the original slide)

Cost curves: example with costs

  • Probability cost function (where C[−|+] is the cost of predicting − when the true class is +, and C[+|−] the cost of predicting + when the true class is −):
      p_c[+] = p[+]·C[−|+] / ( p[+]·C[−|+] + p[−]·C[+|−] )
  • Normalized expected cost = fn × p_c[+] + fp × (1 − p_c[+])
    (the corresponding cost-curve plot appears as a figure on the original slide)

Evaluating numeric prediction

  • Same strategies: independent test set, cross-validation, significance tests, etc.
  • Difference: error measures
  • Actual target values: a_1, a_2, ..., a_n
  • Predicted target values: p_1, p_2, ..., p_n
  • Most popular measure: mean-squared error
      ( (p_1 − a_1)² + ... + (p_n − a_n)² ) / n
    ♦ Easy to manipulate mathematically

Other measures

  • The root mean-squared error:
      √( ( (p_1 − a_1)² + ... + (p_n − a_n)² ) / n )
  • The mean absolute error is less sensitive to outliers than the mean-squared error:
      ( |p_1 − a_1| + ... + |p_n − a_n| ) / n
  • Sometimes relative error values are more appropriate (e.g. 10% for an error of 50 when predicting 500)

Improvement on the mean

  • How much does the scheme improve on simply predicting the average?
  • The relative squared error is (ā is the mean of the actual values):
      ( (p_1 − a_1)² + ... + (p_n − a_n)² ) / ( (ā − a_1)² + ... + (ā − a_n)² )
  • The relative absolute error is:
      ( |p_1 − a_1| + ... + |p_n − a_n| ) / ( |ā − a_1| + ... + |ā − a_n| )

Correlation coefficient

  • Measures the statistical correlation between the predicted values and the actual values:
      correlation = S_PA / √(S_P · S_A)
    where S_PA = Σ_i (p_i − mean(p))(a_i − mean(a)) / (n−1), S_P = Σ_i (p_i − mean(p))² / (n−1), S_A = Σ_i (a_i − mean(a))² / (n−1)
  • Scale independent, between −1 and +1
  • Good performance leads to large values!

Which measure?

  • Best to look at all of them
  • Often it doesn't matter
  • Example:

                                      A        B        C        D
      Root mean-squared error         67.8     91.7     63.3     57.4
      Mean absolute error             41.3     38.5     33.4     29.2
      Root relative squared error     42.2%    57.2%    39.4%    35.8%
      Relative absolute error         43.1%    40.1%    34.8%    30.4%
      Correlation coefficient         0.88     0.88     0.89     0.91

  • D best
  • C second-best
  • A, B arguable
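A sketch of the numeric-prediction error measures above; the predicted and actual values at the bottom are made-up numbers.

```python
# Sketch: error measures for numeric prediction, for predicted values p and actual values a.
import math

def mse(p, a): return sum((pi - ai) ** 2 for pi, ai in zip(p, a)) / len(a)
def rmse(p, a): return math.sqrt(mse(p, a))
def mae(p, a): return sum(abs(pi - ai) for pi, ai in zip(p, a)) / len(a)

def relative_squared_error(p, a):
    mean_a = sum(a) / len(a)
    return sum((pi - ai) ** 2 for pi, ai in zip(p, a)) / sum((mean_a - ai) ** 2 for ai in a)

def relative_absolute_error(p, a):
    mean_a = sum(a) / len(a)
    return sum(abs(pi - ai) for pi, ai in zip(p, a)) / sum(abs(mean_a - ai) for ai in a)

def correlation(p, a):
    n, mp, ma = len(a), sum(p) / len(p), sum(a) / len(a)
    spa = sum((pi - mp) * (ai - ma) for pi, ai in zip(p, a)) / (n - 1)
    sp = sum((pi - mp) ** 2 for pi in p) / (n - 1)
    sa = sum((ai - ma) ** 2 for ai in a) / (n - 1)
    return spa / math.sqrt(sp * sa)

predicted = [510, 480, 200, 150]
actual    = [500, 470, 210, 160]
print(rmse(predicted, actual), mae(predicted, actual), correlation(predicted, actual))
```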

The MDL principle

  • MDL stands for minimum description length
  • The description length is defined as:
      space required to describe a theory
      + space required to describe the theory's mistakes
  • In our case the theory is the classifier and the mistakes are the errors on the training data
  • Aim: we seek a classifier with minimal DL
  • MDL principle is a model selection criterion

Model selection criteria

  • Model selection criteria attempt to find a good compromise between:
    ♦ The complexity of a model
    ♦ Its prediction accuracy on the training data
  • Reasoning: a good model is a simple model that achieves high accuracy on the given data
  • Also known as Occam's Razor: the best theory is the smallest one that describes all the facts
  • William of Ockham, born in the village of Ockham in Surrey (England) about 1285, was the most influential philosopher of the 14th century and a controversial theologian

Elegance vs. errors

  • Theory 1: very simple, elegant theory that explains the data almost perfectly
  • Theory 2: significantly more complex theory that reproduces the data without mistakes
  • Theory 1 is probably preferable
  • Classical example: Kepler's three laws on planetary motion
    ♦ Less accurate than Copernicus's latest refinement of the Ptolemaic theory of epicycles

MDL and compression

  • MDL principle relates to data compression:
    ♦ The best theory is the one that compresses the data the most
    ♦ I.e. to compress a dataset we generate a model and then store the model and its mistakes
  • We need to compute (a) the size of the model, and (b) the space needed to encode the errors
    ♦ (b) is easy: use the informational loss function
    ♦ (a) needs a method to encode the model

MDL and Bayes's theorem

  • L[T] = "length" of the theory
  • L[E|T] = training set encoded with respect to the theory
  • Description length = L[T] + L[E|T]
  • Bayes's theorem gives the a posteriori probability of a theory given the data:
      Pr[T|E] = Pr[E|T] · Pr[T] / Pr[E]
  • Equivalent to:
      −log Pr[T|E] = −log Pr[E|T] − log Pr[T] + log Pr[E]
    where log Pr[E] is a constant

MDL and MAP

  • MAP stands for maximum a posteriori probability
  • Finding the MAP theory corresponds to finding the MDL theory
  • Difficult bit in applying the MAP principle: determining the prior probability Pr[T] of the theory
  • Corresponds to the difficult part in applying the MDL principle: the coding scheme for the theory
  • I.e. if we know a priori that a particular theory is more likely, we need fewer bits to encode it

Discussion of MDL principle

  • Advantage: makes full use of the training data when selecting a model
  • Disadvantage 1: appropriate coding scheme/prior probabilities for theories are crucial
  • Disadvantage 2: no guarantee that the MDL theory is the one which minimizes the expected error
  • Note: Occam's Razor is an axiom!
  • Epicurus's principle of multiple explanations: keep all theories that are consistent with the data

MDL and clustering

  • Description length of theory: bits needed to encode the clusters
    ♦ e.g. cluster centers
  • Description length of data given theory: encode cluster membership and position relative to cluster
    ♦ e.g. distance to cluster center
  • Works if coding scheme uses less code space for small numbers than for large ones
  • With nominal attributes, must communicate probability distributions for each cluster