
Decision Tree Learning [read Chapter 3]
[recommended exercises 3.1, 3.4]

- Decision tree representation
- ID3 learning algorithm
- Entropy, Information gain
- Overfitting

Decision Tree for PlayTennis

Outlook = Sunny:
|  Humidity = High:    No
|  Humidity = Normal:  Yes
Outlook = Overcast:    Yes
Outlook = Rain:
|  Wind = Strong:  No
|  Wind = Weak:    Yes

A Tree to Predict C-Section Risk

Learned from medical records of 1000 women
Negative examples are C-sections

[833+,167-] .83+ .17-
Fetal_Presentation = 1: [822+,116-] .88+ .12-
| Previous_Csection = 0: [767+,81-] .90+ .10-
| | Primiparous = 0: [399+,13-] .97+ .03-
| | Primiparous = 1: [368+,68-] .84+ .16-
| | | Fetal_Distress = 0: [334+,47-] .88+ .12-
| | | | Birth_Weight < 3349: [201+,10.6-] .95+ .05-
| | | | Birth_Weight >= 3349: [133+,36.4-] .78+ .22-
| | | Fetal_Distress = 1: [34+,21-] .62+ .38-
| Previous_Csection = 1: [55+,35-] .61+ .39-
Fetal_Presentation = 2: [3+,29-] .11+ .89-
Fetal_Presentation = 3: [8+,22-] .27+ .73-

Decision Trees

Decision tree representation:
- Each internal node tests an attribute
- Each leaf node assigns a classification
- Each branch corresponds to an attribute value

How would we represent:
- ∧, ∨, XOR
- (A ∧ B) ∨ (C ∧ ¬D ∧ E)
- M of N

When to Consider Decision Trees

- Instances describable by attribute-value pairs
- Target function is discrete valued
- Disjunctive hypothesis may be required
- Possibly noisy training data

Examples:
- Equipment or medical diagnosis
- Credit risk analysis
- Modeling calendar scheduling preferences

Top-Down Induction of Decision Trees

Main loop:

1. A ← the "best" decision attribute for next node

2. Assign A as decision attribute for node

3. For each value of A, create new descendant of node

4. Sort training examples to leaf nodes

5. If training examples perfectly classified, Then STOP, Else iterate over new leaf nodes
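A minimal sketch of this main loop in Python, assuming training examples are dictionaries mapping attribute names (plus the target concept) to values; the names Node, id3, and choose_attribute are illustrative, and the attribute-selection criterion (information gain, defined below) is passed in rather than fixed:

    from collections import Counter

    class Node:
        """Internal node tests an attribute; a leaf carries a class label."""
        def __init__(self, attribute=None, label=None):
            self.attribute = attribute      # attribute tested at this node
            self.label = label              # class label if this is a leaf
            self.children = {}              # attribute value -> child Node

    def id3(examples, target, attributes, choose_attribute):
        labels = [ex[target] for ex in examples]
        # Stopping case (step 5): examples perfectly classified
        if len(set(labels)) == 1:
            return Node(label=labels[0])
        if not attributes:                  # no attributes left: majority class
            return Node(label=Counter(labels).most_common(1)[0][0])
        # Steps 1-2: pick the "best" attribute and assign it to this node
        a = choose_attribute(examples, attributes, target)
        node = Node(attribute=a)
        # Steps 3-4: one descendant per value of A, examples sorted to it
        for value in set(ex[a] for ex in examples):
            subset = [ex for ex in examples if ex[a] == value]
            remaining = [x for x in attributes if x != a]
            node.children[value] = id3(subset, target, remaining, choose_attribute)
        return node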

Which attribute is best?

A1=?  [29+,35-]   t → [21+,5-]   f → [8+,30-]
A2=?  [29+,35-]   t → [18+,33-]   f → [11+,2-]

Entropy

[Figure: Entropy(S) as a function of the proportion p_+ of positive examples, over the range 0.0 to 1.0]

S is a sample of training examples
p_+ is the proportion of positive examples in S
p_- is the proportion of negative examples in S

Entropy measures the impurity of S:

  Entropy(S) ≡ - p_+ log2 p_+ - p_- log2 p_-

Entropy(S) = expected number of bits needed to encode class (+ or -) of randomly drawn member of S (under the optimal, shortest-length code)

Why? Information theory: optimal length code assigns -log2 p bits to message having probability p.

So, expected number of bits to encode + or - of random member of S:

  p_+ (-log2 p_+) + p_- (-log2 p_-)

  Entropy(S) ≡ - p_+ log2 p_+ - p_- log2 p_-
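As a quick sanity check of the definition, a small Python helper (the function name entropy and the (pos, neg) counting interface are just a convenient choice for illustration):

    from math import log2

    def entropy(pos, neg):
        """Entropy of a sample with pos positive and neg negative examples."""
        total = pos + neg
        e = 0.0
        for count in (pos, neg):
            if count:                       # 0 log 0 is taken to be 0
                p = count / total
                e -= p * log2(p)
        return e

    # e.g. entropy(9, 5) ~= 0.940, entropy(7, 7) == 1.0, entropy(14, 0) == 0.0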

Information Gain

Gain(S, A) = expected reduction in entropy due to sorting on A

  Gain(S, A) ≡ Entropy(S) - Σ_{v ∈ Values(A)} (|S_v| / |S|) Entropy(S_v)

A1=?  [29+,35-]   t → [21+,5-]   f → [8+,30-]
A2=?  [29+,35-]   t → [18+,33-]   f → [11+,2-]
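A corresponding Python sketch of Gain, written against the same dictionary-style examples as the id3 sketch above; entropy_of recomputes entropy from a list of class labels so that the block stands on its own:

    from collections import Counter
    from math import log2

    def entropy_of(labels):
        """Entropy of a list of class labels (any number of classes)."""
        total = len(labels)
        return -sum((n / total) * log2(n / total)
                    for n in Counter(labels).values())

    def gain(examples, attribute, target):
        """Expected reduction in entropy from sorting examples on attribute."""
        base = entropy_of([ex[target] for ex in examples])
        for value in set(ex[attribute] for ex in examples):
            subset = [ex[target] for ex in examples if ex[attribute] == value]
            base -= (len(subset) / len(examples)) * entropy_of(subset)
        return base

For the A1/A2 figure above this comes out at about 0.27 for A1 versus about 0.12 for A2, so A1 would be chosen.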

Training Examples

Day   Outlook    Temperature  Humidity  Wind    PlayTennis
D1    Sunny      Hot          High      Weak    No
D2    Sunny      Hot          High      Strong  No
D3    Overcast   Hot          High      Weak    Yes
D4    Rain       Mild         High      Weak    Yes
D5    Rain       Cool         Normal    Weak    Yes
D6    Rain       Cool         Normal    Strong  No
D7    Overcast   Cool         Normal    Strong  Yes
D8    Sunny      Mild         High      Weak    No
D9    Sunny      Cool         Normal    Weak    Yes
D10   Rain       Mild         Normal    Weak    Yes
D11   Sunny      Mild         Normal    Strong  Yes
D12   Overcast   Mild         High      Strong  Yes
D13   Overcast   Hot          Normal    Weak    Yes
D14   Rain       Mild         High      Strong  No
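For experimenting with the formulas, the table can be written down directly as Python data (the variable name play_tennis is just a convenient choice):

    # Each row of the table as a dictionary; "PlayTennis" is the target.
    play_tennis = [
        dict(Day=d, Outlook=o, Temperature=t, Humidity=h, Wind=w, PlayTennis=p)
        for d, o, t, h, w, p in [
            ("D1",  "Sunny",    "Hot",  "High",   "Weak",   "No"),
            ("D2",  "Sunny",    "Hot",  "High",   "Strong", "No"),
            ("D3",  "Overcast", "Hot",  "High",   "Weak",   "Yes"),
            ("D4",  "Rain",     "Mild", "High",   "Weak",   "Yes"),
            ("D5",  "Rain",     "Cool", "Normal", "Weak",   "Yes"),
            ("D6",  "Rain",     "Cool", "Normal", "Strong", "No"),
            ("D7",  "Overcast", "Cool", "Normal", "Strong", "Yes"),
            ("D8",  "Sunny",    "Mild", "High",   "Weak",   "No"),
            ("D9",  "Sunny",    "Cool", "Normal", "Weak",   "Yes"),
            ("D10", "Rain",     "Mild", "Normal", "Weak",   "Yes"),
            ("D11", "Sunny",    "Mild", "Normal", "Strong", "Yes"),
            ("D12", "Overcast", "Mild", "High",   "Strong", "Yes"),
            ("D13", "Overcast", "Hot",  "Normal", "Weak",   "Yes"),
            ("D14", "Rain",     "Mild", "High",   "Strong", "No"),
        ]
    ]
    # 9 positive and 5 negative examples, i.e. S = [9+,5-].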

Selecting the Next Attribute

Which attribute is the best classifier?

Humidity:  S: [9+,5-],  E = 0.940
  High:    [3+,4-],  E = 0.985
  Normal:  [6+,1-],  E = 0.592
Gain(S, Humidity) = .940 - (7/14).985 - (7/14).592 = .151

Wind:  S: [9+,5-],  E = 0.940
  Weak:    [6+,2-],  E = 0.811
  Strong:  [3+,3-],  E = 1.00
Gain(S, Wind) = .940 - (8/14).811 - (6/14)1.0 = .048

{D1, D2, ..., D14}  [9+,5-]

Outlook
  Sunny:     {D1,D2,D8,D9,D11}   [2+,3-]   ?
  Overcast:  {D3,D7,D12,D13}     [4+,0-]   Yes
  Rain:      {D4,D5,D6,D10,D14}  [3+,2-]   ?

Which attribute should be tested here?

Ssunny = {D1,D2,D8,D9,D11}

Gain(Ssunny, Humidity)    = .970 - (3/5) 0.0 - (2/5) 0.0 = .970
Gain(Ssunny, Temperature) = .970 - (2/5) 0.0 - (2/5) 1.0 - (1/5) 0.0 = .570
Gain(Ssunny, Wind)        = .970 - (2/5) 1.0 - (3/5) .918 = .019
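These numbers can be reproduced with the gain function and play_tennis data from the sketches above; the attribute names follow the table, and the printed values are rounded:

    attributes = ["Outlook", "Temperature", "Humidity", "Wind"]

    # Gains at the root: Outlook ~.246, Humidity ~.151, Wind ~.048, Temperature ~.029
    for a in attributes:
        print(a, round(gain(play_tennis, a, "PlayTennis"), 3))

    # Gains below Outlook = Sunny: Humidity ~.970, Temperature ~.570, Wind ~.019,
    # so Humidity is tested next, as in the calculation above.
    s_sunny = [ex for ex in play_tennis if ex["Outlook"] == "Sunny"]
    for a in ["Humidity", "Temperature", "Wind"]:
        print(a, round(gain(s_sunny, a, "PlayTennis"), 3))

    # Tying the pieces together: grow the full tree with information gain as
    # the selection criterion for the id3 sketch given earlier.
    tree = id3(play_tennis, "PlayTennis", attributes,
               lambda exs, attrs, t: max(attrs, key=lambda a: gain(exs, a, t)))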

Hypothesis Space Search by ID3

[Figure: ID3's simple-to-complex search through the space of decision trees, from the empty tree to single-attribute trees such as A1, to progressively larger trees adding A2, A3, A4, ...]

Hypothesis Space Search by ID3

- Hypothesis space is complete!
  - Target function surely in there...
- Outputs a single hypothesis (which one?)
  - Can't play 20 questions...
- No back tracking
  - Local minima...
- Statistically-based search choices
  - Robust to noisy data...
- Inductive bias: approx "prefer shortest tree"

Inductive Bias in ID3

Note H is the power set of instances X
→ Unbiased?

Not really...

- Preference for short trees, and for those with high information gain attributes near the root
- Bias is a preference for some hypotheses, rather than a restriction of hypothesis space H
- Occam's razor: prefer the shortest hypothesis that fits the data

Occam's Razor

Why prefer short hypotheses?

Argument in favor:
- Fewer short hyps. than long hyps.
  → a short hyp that fits data unlikely to be coincidence
  → a long hyp that fits data might be coincidence

Argument opposed:
- There are many ways to define small sets of hyps
  e.g., all trees with a prime number of nodes that use attributes beginning with "Z"
- What's so special about small sets based on size of hypothesis??

Overfitting in Decision Trees

Consider adding noisy training example #15:

  ⟨Sunny, Hot, Normal, Strong, PlayTennis = No⟩

What effect on earlier tree?

[Figure: the PlayTennis tree from earlier: Outlook at the root; Sunny → Humidity (High: No, Normal: Yes); Overcast → Yes; Rain → Wind (Strong: No, Weak: Yes)]

Overfitting

Consider error of hypothesis h over
- training data: error_train(h)
- entire distribution D of data: error_D(h)

Hypothesis h ∈ H overfits training data if there is an alternative hypothesis h' ∈ H such that

  error_train(h) < error_train(h')

and

  error_D(h) > error_D(h')

Overfitting in Decision Tree Learning

[Figure: accuracy (0.5 to 0.9) versus size of tree (number of nodes, 10 to 100), plotted on training data and on test data]

Avoiding Overfitting

How can we avoid overfitting?
- stop growing when data split not statistically significant
- grow full tree, then post-prune

How to select "best" tree:
- Measure performance over training data
- Measure performance over separate validation data set
- MDL: minimize size(tree) + size(misclassifications(tree))

Reduced-Error Pruning

Split data into training and validation set

Do until further pruning is harmful:

1. Evaluate impact on validation set of pruning each possible node (plus those below it)

2. Greedily remove the one that most improves validation set accuracy

- produces smallest version of most accurate subtree
- What if data is limited?
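A sketch of this procedure in Python, assuming the Node representation from the id3 sketch earlier and dictionary-style examples; classify, accuracy, and the other helper names are illustrative, and this version stops as soon as no single prune strictly improves validation accuracy:

    from collections import Counter

    def classify(node, example):
        """Follow attribute tests until a leaf; fall back to an arbitrary
        child if the example carries a value the tree has not seen."""
        while node.label is None:
            node = node.children.get(example[node.attribute],
                                     next(iter(node.children.values())))
        return node.label

    def accuracy(tree, examples, target):
        return sum(classify(tree, ex) == ex[target] for ex in examples) / len(examples)

    def internal_nodes(node, path=()):
        """Yield (node, path) for each internal node; path lists the
        (attribute, value) tests leading from the root to the node."""
        if node.label is None:
            yield node, path
            for value, child in node.children.items():
                yield from internal_nodes(child, path + ((node.attribute, value),))

    def majority_at(examples, path, target):
        """Majority class of the training examples sorted to this node."""
        reached = [ex for ex in examples if all(ex[a] == v for a, v in path)]
        labels = [ex[target] for ex in (reached or examples)]
        return Counter(labels).most_common(1)[0][0]

    def reduced_error_prune(tree, train, validation, target):
        while True:
            base = accuracy(tree, validation, target)
            best = None                                  # (gain, node, leaf label)
            for node, path in list(internal_nodes(tree)):
                label = majority_at(train, path, target)
                saved = (node.attribute, node.label, node.children)
                node.attribute, node.label, node.children = None, label, {}  # trial prune
                delta = accuracy(tree, validation, target) - base
                node.attribute, node.label, node.children = saved            # undo trial
                if best is None or delta > best[0]:
                    best = (delta, node, label)
            if best is None or best[0] <= 0:             # no prune strictly helps: stop
                return tree
            _, node, label = best
            node.attribute, node.label, node.children = None, label, {}      # keep best prune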

Effect of Reduced-Error Pruning

[Figure: accuracy (0.5 to 0.9) versus size of tree (number of nodes, 10 to 100), plotted on training data, on test data, and on test data during pruning]

Rule Post-Pruning

  1. Convert tree to equivalent set of rules

  2. Prune each rule independently of others

3. Sort final rules into desired sequence for use

Perhaps most frequently used method (e.g., C4.5)

Converting A Tree to Rules

[Figure: the PlayTennis tree: Outlook at the root; Sunny → Humidity (High: No, Normal: Yes); Overcast → Yes; Rain → Wind (Strong: No, Weak: Yes)]

IF    (Outlook = Sunny) ∧ (Humidity = High)
THEN  PlayTennis = No

IF    (Outlook = Sunny) ∧ (Humidity = Normal)
THEN  PlayTennis = Yes

...
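Step 1 of rule post-pruning can be sketched directly against the Node representation used earlier (rule pruning and sorting are not shown; the function name tree_to_rules is illustrative):

    def tree_to_rules(node, conditions=()):
        """Return one (conditions, class) rule per root-to-leaf path.
        Each condition is an (attribute, value) pair, read as attribute = value."""
        if node.label is not None:
            return [(conditions, node.label)]
        rules = []
        for value, child in node.children.items():
            rules += tree_to_rules(child, conditions + ((node.attribute, value),))
        return rules

    # e.g. ((("Outlook", "Sunny"), ("Humidity", "High")), "No") corresponds to
    # IF (Outlook = Sunny) AND (Humidity = High) THEN PlayTennis = No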

Continuous Valued Attributes

Create a discrete attribute to test a continuous one:
- Temperature = 82.5
- (Temperature > 72.3) = t, f

Temperature:  40   48   60   72   80   90
PlayTennis:   No   No   Yes  Yes  Yes  No
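One standard way to choose the threshold is to sort the examples by the continuous attribute, consider candidate thresholds midway between adjacent values whose classifications differ, and keep the candidate with the highest information gain. A sketch, reusing entropy_of from the gain sketch above (the function name best_threshold is illustrative):

    def best_threshold(examples, attribute, target):
        """Pick the candidate threshold c maximizing Gain(S, attribute > c).
        Candidates lie midway between adjacent sorted values whose class differs."""
        rows = sorted((ex[attribute], ex[target]) for ex in examples)
        base = entropy_of([label for _, label in rows])
        best_gain, best_c = 0.0, None
        for (v1, c1), (v2, c2) in zip(rows, rows[1:]):
            if c1 != c2 and v1 != v2:
                c = (v1 + v2) / 2
                below = [label for value, label in rows if value <= c]
                above = [label for value, label in rows if value > c]
                g = (base
                     - (len(below) / len(rows)) * entropy_of(below)
                     - (len(above) / len(rows)) * entropy_of(above))
                if g > best_gain:
                    best_gain, best_c = g, c
        return best_c

For the Temperature/PlayTennis row above, the candidates come out as (48+60)/2 = 54 and (80+90)/2 = 85, and Temperature > 54 gives the larger gain.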

Attributes with Many Values

Problem:
- If attribute has many values, Gain will select it
- Imagine using Date = Jun_3_1996 as attribute

One approach: use GainRatio instead

  GainRatio(S, A) ≡ Gain(S, A) / SplitInformation(S, A)

  SplitInformation(S, A) ≡ - Σ_{i=1..c} (|S_i| / |S|) log2 (|S_i| / |S|)

where S_i is subset of S for which A has value v_i
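In code, SplitInformation is just the entropy of S with respect to the values of A rather than the target, so both quantities fall out of the earlier sketches (gain and entropy_of are reused from above):

    def split_information(examples, attribute):
        """Entropy of S with respect to the values of attribute A itself."""
        return entropy_of([ex[attribute] for ex in examples])

    def gain_ratio(examples, attribute, target):
        si = split_information(examples, attribute)
        # Guard the degenerate case where A takes a single value (SplitInformation = 0).
        return gain(examples, attribute, target) / si if si else 0.0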

Attributes with Costs

Consider
- medical diagnosis, BloodTest has cost $150
- robotics, Width_from_1ft has cost 23 sec.

How to learn a consistent tree with low expected cost?

One approach: replace gain by

Tan and Schlimmer (1990):

  Gain^2(S, A) / Cost(A)

Nunez (1988):

  (2^Gain(S, A) - 1) / (Cost(A) + 1)^w

where w ∈ [0, 1] determines importance of cost
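Both proposals are simple rescalings of the gain and can be sketched directly; gain is reused from above, cost is assumed to be a dictionary mapping attribute names to measurement costs, and the function names are illustrative:

    def tan_schlimmer(examples, attribute, target, cost):
        # Gain^2(S, A) / Cost(A)
        return gain(examples, attribute, target) ** 2 / cost[attribute]

    def nunez(examples, attribute, target, cost, w=0.5):
        # (2^Gain(S, A) - 1) / (Cost(A) + 1)^w, with w in [0, 1] weighting the cost
        return (2 ** gain(examples, attribute, target) - 1) / (cost[attribute] + 1) ** w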

Unknown Attribute Values

What if some examples missing values of A?

Use training example anyway, sort through tree:
- If node n tests A, assign most common value of A among other examples sorted to node n
- assign most common value of A among other examples with same target value
- assign probability p_i to each possible value v_i of A
  - assign fraction p_i of example to each descendant in tree

  Classify new examples in same fashion
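As an illustration of the fractional-example option, a small sketch that splits one example with a missing value of A into weighted fragments according to the value frequencies p_i among the examples at node n; folding the weights into the gain computation and into classification is not shown, and the function name is illustrative:

    from collections import Counter

    def split_fractionally(example, attribute, examples_at_node):
        """Return (value, weight, example-with-value-filled-in) triples, one per
        observed value of the attribute, weighted by its frequency p_i at this node."""
        counts = Counter(ex[attribute] for ex in examples_at_node
                         if ex.get(attribute) is not None)
        total = sum(counts.values())
        fragments = []
        for value, count in counts.items():
            filled = dict(example, **{attribute: value})
            fragments.append((value, count / total, filled))
        return fragments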