Chapter 14 - Introduction to Multiple Regression

  Statistics for Managers Using Microsoft® Excel, 5th Edition

  Learning Objectives

  In this chapter, you learn:

   How to develop a multiple regression model
   How to interpret the regression coefficients
   How to determine which independent variables to include in the regression model
   How to determine which independent variables are most important in predicting a dependent variable
   How to use categorical variables in a regression model

  The Multiple Regression Model

  Idea: Examine the linear relationship between one dependent variable (Y) and two or more independent variables (X_i).

  Multiple Regression Model with k Independent Variables:

  Y_i = β_0 + β_1 X_1i + β_2 X_2i + … + β_k X_ki + ε_i

  where β_0 is the Y-intercept, β_1, …, β_k are the population slopes, and ε_i is the random error.

  Multiple Regression Equation

  The coefficients of the multiple regression model are estimated using sample data.

  Multiple regression equation with k independent variables:

  Ŷ_i = b_0 + b_1 X_1i + b_2 X_2i + … + b_k X_ki

  where b_0 is the estimated intercept, b_1, …, b_k are the estimated slope coefficients, and Ŷ_i is the estimated (or predicted) value of Y.

  In this chapter we will always use Excel to obtain the regression slope coefficients and other regression summary measures.

  Multiple Regression Equation

  Example with two independent variables:

  Ŷ = b_0 + b_1 X_1 + b_2 X_2

  [Figure: the fitted regression plane in (X_1, X_2, Y) space; b_1 is the slope for variable X_1 and b_2 is the slope for variable X_2.]

  Multiple Regression Equation

2 Variable Example

  A distributor of frozen dessert pies wants to evaluate factors thought to influence demand.

   Dependent variable: Pie sales (units per week)
   Independent variables: Price (in $) and Advertising (in $100s)

  Data are collected for 15 weeks.

  Multiple Regression Equation

2 Variable Example

  Sales = b_0 + b_1 (Price) + b_2 (Advertising)

  That is, Ŷ = b_0 + b_1 X_1 + b_2 X_2, where X_1 = Price and X_2 = Advertising.

  Week | Pie Sales | Price ($) | Advertising ($100s)
     1 |       350 |      5.50 |                 3.3
     2 |       460 |      7.50 |                 3.3
     3 |       350 |      8.00 |                 3.0
     4 |       430 |      8.00 |                 4.5
     5 |       350 |      6.80 |                 3.0
     6 |       380 |      7.50 |                 4.0
     7 |       430 |      4.50 |                 3.0
     8 |       470 |      6.40 |                 3.7
     9 |       450 |      7.00 |                 3.5
    10 |       490 |      5.00 |                 4.0
    11 |       340 |      7.20 |                 3.5
    12 |       300 |      7.90 |                 3.2
    13 |       440 |      5.90 |                 4.0
    14 |       450 |      5.00 |                 3.5
    15 |       300 |      7.00 |                 2.7

  Multiple Regression Equation

2 Variable Example, Excel

  Regression Statistics
  Multiple R           0.72213
  R Square             0.52148
  Adjusted R Square    0.44172
  Standard Error      47.46341
  Observations        15

  ANOVA        df   SS          MS          F        Significance F
  Regression    2   29460.027   14730.013   6.53861  0.01201
  Residual     12   27033.306    2252.776
  Total        14   56493.333

               Coefficients  Standard Error  t Stat    P-value  Lower 95%  Upper 95%
  Intercept     306.52619    114.25389        2.68285  0.01993   57.58835  555.46404
  Price         -24.97509     10.83213       -2.30565  0.03979  -48.57626   -1.37392
  Advertising    74.13096     25.96732        2.85478  0.01449   17.55303  130.70888

  Multiple regression equation: Sales = 306.526 - 24.975(X_1) + 74.131(X_2)
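  The slides use Excel's regression tool for this output. As a cross-check, here is a minimal sketch (assuming Python with NumPy is available) that reproduces the three coefficients by ordinary least squares:

```python
import numpy as np

# Pie sales data for the 15 weeks shown above
sales = np.array([350, 460, 350, 430, 350, 380, 430, 470, 450, 490,
                  340, 300, 440, 450, 300], dtype=float)
price = np.array([5.50, 7.50, 8.00, 8.00, 6.80, 7.50, 4.50, 6.40,
                  7.00, 5.00, 7.20, 7.90, 5.90, 5.00, 7.00])
adv = np.array([3.3, 3.3, 3.0, 4.5, 3.0, 4.0, 3.0, 3.7, 3.5, 4.0,
                3.5, 3.2, 4.0, 3.5, 2.7])

# Design matrix: a column of 1s (intercept), then the two predictors
X = np.column_stack([np.ones_like(sales), price, adv])

# Ordinary least squares: minimizes the sum of squared errors
b, _, _, _ = np.linalg.lstsq(X, sales, rcond=None)
print(b)  # approximately [306.526, -24.975, 74.131]
```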

  Multiple Regression Equation

2 Variable Example

  Sales = 306.526 - 24.975(X_1) + 74.131(X_2)

  b_1 = -24.975: sales will decrease, on average, by 24.975 pies per week for each $1 increase in selling price, net of the effects of changes due to advertising.

  b_2 = 74.131: sales will increase, on average, by 74.131 pies per week for each $100 increase in advertising, net of the effects of changes due to price.

  where Sales is in number of pies per week, Price is in $, and Advertising is in $100s.

  Multiple Regression Equation

2 Variable Example

  Predict sales for a week in which the selling price is $5.50 and advertising is $350:

  Sales = 306.526 - 24.975(X_1) + 74.131(X_2)
        = 306.526 - 24.975(5.50) + 74.131(3.5)
        = 428.62

  Predicted sales is 428.62 pies.

  Note that Advertising is in $100s, so $350 means that X_2 = 3.5.
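  As a quick arithmetic check, the prediction is a single weighted sum; a sketch using the rounded coefficients from the Excel output:

```python
# Predicted pie sales at price = $5.50 and advertising = $350
b0, b1, b2 = 306.526, -24.975, 74.131
price, adv = 5.50, 3.5  # advertising is measured in $100s
sales_hat = b0 + b1 * price + b2 * adv
print(round(sales_hat, 2))  # 428.62
```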

  Coefficient of Multiple Determination

   Reports the proportion of total variation in Y explained by all X variables taken together:

  r² = SSR / SST = (regression sum of squares) / (total sum of squares)

  Coefficient of Multiple Determination (Excel)

  r² = SSR / SST = 29460.0 / 56493.3 = 0.52148

  52.1% of the variation in pie sales is explained by the variation in price and advertising.


Adjusted r²

   r² never decreases when a new X variable is added to the model
   This can be a disadvantage when comparing models

  What is the net effect of adding a new variable?

   We lose a degree of freedom when a new X variable is added
   Did the new X variable add enough independent power to offset the loss of one degree of freedom?

Adjusted r²

   Shows the proportion of variation in Y explained by all X variables, adjusted for the number of X variables used:

  r²_adj = 1 - [ (1 - r²) (n - 1) / (n - k - 1) ]

  (where n = sample size, k = number of independent variables)

   Smaller than r²
   Penalizes excessive use of unimportant independent variables
   Useful in comparing models
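  A minimal sketch of this computation for the pie sales model, using SSR and SST from the Excel ANOVA table:

```python
# Adjusted r² for the pie sales model: n = 15 weeks, k = 2 predictors
ssr, sst = 29460.027, 56493.333
n, k = 15, 2
r2 = ssr / sst                                 # 0.52148
r2_adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)  # 0.44172
print(round(r2, 5), round(r2_adj, 5))
```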

Adjusted r²

  r²_adj = 0.44172 (the "Adjusted R Square" entry in the Excel regression statistics)

  44.2% of the variation in pie sales is explained by the variation in price and advertising, taking into account the sample size and number of independent variables.

F-Test for Overall Significance

   Shows if there is a linear relationship between all of the X variables considered together and Y
   Use the F test statistic
   Hypotheses:

    H_0: β_1 = β_2 = … = β_k = 0 (no linear relationship)
    H_1: at least one β_i ≠ 0 (at least one independent variable affects Y)

  F-Test for Overall Significance

  Test statistic:

  F = MSR / MSE = (SSR / k) / (SSE / (n - k - 1))

  where F has k (numerator) and (n - k - 1) (denominator) degrees of freedom.

  F-Test for Overall Significance (Excel)

  F = MSR / MSE = 14730.0 / 2252.8 = 6.5386

  ANOVA        df   SS          MS          F        Significance F
  Regression    2   29460.027   14730.013   6.53861  0.01201
  Residual     12   27033.306    2252.776
  Total        14   56493.333

  The p-value for the F-test is the "Significance F" value in the ANOVA table: 0.01201.

  F-Test for Overall Significance

   H_0: β_1 = β_2 = 0
   H_1: β_1 and β_2 not both zero
   α = .05, df_1 = 2, df_2 = 12

  Critical value: F_.05 = 3.885

  Test statistic: F = MSR / MSE = 6.5386

  Decision: Since the F test statistic falls in the rejection region (6.5386 > 3.885; p-value = .01201 < .05), reject H_0.

  Conclusion: There is evidence that at least one independent variable affects Y.
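  For readers working outside Excel, a sketch (assuming SciPy is installed) that reproduces the F statistic, its p-value, and the critical value:

```python
from scipy import stats

# Overall F-test for the pie sales model
msr, mse = 14730.013, 2252.776
k, n = 2, 15
F = msr / mse                           # 6.5386
p_value = stats.f.sf(F, k, n - k - 1)   # upper-tail area with df = (2, 12)
crit = stats.f.ppf(0.95, k, n - k - 1)  # about 3.885
print(round(F, 4), round(p_value, 5), round(crit, 3))
```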

  Residuals in Multiple Regression

  Two variable model: Ŷ = b_0 + b_1 X_1 + b_2 X_2

  [Figure: a sample observation Y_i plotted above the fitted plane at the point (x_1i, x_2i); the residual is the vertical distance e_i = (Y_i - Ŷ_i).]

  The best fitting linear regression equation, Ŷ, is found by minimizing the sum of squared errors, Σe².

  Multiple Regression Assumptions

  Errors (residuals) from the regression model: e_i = (Y_i - Ŷ_i)

  Assumptions:

   The errors are independent
   The errors are normally distributed
   Errors have an equal variance

  Multiple Regression Assumptions

   These residual plots are used in multiple regression:

    Residuals vs. Ŷ_i
    Residuals vs. X_1i
    Residuals vs. X_2i
    Residuals vs. time (if time-series data)

  Use the residual plots to check for violations of regression assumptions.
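  The slides generate these plots with Excel; a minimal sketch of a residuals-vs-predicted plot (assuming matplotlib, with made-up values standing in for an actual fit):

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up predicted values and residuals for illustration only;
# in practice use y_hat = X @ b and resid = y - y_hat from your fit
rng = np.random.default_rng(0)
y_hat = rng.uniform(300, 500, 15)  # predicted pie sales (illustrative)
resid = rng.normal(0, 45, 15)      # residuals (illustrative)

plt.scatter(y_hat, resid)
plt.axhline(0, linestyle="--")
plt.xlabel("Predicted value")
plt.ylabel("Residual")
plt.show()  # a patternless horizontal band is consistent with the assumptions
```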

  Individual Variables: Tests of Hypothesis

   Use t tests of individual variable slopes
   Shows if there is a linear relationship between the variable X_j and Y
   Hypotheses:

    H_0: β_j = 0 (no linear relationship)
    H_1: β_j ≠ 0 (linear relationship does exist between X_j and Y)

  Individual Variables: Tests of Hypothesis

  H_0: β_j = 0 (no linear relationship)
  H_1: β_j ≠ 0 (linear relationship does exist between X_j and Y)

  Test statistic (df = n - k - 1):

  t = b_j / S_bj

  From the Excel output:
   t-value for Price: t = -2.306, with p-value .0398
   t-value for Advertising: t = 2.855, with p-value .0145

  Individual Variables: Tests of Hypothesis

  H_0: β_j = 0
  H_1: β_j ≠ 0

               Coefficients  Standard Error  t Stat    P-value
  Price        -24.97509     10.83213        -2.30565  0.03979
  Advertising   74.13096     25.96732         2.85478  0.01449

  d.f. = 15 - 2 - 1 = 12, α = .05, so the two-tail critical values are t_α/2 = ±2.1788.

  Decision: The test statistic for each variable falls in the rejection region (both p-values < .05), so reject H_0 for each variable.

  Conclusion: There is evidence that both Price and Advertising affect pie sales at α = .05.
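  A sketch (again assuming SciPy) of the same two t tests, computed from the coefficients and standard errors in the Excel output:

```python
from scipy import stats

# t tests for the individual slopes; df = n - k - 1 = 12
df = 12
for name, b, se in [("Price", -24.97509, 10.83213),
                    ("Advertising", 74.13096, 25.96732)]:
    t = b / se
    p = 2 * stats.t.sf(abs(t), df)  # two-tailed p-value
    print(name, round(t, 4), round(p, 4))
```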

  Confidence Interval Estimate for the Slope

  Confidence interval for the population slope β_j:

  b_j ± t_(n-k-1) · S_bj

  where t has (n - k - 1) d.f. Here, t has (15 - 2 - 1) = 12 d.f.

  Example: Form a 95% confidence interval for the effect of changes in price (X_1) on pie sales, holding constant the effects of advertising:

               Coefficients  Standard Error
  Intercept     306.52619    114.25389
  Price         -24.97509     10.83213
  Advertising    74.13096     25.96732

  -24.975 ± (2.1788)(10.832), so the interval is (-48.576, -1.374).
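  The same interval computed programmatically (assuming SciPy for the t critical value):

```python
from scipy import stats

# 95% confidence interval for the Price slope
b, se, df = -24.97509, 10.83213, 12
t_crit = stats.t.ppf(0.975, df)    # 2.1788
lo, hi = b - t_crit * se, b + t_crit * se
print(round(lo, 5), round(hi, 5))  # about -48.57626 and -1.37392
```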

  Confidence Interval Estimate for the Slope

  Excel output also reports these interval endpoints:

               Coefficients  Standard Error  …  Lower 95%  Upper 95%
  Intercept     306.52619    114.25389       …   57.58835  555.46404
  Price         -24.97509     10.83213       …  -48.57626   -1.37392
  Advertising    74.13096     25.96732       …   17.55303  130.70888

  Weekly sales are estimated to be reduced by between 1.37 and 48.58 pies for each $1 increase in the selling price, holding constant the effects of advertising.

  Testing Portions of the Multiple Regression Model

  Contribution of a single independent variable X_j:

  SSR(X_j | all variables except X_j)
    = SSR(all variables) - SSR(all variables except X_j)

  Measures the contribution of X_j in explaining the total variation in Y (SST).

  Testing Portions of the Multiple Regression Model

  Contribution of a single independent variable X_j, assuming all other variables are already included (consider here a 3-variable model):

  SSR(X_1 | X_2 and X_3) = SSR(all variables) - SSR(X_2 and X_3)

  SSR(all variables) comes from the ANOVA section of the regression for Ŷ = b_0 + b_1 X_1 + b_2 X_2 + b_3 X_3, and SSR(X_2 and X_3) comes from the ANOVA section of the regression for Ŷ = b_0 + b_2 X_2 + b_3 X_3.

  This measures the contribution of X_1 in explaining SST.

  The Partial F-Test Statistic

  Consider the hypothesis test:

   H_0: variable X_j does not significantly improve the model after all other variables are included
   H_1: variable X_j significantly improves the model after all other variables are included

  Test using the F-test statistic (with 1 and n - k - 1 d.f.):

  F = SSR(X_j | all variables except X_j) / MSE

  Testing Portions of Model: Example

  Example: Frozen dessert pies. Test at the α = .05 level to determine whether the price variable significantly improves the model, given that advertising is included.

  Testing Portions of Model: Example

   H_0: X_1 (price) does not improve the model with X_2 (advertising) included
   H_1: X_1 does improve the model

  α = .05, df = 1 and 12; F critical value = 4.75

  (For X_1 and X_2)
  ANOVA        df   SS           MS
  Regression    2   29460.02687  14730.01343
  Residual     12   27033.30647   2252.775539
  Total        14   56493.33333

  (For X_2 only)
  ANOVA        df   SS
  Regression    1   17484.22249
  Residual     13   39009.11085
  Total        14   56493.33333

  Testing Portions of Model: Example

  F = SSR(X_1 | X_2) / MSE(all) = (29460.03 - 17484.22) / 2252.78 = 5.316

  Conclusion: Since 5.316 > 4.75, reject H_0; adding X_1 does improve the model.
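  A sketch of the partial F computation (assuming SciPy for the critical value):

```python
from scipy import stats

# Partial F-test: does Price (X1) improve the model once Advertising (X2) is in?
ssr_all = 29460.02687  # SSR for the model with X1 and X2
ssr_x2 = 17484.22249   # SSR for the model with X2 only
mse_all = 2252.775539  # MSE for the full model
F = (ssr_all - ssr_x2) / mse_all              # 5.316, with 1 and 12 d.f.
crit = stats.f.ppf(0.95, 1, 12)               # about 4.747
print(round(F, 3), round(crit, 3), F > crit)  # True: reject H0
```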

  Relationship Between Test Statistics

   The partial F test statistic developed in this section and the t test statistic are both used to determine the contribution of an independent variable to a multiple regression model.
   The hypothesis tests associated with these two statistics always result in the same decision (that is, the p-values are identical):

  t²_a = F_1,a

  where a = degrees of freedom. (In the pie sales example, t² = (-2.30565)² = 5.316, the partial F value.)

  Coefficient of Partial Determination for k Variable Model

  r²_Yj.(all variables except j)
    = SSR(X_j | all variables except j) / [SST - SSR(all variables) + SSR(X_j | all variables except j)]

  Measures the proportion of variation in the dependent variable that is explained by X_j while controlling for (holding constant) the other independent variables.
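  For the pie sales example, a minimal sketch computing this quantity for Price from the two ANOVA tables above:

```python
# Coefficient of partial determination for Price (X1), controlling for X2
ssr_all, ssr_x2, sst = 29460.02687, 17484.22249, 56493.33333
ssr_x1_given_x2 = ssr_all - ssr_x2
r2_partial = ssr_x1_given_x2 / (sst - ssr_all + ssr_x1_given_x2)
print(round(r2_partial, 4))  # about 0.307
```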

  Using Dummy Variables

   A dummy variable is a categorical independent variable with two levels:
    yes or no, on or off, male or female
    coded as 0 or 1
   Assumes equal slopes for the other variables
   If there are more than two levels, the number of dummy variables needed is (number of levels - 1)
Dummy Variable Example

  Ŷ = b_0 + b_1 X_1 + b_2 X_2

  Let: Y = pie sales
       X_1 = price
       X_2 = holiday (X_2 = 1 if a holiday occurred during the week; X_2 = 0 if there was no holiday that week)

Dummy Variable Example

  Holiday:    Ŷ = b_0 + b_1 X_1 + b_2 (1) = (b_0 + b_2) + b_1 X_1
  No holiday: Ŷ = b_0 + b_1 X_1 + b_2 (0) = b_0 + b_1 X_1

  Different intercepts, same slope.

  [Figure: Y (sales) against X_1 (price), showing two parallel lines: Holiday (X_2 = 1) with intercept b_0 + b_2, and No Holiday (X_2 = 0) with intercept b_0, both with slope b_1.]

  If H_0: β_2 = 0 is rejected, then "Holiday" has a significant effect on pie sales.

Dummy Variable Example

  Sales = 300 - 30(Price) + 15(Holiday)

  where Sales = number of pies sold per week
        Price = pie price in $
        Holiday = 1 if a holiday occurred during the week, 0 if no holiday occurred

  b_2 = 15: on average, sales were 15 pies greater in weeks with a holiday than in weeks without a holiday, given the same price.
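  The slides do not show the data behind this fitted equation, so here is a hypothetical sketch (made-up numbers, NumPy only) of how a 0/1 holiday dummy enters the design matrix:

```python
import numpy as np

# Made-up data for illustration: six weeks of sales, price, and a holiday flag
price = np.array([5.5, 6.0, 7.0, 6.5, 5.0, 7.5])
holiday = np.array([1, 0, 0, 1, 0, 1])  # 1 = holiday week, 0 = no holiday
sales = np.array([420, 310, 290, 400, 345, 380], dtype=float)

# The dummy is just another column in the design matrix
X = np.column_stack([np.ones_like(sales), price, holiday])
b, _, _, _ = np.linalg.lstsq(X, sales, rcond=None)
print(b)  # [b0, b1, b2]; b2 estimates the holiday effect at a given price
```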

  Dummy Variable Model: More Than Two Levels

   The number of dummy variables is one less than the number of levels.

  Example:
   Y = house price; X_1 = square feet
   If the style of the house is also thought to matter:
    Style = ranch, split level, colonial
   Three levels, so two dummy variables are needed.

  Dummy Variable Model: More Than Two Levels

  Example: Let "colonial" be the default category, and let X_2 and X_3 be used for the other two categories:

   Y = house price
   X_1 = square feet
   X_2 = 1 if ranch, 0 otherwise
   X_3 = 1 if split level, 0 otherwise

  The multiple regression equation is:

  Ŷ = b_0 + b_1 X_1 + b_2 X_2 + b_3 X_3

  Dummy Variable Model: More Than Two Levels

  Consider the regression equation:

  Ŷ = 20.43 + 0.045 X_1 + 23.53 X_2 + 18.84 X_3

  For a colonial: X_2 = X_3 = 0, so Ŷ = 20.43 + 0.045 X_1
  For a ranch: X_2 = 1, X_3 = 0, so Ŷ = 20.43 + 0.045 X_1 + 23.53
  For a split level: X_2 = 0, X_3 = 1, so Ŷ = 20.43 + 0.045 X_1 + 18.84

  With the same square feet, a ranch will have an estimated average price of 23.53 thousand dollars more than a colonial, and a split level will have an estimated average price of 18.84 thousand dollars more than a colonial.
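  A small sketch of how the two dummy columns are built, with colonial as the baseline (style labels here are hypothetical):

```python
import numpy as np

# Encode a 3-level style variable (ranch / split level / colonial) as two dummies
styles = ["ranch", "colonial", "split level", "ranch", "split level"]
x2 = np.array([1 if s == "ranch" else 0 for s in styles])        # ranch dummy
x3 = np.array([1 if s == "split level" else 0 for s in styles])  # split-level dummy
print(np.column_stack([x2, x3]))
# colonial rows are (0, 0); b2 and b3 measure differences from colonial
```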

Interaction Between Independent Variables

   Hypothesizes interaction between pairs of X variables
    The response to one X variable may vary at different levels of another X variable
   Contains a two-way cross-product term:

  Ŷ = b_0 + b_1 X_1 + b_2 X_2 + b_3 X_3
    = b_0 + b_1 X_1 + b_2 X_2 + b_3 (X_1 X_2)
Effect of Interaction

  Given: Y = β_0 + β_1 X_1 + β_2 X_2 + β_3 X_1 X_2 + ε

   Without the interaction term, the effect of X_1 on Y is measured by β_1
   With the interaction term, the effect of X_1 on Y is measured by β_1 + β_3 X_2
   The effect changes as X_2 changes

Interaction Example

  Suppose X_2 is a dummy variable and the estimated regression equation is

  Ŷ = 1 + 2 X_1 + 3 X_2 + 4 X_1 X_2

  X_2 = 1: Ŷ = 1 + 2 X_1 + 3(1) + 4 X_1 (1) = 4 + 6 X_1
  X_2 = 0: Ŷ = 1 + 2 X_1 + 3(0) + 4 X_1 (0) = 1 + 2 X_1

  [Figure: both lines plotted against X_1. The slopes are different if the effect of X_1 on Y depends on the value of X_2.]
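  A quick sketch verifying how the slope on X_1 shifts with X_2 in this estimated equation:

```python
# Interaction: the slope on X1 depends on the level of X2
b0, b1, b2, b3 = 1, 2, 3, 4
for x2 in (0, 1):
    intercept = b0 + b2 * x2
    slope = b1 + b3 * x2  # effect of X1 on Y at this level of X2
    print(x2, intercept, slope)  # X2=0: Y = 1 + 2*X1; X2=1: Y = 4 + 6*X1
```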

Significance of Interaction Term

   Can perform a partial F-test for the contribution of a variable to see if the addition of an interaction term improves the model
   Multiple interaction terms can be included
    Use a partial F-test for the simultaneous contribution of multiple variables to the model

  Simultaneous Contribution of Independent Variables

   Use a partial F-test for the simultaneous contribution of multiple variables to the model
   Let m variables be an additional set of variables added simultaneously
   To test the hypothesis that the set of m variables improves the model:

  F = ( [SSR(all) - SSR(all except new set of m variables)] / m ) / MSE(all)

  (where F has m and n - k - 1 d.f.)
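  A sketch of this test with made-up sums of squares (the slides give no numeric example here), assuming SciPy:

```python
from scipy import stats

# Hypothetical example: do m = 2 added variables improve the model?
ssr_full, ssr_reduced = 31250.0, 29460.0  # made-up SSR values
mse_full = 2100.0                         # made-up MSE of the full model
m, n, k = 2, 15, 4                        # k counts all variables in the full model
F = ((ssr_full - ssr_reduced) / m) / mse_full
p = stats.f.sf(F, m, n - k - 1)
print(round(F, 3), round(p, 4))
```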

  Chapter Summary

  In this chapter, we have

   Developed the multiple regression model
   Tested the significance of the multiple regression model
   Discussed adjusted r²
   Discussed using residual plots to check model assumptions
   Tested portions of the regression model
   Tested individual regression coefficients
   Used dummy variables
   Evaluated interaction effects