
Journal of Econometrics 99 (2000) 373-386

Simple resampling methods for censored
regression quantiles
Yannis Bilias^{a,b}, Songnian Chen^{c,*}, Zhiliang Ying^{d}
^{a} Departments of Economics and Statistics, 266 Heady Hall, Iowa State University, Ames, IA 50011, USA
^{b} Department of Economics, University of Cyprus, P.O. Box 20537, 1678 Nicosia, Cyprus
^{c} Department of Economics, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong
^{d} Department of Statistics, Hill Center, Busch Campus, Rutgers University, Piscataway, NJ 08855, USA
Received 1 June 2000; accepted 25 June 2000

Abstract
Powell (Journal of Econometrics 25 (1984) 303-325; Journal of Econometrics 32 (1986) 143-155) considered censored regression quantile estimators. The asymptotic covariance matrices of his estimators depend on the error densities and are therefore difficult to estimate reliably. The difficulty may be avoided by applying the bootstrap method (Hahn, Econometric Theory 11 (1995) 105-121). Calculation of the estimators, however, requires solving a nonsmooth and nonconvex minimization problem, resulting in high computational costs in implementing the bootstrap. We propose in this paper computationally simple resampling methods by convexifying Powell's approach in the resampling stage. A major advantage of the new methods is that they can be implemented by efficient linear programming. Simulation studies show that the methods are reliable even with moderate sample sizes. © 2000 Elsevier Science S.A. All rights reserved.

JEL classification: C14; C24

Keywords: Censored regression quantiles; Least absolute deviation; Linear programming; Resampling

* Corresponding author. Tel.: +852-2358-7602; fax: +852-2358-2084.
E-mail address: [email protected] (S. Chen).

0304-4076/00/$ - see front matter © 2000 Elsevier Science S.A. All rights reserved.
PII: S0304-4076(00)00042-7


1. Introduction
Censored quantile regression models proposed by Powell (1984, 1986) have attracted a great deal of interest in the recent literature, particularly due to their robustness to distributional misspecification of the error term and unknown conditional heteroscedasticity. For empirical applications, see Buchinsky (1994), Chamberlain (1994), Chay and Honoré (1998), Horowitz and Neumann (1987), and Melenberg and van Soest (1996), among others. A survey of the subject may be found in Buchinsky (1998).
In order to carry out valid statistical inferences for censored quantile regression models, it is necessary to provide consistent estimators for the asymptotic variance-covariance matrices. However, the asymptotic variance-covariance matrices are difficult to estimate reliably since they involve conditional densities of the error terms. For the uncensored counterpart, Parzen et al. (1994) (PWY) proposed a resampling method that allows approximation of the distribution of regression quantile estimators while avoiding density estimation. Their algorithm requires only repeated executions of the same linear programming algorithm (see, e.g., Koenker and Bassett, 1978a) and is computationally simple and efficient. Hahn (1995), on the other hand, showed that the usual bootstrap approximation is also valid for both the uncensored and censored regression quantiles, and Buchinsky (1995) provided some simulation evidence. In principle, the bootstrap approximation can be used to approximate the distribution of Powell's regression quantile estimators for the censored model. In practice, however, implementation of the bootstrap can be extremely time consuming, as each bootstrap replication involves solving a nonlinear and nonconvex minimization problem and accurate approximation with resampling methods typically requires a large number of such replications. This can lead to prohibitive computational costs in empirical applications.
The main objective of this paper is to propose computationally efficient methods for approximating the distribution of Powell's estimator through certain resampling methods. This is accomplished via convexifying the objective function of Powell (1984) so that the standard linear programming algorithm can be used. Section 2 introduces our modified bootstrap method and a similar modification of the resampling method of Parzen et al. (1994). In Section 3, it is shown that both resampling methods are asymptotically correct. Some simulation results are presented in Section 4. Section 5 contains some concluding remarks.
2. Resampling methods for censored median regression

Consider the censored regression model

$$ y_i = \max\{x_i'\beta_0 + e_i,\ 0\}, \qquad i = 1, \ldots, n, \tag{1} $$



where the $e_i$ are error terms whose conditional medians given $x_i$ are 0, the $x_i$ are vectors of independent variables and $\beta_0$ is a conformable vector of true regression parameters. Powell (1984) proposed the censored least absolute deviations (CLAD) estimator of $\beta_0$,

$$ \hat\beta = \arg\min_{\beta} \sum_{i=1}^{n} \bigl| y_i - \max\{x_i'\beta,\ 0\} \bigr|. \tag{2} $$
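To fix ideas, here is a minimal computational sketch (not from the original paper) of the criterion in (2): it codes the CLAD objective and minimizes it by a generic derivative-free search. The use of numpy/scipy, the function names, and the choice of Nelder-Mead are our own assumptions, not Powell's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def clad_objective(beta, y, X):
    """Powell's CLAD criterion in (2): sum_i |y_i - max(x_i'beta, 0)|."""
    fitted = np.maximum(X @ beta, 0.0)
    return np.abs(y - fitted).sum()

def clad_fit(y, X, beta_init):
    """Minimize the nonsmooth, nonconvex CLAD criterion by direct search.

    Nelder-Mead is only one generic choice; it is not guaranteed to reach
    the global minimum, so restarts from several initial values are advisable.
    """
    res = minimize(clad_objective, beta_init, args=(y, X),
                   method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 20000})
    return res.x
```

Because the criterion is nonsmooth and nonconvex, a search of this kind has to be rerun from scratch in every bootstrap replication; this is the computational burden that the resampling methods below are designed to avoid.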
Under certain regularity conditions, Powell (1984) showed that $\hat\beta$ is consistent and asymptotically normal in the sense that

$$ \sqrt{n}\,(\hat\beta - \beta_0) \;\to_d\; N\Bigl(0,\ \tfrac{1}{4}\,J_0^{-1}\, E[xx'I(x'\beta_0 > 0)]\, J_0^{-1}\Bigr), $$

where $J_0 = E[f(0|x)\,xx'\,I(x'\beta_0 > 0)]$.

The modified bootstrap works as follows. Given a bootstrap sample $(x_1^*, y_1^*), \ldots, (x_n^*, y_n^*)$ drawn with replacement from the observations, the modified bootstrap estimator $\hat\beta^*$ minimizes the convexified criterion

$$ \sum_{i=1}^{n} \bigl| y_i^* - x_i^{*\prime}\beta \bigr|\, I(x_i^{*\prime}\hat\beta > 0) $$

over $\beta$; because the indicator is evaluated at $\hat\beta$ rather than at $\beta$, the minimization is a standard least absolute deviations problem that can be solved by linear programming. Similarly, the modified version of the PWY resampling scheme defines the resampled estimator $\hat\beta^{**}$ through the augmented convex problem

$$ \hat\beta^{**} = \arg\min_{\beta}\ \Bigl\{ \sum_{i=1}^{n} |y_i - x_i'\beta|\, I(x_i'\hat\beta > 0) + |y_{n+1} - x_{n+1}'\beta| \Bigr\}, \tag{9} $$

where $x_{n+1} = 2U^*$ and $y_{n+1}$ is chosen large enough so that $y_{n+1} > x_{n+1}'\beta$ for all candidate values of $\beta$. See Parzen et al. (1994) for details.
To approximate the distribution of $\sqrt{n}\,(\hat\beta^{**} - \hat\beta)$, one can simply simulate a large number of $U^*$'s, say $U_1^*, \ldots, U_M^*$, by generating $M$ sequences of i.i.d. Bernoulli random variables. For each $U_j^*$, solve (9) to get $\hat\beta_j^{**}$. Then the empirical distribution of $\hat\beta_j^{**} - \hat\beta$ over the $\hat\beta_j^{**}$ should be close to the true distribution of $\hat\beta - \beta_0$, as will be shown in the next section.
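As an illustration of how the convexified resampling can be carried out, the sketch below codes one draw of the modified bootstrap: resample $(x_i, y_i)$ pairs, keep the resampled observations with $x_i^{*\prime}\hat\beta > 0$, and solve the resulting least absolute deviations problem as a linear program. The sketch is ours rather than the paper's; it assumes numpy/scipy, and the helper names are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def lad_lp(y, X):
    """Least absolute deviations via linear programming.

    minimize sum_i (u_i + v_i)  subject to  X b + u - v = y,  u, v >= 0,
    with b free; at the optimum u_i + v_i = |y_i - x_i'b|.
    """
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.ones(2 * n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

def modified_bootstrap_draw(y, X, beta_hat, rng):
    """One convexified (modified) bootstrap draw: the indicator is frozen at
    beta_hat, so the resampled problem reduces to ordinary LAD on the
    retained observations and is solvable by the LP above."""
    n = len(y)
    idx = rng.integers(0, n, size=n)      # resample pairs with replacement
    ys, Xs = y[idx], X[idx]
    keep = Xs @ beta_hat > 0              # I(x_i*' beta_hat > 0)
    return lad_lp(ys[keep], Xs[keep])
```

Repeating `modified_bootstrap_draw` $M$ times (with, say, `rng = np.random.default_rng(seed)`) and collecting the draws gives the empirical distribution of $\hat\beta^* - \hat\beta$. A draw from the modified PWY scheme in (9) can reuse the same `lad_lp` routine after appending the artificial observation $(x_{n+1}, y_{n+1})$ to the retained data.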
3. Large sample properties

In this section, we show that the proposed resampling methods are asymptotically valid. We make the following assumptions.

Assumption 1. The true value of the parameter $\beta_0$ is in the interior of a known compact parameter space $B \subset \mathbb{R}^{p+1}$.

Assumption 2. Conditional medians of the $e_i$ given the $x_i$ are uniquely equal to 0.

Assumption 3. $P(x'd \neq 0 \mid x'\beta_0 > 0) > 0$ for $d \neq 0$, $P(x'\beta_0 = 0) = 0$ and $E\|x\|^2 < \infty$.

Assumption 4. $E[f(0|x)\,xx'\,I(x'\beta_0 > 0)]$ is nonsingular.

Assumption 5. The conditional density $f(\,\cdot\,|x)$ is continuous and uniformly bounded.
The assumptions are standard for the desirable properties of Powell's estimator to hold and are similar to those made in Powell (1984), Newey and McFadden (1994) and Pollard (1990). In particular, Assumption 1 is a standard assumption for an estimator defined through an optimization procedure with a nonconvex objective function. Assumption 2 is essential for $\beta_0$ to be uniquely defined; it is implied by the assumption that $f(0|x)$ is nonzero and $f(t|x)$ is continuous at $t = 0$. Assumptions 3 and 4 are crucial to ensure consistency and asymptotic normality of Powell's estimator. Assumption 5 on the conditional density function is imposed to facilitate the Taylor expansions used in the proofs.
Theorem 1. Under Assumptions 1-5 given above, the distribution of the modified bootstrap estimator $\hat\beta^*$, $P(\sqrt{n}\,(\hat\beta^* - \hat\beta) \le z \mid x_i, y_i,\ i = 1, \ldots, n)$, is consistent in the sense that

$$ P\bigl(\sqrt{n}\,(\hat\beta^* - \hat\beta) \le z \mid x_i, y_i,\ i = 1, \ldots, n\bigr) - P\bigl(\sqrt{n}\,(\hat\beta - \beta_0) \le z\bigr) \;\to\; 0 $$

in probability for every $z$.
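In practice, the theorem licenses using the spread of the resampled estimates around $\hat\beta$ to construct confidence intervals. The snippet below shows one standard construction (a basic bootstrap interval); it is not prescribed by the paper, and numpy together with the array layout are our assumptions.

```python
import numpy as np

def basic_bootstrap_ci(beta_hat, beta_stars, alpha=0.05):
    """Confidence intervals from M resampled estimates.

    beta_stars has shape (M, p), one row per modified-bootstrap (or modified
    PWY) draw.  Since the conditional law of beta* - beta_hat approximates
    that of beta_hat - beta_0, its quantiles are reflected around beta_hat.
    """
    diffs = beta_stars - beta_hat                  # (M, p) centered draws
    lo = np.quantile(diffs, alpha / 2, axis=0)
    hi = np.quantile(diffs, 1 - alpha / 2, axis=0)
    return beta_hat - hi, beta_hat - lo            # lower, upper bounds per coefficient
```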
Proof. Let $F_n$ denote the empirical distribution formed from $(x_1, y_1), \ldots, (x_n, y_n)$ and $E_n^*$ ($P_n^*$) the corresponding expectation (probability measure) with respect to $F_n$. So $(x_i^*, y_i^*)$ are i.i.d. with $F_n$ as their common distribution. Define the objective function

$$ G_n^*(\beta, b) = \frac{1}{n} \sum_{i=1}^{n} \bigl[\, |y_i^* - x_i^{*\prime}\beta| - |y_i^* - x_i^{*\prime}b| \,\bigr] I(x_i^{*\prime}b > 0). $$

Let $G_n(\beta, b) = E_n^* G_n^*(\beta, b)$ and $G(\beta, b) = E\,G_n(\beta, b)$. In view of Assumptions 3 and 4, the class of functions $\{(|y - x'\beta| - |y - x'b|)\,I(x'b > 0) : \beta, b \in B\}$ is Euclidean with an integrable envelope in the sense of Pakes and Pollard (1989). Thus, by the uniform law of large numbers (Pollard, 1990),

$$ G_n(\beta, b) - G(\beta, b) = o(1) \quad \text{uniformly in } (\beta, b). $$


Further, by the bootstrap uniform law of large numbers (Giné and Zinn, 1990, Theorem 2.6),

$$ G_n^*(\beta, b) - G_n(\beta, b) = o_B(1) \quad \text{uniformly in } (\beta, b), $$

where, for a random sequence $v_n$ constructed from the bootstrap sample, $v_n = o_B(1)$ if for every $\eta > 0$, $P_n^*(|v_n| \ge \eta) \to 0$ in probability (Hahn, 1995). Since $\hat\beta$ is consistent and $G$ is continuous, it follows that

$$ G_n^*(\beta, \hat\beta) - G(\beta, \beta_0) = o_B(1) \quad \text{uniformly in } \beta. $$

But $G(\beta, \beta_0)$ is continuous in $\beta$ and attains its unique minimum at $\beta = \beta_0$. Consequently, $\hat\beta^*$, the minimizer of $G_n^*(\beta, \hat\beta)$ over $\beta$, is consistent, i.e., $\hat\beta^* - \beta_0 = o_B(1)$ or, equivalently, $\hat\beta^* - \hat\beta = o_B(1)$.
Next, we show that the bootstrap distribution is asymptotically correct. Let $\Delta_i(\beta, b) = |y_i - x_i'\beta| - |y_i - x_i'b|$ and $\Delta_i^*(\beta, b) = |y_i^* - x_i^{*\prime}\beta| - |y_i^* - x_i^{*\prime}b|$, so that $G_n(\beta, b) = n^{-1}\sum_{i=1}^{n} \Delta_i(\beta, b) I(x_i'b > 0)$ and $G_n^*(\beta, b) = n^{-1}\sum_{i=1}^{n} \Delta_i^*(\beta, b) I(x_i^{*\prime}b > 0)$. Further, let $D_i(b) = \operatorname{sgn}(y_i - x_i'b)\, I(x_i'b > 0)\,x_i$, $R_i(\beta, b) = [\Delta_i(\beta, b) - (\beta - b)'D_i(b)]\,I(x_i'b > 0)$ and $W_n(b) = n^{-1/2}\sum_{i=1}^{n} (D_i(b) - E D_i(b))$. Likewise, let $D_i^*(b) = \operatorname{sgn}(y_i^* - x_i^{*\prime}b)\, I(x_i^{*\prime}b > 0)\,x_i^*$, $R_i^*(\beta, b) = [\Delta_i^*(\beta, b) - (\beta - b)'D_i^*(b)]\,I(x_i^{*\prime}b > 0)$ and $W_n^*(b) = n^{-1/2}\sum_{i=1}^{n} (D_i^*(b) - E_n^* D_i^*(b))$ be their resampled counterparts. Then simple algebra results in

$$ G_n(\beta, b) = \frac{1}{\sqrt{n}}\, (\beta - b)'W_n(b) + G(\beta, b) + \frac{1}{n}\sum_{i=1}^{n} \bigl[ R_i(\beta, b) - E R_i(\beta, b) \bigr] \tag{10} $$

and

$$ G_n^*(\beta, b) = \frac{1}{\sqrt{n}}\, (\beta - b)'W_n^*(b) + G_n(\beta, b) + \frac{1}{n}\sum_{i=1}^{n} \bigl[ R_i^*(\beta, b) - E_n^* R_i^*(\beta, b) \bigr]. \tag{11} $$
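The decomposition in (10) and (11) is pure algebra: add and subtract $(\beta - b)'D_i(b)$ inside each summand and then center by the corresponding expectations. The short numeric check below (ours; it assumes numpy and uses an arbitrary simulated design) verifies the underlying pointwise identity $\Delta_i(\beta, b)\, I(x_i'b > 0) = (\beta - b)'D_i(b) + R_i(\beta, b)$, from which (10) follows.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
beta0 = np.array([1.0, -0.5, 0.25])
y = np.maximum(X @ beta0 + rng.standard_t(df=3, size=n), 0.0)   # censored model (1)

beta, b = beta0 + 0.1, beta0 - 0.05       # two arbitrary parameter values

ind = (X @ b > 0).astype(float)           # I(x_i'b > 0)
delta = np.abs(y - X @ beta) - np.abs(y - X @ b)                # Delta_i(beta, b)
D = np.sign(y - X @ b)[:, None] * ind[:, None] * X              # D_i(b)
R = (delta - D @ (beta - b)) * ind                              # R_i(beta, b)

lhs = np.mean(delta * ind)                                      # G_n(beta, b)
rhs = (beta - b) @ D.mean(axis=0) + R.mean()                    # uncentered form of (10)
print(np.allclose(lhs, rhs))                                    # True
```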
The right-hand side of (11) involves (10), which will be dealt with first using the same argument as given in Pollard (1990, p. 63). Specifically, it is easy to verify that

$$ |R_i(\beta, b)| \le 2\,|x_i'(\beta - b)|\; I\bigl(|y_i - x_i'b| \le |x_i'(\beta - b)|\bigr), $$

which implies

$$ \sup_{|\beta - \beta_0| + |b - \beta_0| \le r} \frac{|R_i(\beta, b)|}{|\beta - b|} \;\le\; 2|x_i|\, I\bigl(|y_i - x_i'\beta_0| \le 2r|x_i|\bigr) \;\le\; 2|x_i| \bigl[ I(|e_i| \le 2r|x_i|) + I(|x_i'\beta_0| \le 2r|x_i|) \bigr]. $$


So the maximal inequality of Pollard (1990, p. 38) can be applied to get, for some $C > 0$,

$$ E\Biggl[ \sup_{|\beta - \beta_0| + |b - \beta_0| \le r} \frac{\bigl|\sum_i \bigl(R_i(\beta, b) - E R_i(\beta, b)\bigr)\bigr|}{\sqrt{n}\,|\beta - b|} \Biggr]^2 \;\le\; \frac{C}{n} \sum_{i=1}^{n} |x_i|^2 \bigl[ I(|e_i| \le 2r|x_i|) + I(|x_i'\beta_0| \le 2r|x_i|) \bigr] = o(1) $$

as $n \to \infty$ and $r \to 0$, where the last equality follows from Assumptions 3 and 5. Thus, uniformly in $\beta$ and $b$ over shrinking neighborhoods of $\beta_0$,

$$ \frac{1}{n}\sum_{i=1}^{n} \bigl( R_i(\beta, b) - E R_i(\beta, b) \bigr) = o_p\bigl(|\beta - b|/\sqrt{n}\bigr). \tag{12} $$
By a Taylor expansion, $G(\beta, b) = (\beta - b)'J_0(\beta - b) + o(|\beta - b|^2)$, where $J_0 = E[f(0|x)\,xx'\,I(x'\beta_0 > 0)]$. This and (12) can now be applied to (10) to get

$$ G_n(\beta, b) = \frac{1}{n}\, (\beta - b)' \sum_{i=1}^{n} x_i\, \operatorname{sgn}(y_i - x_i'b)\, I(x_i'b > 0) + (\beta - b)'J_0(\beta - b) + o_p\Bigl( \frac{|\beta - b|}{\sqrt{n}} + |\beta - b|^2 \Bigr) $$

uniformly in $\beta$ and $b$ over shrinking neighborhoods of $\beta_0$. But $\hat\beta$, Powell's estimator, satisfies $\sum_i x_i\,\operatorname{sgn}(y_i - x_i'\hat\beta)\, I(x_i'\hat\beta > 0) = o_p(\sqrt{n})$. Hence,

$$ G_n(\beta, \hat\beta) = (\beta - \hat\beta)'J_0(\beta - \hat\beta) + o_p\Bigl( \frac{|\beta - \hat\beta|}{\sqrt{n}} + |\beta - \hat\beta|^2 \Bigr). $$
Exactly the same arguments as those for (12) lead to

$$ \frac{1}{n}\sum_{i=1}^{n} \bigl( R_i^*(\beta, b) - E_n^* R_i^*(\beta, b) \bigr) = o_B\bigl(|\beta - b|/\sqrt{n}\bigr). $$

Consequently, we obtain

$$ G_n^*(\beta, \hat\beta) = \frac{1}{\sqrt{n}}\, (\beta - \hat\beta)'W_n^*(\hat\beta) + (\beta - \hat\beta)'J_0(\beta - \hat\beta) + o_B\Bigl( \frac{|\beta - \hat\beta|}{\sqrt{n}} + |\beta - \hat\beta|^2 \Bigr). $$

Since, conditional on $\{(y_i, x_i),\ i = 1, \ldots, n\}$, $W_n^*(\hat\beta)$ is a sum of i.i.d. random variables with mean 0 and variance-covariance matrix converging to