Proof of Theorem 3.1. Write $u = (u^{c\prime}, u^{d\prime})'$ with $u^d = (u^d_1, \ldots, u^d_{p_d})'$. Let
$$
G_{i,j} \equiv G_{i,j}(u) = g_j(U_i^c, U_i^d) - g_j(u^c, u^d) - \dot g_j(u^c, u^d)'(U_i^c - u^c),
$$
and $G_i \equiv G_i(u) = (G_{i,1}(u), \ldots, G_{i,d}(u))'$. Let $R_i \equiv R_i(u) = G_i(u)'X_i$. Then
$$
Y_i = \sum_{j=1}^d \big[g_j(u^c, u^d) + \dot g_j(u^c, u^d)'(U_i^c - u^c)\big]X_{i,j} + \varepsilon_i + R_i
= \xi_{i,u}'\,\alpha(u) + \varepsilon_i + R_i,
$$
where $\alpha(u)$ and $\xi_{i,u}$ are defined after Equation (6). Let $\varepsilon \equiv (\varepsilon_1, \ldots, \varepsilon_n)'$ and $R \equiv R(u) = (R_1(u), \ldots, R_n(u))'$. Then we have the following bias-variance decomposition:
$$
\begin{aligned}
H[\hat\alpha_n(u;h,\lambda) - \alpha(u)]
&= \big[H^{-1}\xi' K_{h\lambda} Q_h \Omega_n^{-1} Q_h' K_{h\lambda}\xi H^{-1}\big]^{-1} H^{-1}\xi' K_{h\lambda} Q_h \Omega_n^{-1} Q_h' K_{h\lambda} R \\
&\quad + \big[H^{-1}\xi' K_{h\lambda} Q_h \Omega_n^{-1} Q_h' K_{h\lambda}\xi H^{-1}\big]^{-1} H^{-1}\xi' K_{h\lambda} Q_h \Omega_n^{-1} Q_h' K_{h\lambda}\varepsilon \\
&= \big[S_n(u)'\Omega_n^{-1}S_n(u)\big]^{-1}S_n(u)'\Omega_n^{-1}B_n(u)
+ \big[S_n(u)'\Omega_n^{-1}S_n(u)\big]^{-1}S_n(u)'\Omega_n^{-1}V_n(u),
\end{aligned}
\tag{A.1}
$$
where $S_n(u) \equiv S_n(u;h,\lambda) = n^{-1}Q_h'K_{h\lambda}\xi H^{-1}$, $B_n(u) \equiv B_n(u;h,\lambda) = n^{-1}Q_h'K_{h\lambda}R$, and $V_n(u) \equiv V_n(u;h,\lambda) = n^{-1}Q_h'K_{h\lambda}\varepsilon$.
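The display (A.1) is an exact algebraic identity for any positive definite weight matrix, so it can be checked numerically. The following sketch (Python/NumPy) does so in a toy design with $p_c = 1$, $p_d = 0$, $d = 1$, $k = 2$; the data-generating process, the Epanechnikov kernel, and the particular weight matrix `Omega_n` are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Numerical check of the algebraic identity (A.1) in a toy design with
# p_c = 1, p_d = 0, d = 1, k = 2.  The DGP, the Epanechnikov kernel, and the
# weight matrix Omega_n below are illustrative assumptions, not the paper's.
rng = np.random.default_rng(0)
n, h, u0 = 400, 0.3, 0.5

g = lambda u: np.sin(2 * np.pi * u)               # coefficient function g(.)
dg = lambda u: 2 * np.pi * np.cos(2 * np.pi * u)  # its derivative \dot g(.)
w = lambda t: 0.75 * np.maximum(1 - t ** 2, 0)    # Epanechnikov kernel W(.)

U = rng.uniform(0, 1, n)
V = rng.normal(size=n)                            # instrument, Q_i = (V_i, 1)'
X = V + rng.normal(size=n)                        # regressor correlated with V
eps = rng.normal(scale=0.2, size=n)
Y = g(U) * X + eps

eta = (U - u0) / h                                # eta_i(u^c)
Kh = w(eta) / h                                   # K_{h,iu} (no discrete part)
K = np.diag(Kh)
xi = np.column_stack([X, X * (U - u0)])           # xi_{i,u}, n x 2
Qh = np.column_stack([V, np.ones(n), V * eta, eta])  # Q_{h,iu}, n x k(1+p_c) = n x 4
H = np.diag([1.0, h])

Omega_n = Qh.T @ K @ Qh / n                       # one convenient p.d. weight matrix
Mid = Qh @ np.linalg.solve(Omega_n, Qh.T)         # Q_h Omega_n^{-1} Q_h'
A = xi.T @ K @ Mid @ K @ xi
alpha_hat = np.linalg.solve(A, xi.T @ K @ Mid @ K @ Y)   # local linear GMM estimate at u0

# Right-hand side of (A.1) built from S_n(u), B_n(u), V_n(u).
alpha0 = np.array([g(u0), dg(u0)])                # alpha(u) = (g(u), \dot g(u))'
R = (g(U) - g(u0) - dg(u0) * (U - u0)) * X        # R_i(u)
S_n = Qh.T @ K @ xi @ np.linalg.inv(H) / n
B_n = Qh.T @ K @ R / n
V_n = Qh.T @ K @ eps / n
M = S_n.T @ np.linalg.solve(Omega_n, S_n)
rhs = np.linalg.solve(M, S_n.T @ np.linalg.solve(Omega_n, B_n + V_n))

print(np.allclose(H @ (alpha_hat - alpha0), rhs)) # True: (A.1) holds exactly
```

Because (A.1) is an identity, the choice of weight matrix only affects the value of $\hat\alpha_n(u;h,\lambda)$, not whether the two sides agree.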

We prove Theorem 3.1 by proving the following three lemmata.

Lemma 1. $S_n(u) = S(u) + o_P(1)$, where $S(u)$ is defined in (11).

Proof. Recall that $\eta_i(u^c) \equiv (U_i^c - u^c)/h$ and $Q_i \equiv Q(V_i)$. By the definition of $Q_{h,iu}$ in (7), we have
$$
S_n(u) = \frac{1}{n}\sum_{i=1}^n K_{h\lambda,iu}\begin{pmatrix} Q_i \\ Q_i\otimes\eta_i(u^c)\end{pmatrix}\big(X_i',\; X_i'\otimes(U_i^c-u^c)'\big)H^{-1}
= \begin{pmatrix} S_{n,11} & S_{n,12}\\ S_{n,21} & S_{n,22}\end{pmatrix},
$$
where $S_{n,11}$ is $k\times d$, $S_{n,12}$ is $k\times p_c d$, $S_{n,21}$ is $kp_c\times d$, $S_{n,22}$ is $kp_c\times p_c d$, and
$$
\begin{aligned}
S_{n,11} &\equiv S_{n,11}(u;h,\lambda) = \frac{1}{n}\sum_{i=1}^n K_{h\lambda,iu}\,Q_iX_i', &
S_{n,12} &\equiv S_{n,12}(u;h,\lambda) = \frac{1}{n}\sum_{i=1}^n K_{h\lambda,iu}\,Q_iX_i'\otimes\eta_i(u^c)',\\
S_{n,21} &\equiv S_{n,21}(u;h,\lambda) = \frac{1}{n}\sum_{i=1}^n K_{h\lambda,iu}\,Q_iX_i'\otimes\eta_i(u^c), &
S_{n,22} &\equiv S_{n,22}(u;h,\lambda) = \frac{1}{n}\sum_{i=1}^n K_{h\lambda,iu}\,Q_iX_i'\otimes[\eta_i(u^c)\eta_i(u^c)'].
\end{aligned}
$$
It suffices to show that $S_{n,lj} = S_{lj}(u) + o_P(1)$ for $l, j = 1, 2$, where $S_{lj}(u)$ denotes the $(l,j)$ block of the block diagonal matrix $S(u)$. By Assumptions A1-A3,
$$
\begin{aligned}
E[S_{n,11}] &= E[Q_iX_i'K_{h\lambda,iu}]
= E[Q_iX_i'W_{h,iu^c}\mid d(U_i^d,u^d)=0]\,p(u^d) + \sum_{s=1}^{p_d}E[Q_iX_i'W_{h,iu^c}L_{\lambda,iu^d}\mid d(U_i^d,u^d)=s]\,P(d(U_i^d,u^d)=s)\\
&= E[\Sigma_1(U_i^c,U_i^d)W_{h,iu^c}\mid d(U_i^d,u^d)=0]\,p(u^d) + O(\lambda)
= \int \Sigma_1(u^c+h\odot t,u^d)\,f_U(u^c+h\odot t,u^d)\,W(t)\,dt + O(\lambda)\\
&= \Sigma_1(u)f_U(u) + O(h^2+\lambda).
\end{aligned}
\tag{A.2}
$$
Define two column vectors $\omega_1\in\mathbb{R}^k$ and $\omega_2\in\mathbb{R}^d$ such that $\|\omega_l\| = 1$ for $l = 1, 2$. Then it is easy to show that $\mathrm{var}(\omega_1'S_{n,11}\omega_2) = \frac{1}{n}\mathrm{Var}(\omega_1'Q_iX_i'\omega_2\,K_{h\lambda,iu}) = O((nh)^{-1}) = o(1)$. It follows by Chebyshev's inequality that $S_{n,11} = \Sigma_1(u)f_U(u) + o_P(1)$. Similarly,
$$
\begin{aligned}
S_{n,22} &= E[S_{n,22}] + O_P((nh)^{-1/2}) = E\big\{Q_iX_i'\otimes[\eta_i(u^c)\eta_i(u^c)']\,K_{h\lambda,iu}\big\} + O_P((nh)^{-1/2})\\
&= \int[\Sigma_1(u^c+h\odot t,u^d)\otimes tt']\,f_U(u^c+h\odot t,u^d)\,W(t)\,dt + O_P\big(\lambda + (nh)^{-1/2}\big)
= \mu_{2,1}[\Sigma_1(u)\otimes I_{p_c}]f_U(u) + o_P(1).
\end{aligned}
$$
By the same token, $S_{n,12} = o_P(1)$ and $S_{n,21} = o_P(1)$. This completes the proof.
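As a quick sanity check on Lemma 1 and on the first chain of equalities in (A.2), the sketch below computes the scalar analogues of $S_{n,11}$, $S_{n,12}$, and $S_{n,22}$ in a toy mixed-data design with $p_c = p_d = 1$, $d = k = 1$, and $Q_i = V_i$. The data-generating process, kernel, and bandwidth rates are illustrative assumptions only; in this design $\Sigma_1(u)f_U(u) = 0.5$ at $u = (0.5, 0)$, so $S_{n,11}$ should approach $0.5$, $S_{n,12}$ should vanish, and $S_{n,22}$ should approach $\mu_{2,1}\,\Sigma_1(u)f_U(u) = 0.1$ for the Epanechnikov kernel ($\mu_{2,1} = 0.2$).

```python
import numpy as np

# Toy illustration of Lemma 1 / (A.2) with p_c = p_d = 1, d = k = 1 and Q_i = V_i.
# The DGP, kernel, and bandwidth rates are illustrative assumptions.  Here
# Sigma_1(u) = E[V_i X_i | U_i = u] = 1 and f_U(u) = 1 * 0.5 at u = (0.5, 0),
# so S_{n,11} -> 0.5, S_{n,12} -> 0, and S_{n,22} -> mu_{2,1} * 0.5 = 0.1.
rng = np.random.default_rng(1)
w = lambda t: 0.75 * np.maximum(1 - t ** 2, 0)    # Epanechnikov W(.), mu_{2,1} = 0.2

def S_blocks(n, h, lam, u_c=0.5, u_d=0):
    Uc = rng.uniform(0, 1, n)
    Ud = rng.integers(0, 2, n)                    # two discrete cells, prob. 1/2 each
    V = rng.normal(size=n)
    X = V + 0.5 * rng.normal(size=n)
    eta = (Uc - u_c) / h                          # eta_i(u^c)
    K = (w(eta) / h) * lam ** (Ud != u_d)         # K_{h lambda, iu} = W_{h,iu^c} * lambda^{1{U_i^d != u^d}}
    return np.mean(K * V * X), np.mean(K * V * X * eta), np.mean(K * V * X * eta ** 2)

for n in (500, 5_000, 50_000):
    print(n, S_blocks(n, h=n ** -0.2, lam=n ** -0.4))
# S_{n,11} approaches 0.5, S_{n,12} shrinks toward 0, and S_{n,22} approaches 0.1.
```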

Lemma 2. $\sqrt{nh}\,B_n(u) = \sqrt{nh}\,B(u;h,\lambda) + o_P(1)$, where $B(u;h,\lambda)$ is defined in (13).

Proof. Write $\sqrt{nh}\,B_n(u) = \frac{1}{n}\sum_{i=1}^n\sqrt{nh}\,K_{h\lambda,iu}\,Q_{h,iu}R_i = \frac{1}{n}\sum_{i=1}^n\varsigma_i$, where
$$
\varsigma_i = \sqrt{nh}\sum_{j=1}^d\big[g_j(U_i^c,U_i^d)-g_j(u^c,u^d)-\dot g_j(u^c,u^d)'(U_i^c-u^c)\big]\begin{pmatrix}Q_iX_{i,j}\\ Q_iX_{i,j}\otimes\eta_i(u^c)\end{pmatrix}K_{h\lambda,iu}
= \sqrt{nh}\begin{pmatrix}Q_iX_i'G_i\\ (Q_iX_i'G_i)\otimes\eta_i(u^c)\end{pmatrix}K_{h\lambda,iu}.
$$
It follows that
$$
\sqrt{nh}\,E[B_n(u)] = E(\varsigma_i) = E[\varsigma_i\mid d(U_i^d,u^d)=0]\,p(u^d) + E[\varsigma_i\mid d(U_i^d,u^d)=1]\,P(d(U_i^d,u^d)=1) + O(\sqrt{nh}\,\lambda^2) \equiv b_{n,1} + b_{n,2} + o(1).
$$
On the set $\{U_i^d = u^d,\; W_{h,iu^c} > 0\}$,
$$
g_j(U_i^c,U_i^d) - g_j(u^c,u^d) - \dot g_j(u^c,u^d)'(U_i^c-u^c) = \tfrac12 A_{i,j}(u) + o(h^2),
$$
where $A_{i,j}(u)\equiv(U_i^c-u^c)'\,\ddot g_j(u)\,(U_i^c-u^c)$ and $\ddot g_j(u)\equiv\partial\dot g_j(u)/\partial u^{c\prime}$. Let $A_i(u)\equiv(A_{i,1}(u),\ldots,A_{i,d}(u))'$. Then we have
$$
\begin{aligned}
b_{n,1} &= \frac{\sqrt{nh}}{2}\,E_0\!\left[\begin{pmatrix}Q_iX_i'A_i(u)\\ (Q_iX_i'A_i(u))\otimes\eta_i(u^c)\end{pmatrix}W_{h,iu^c}\right]p(u^d) + o(\sqrt{nh}\,h^2)
= \frac{\sqrt{nh}}{2}\,E_0\!\left[\begin{pmatrix}\Sigma_1(U_i)A_i(u)\\ (\Sigma_1(U_i)A_i(u))\otimes\eta_i(u^c)\end{pmatrix}W_{h,iu^c}\right]p(u^d) + o(1)\\
&= \sqrt{nh}\,\frac{\mu_{2,1}}{2}\,f_U(u)\begin{pmatrix}\Sigma_1(u)A(u;h)\\ 0_{kp_c\times1}\end{pmatrix} + o(1),
\end{aligned}
$$
and
$$
\begin{aligned}
b_{n,2} &= \sqrt{nh}\,E_1\!\left\{\sum_{j=1}^d\big[g_j(U_i)-g_j(u)-\dot g_j(u)'(U_i^c-u^c)\big]\begin{pmatrix}Q_iX_{i,j}\\ Q_iX_{i,j}\otimes\eta_i(u^c)\end{pmatrix}K_{h\lambda,iu}\right\}p_1
= \sqrt{nh}\,E_1\!\left[\begin{pmatrix}Q_iX_i'G_i\\ (Q_iX_i'G_i)\otimes\eta_i(u^c)\end{pmatrix}K_{h\lambda,iu}\right]p_1\\
&= \sqrt{nh}\,E_1\!\left[\begin{pmatrix}\Sigma_1(U_i)[g(U_i)-g(u)] - \big(\Sigma_1(U_i)\otimes\eta_i(u^c)'\big)\dot g(u)\\ \Sigma_1(U_i)[g(U_i)-g(u)]\otimes\eta_i(u^c) - \big(\Sigma_1(U_i)\otimes[\eta_i(u^c)\eta_i(u^c)']\big)\dot g(u)\end{pmatrix}K_{h\lambda,iu}\right]p_1 + o(1)\\
&= \sqrt{nh}\sum_{\tilde u^d\in\mathcal U^d}\sum_{s=1}^{p_d}\lambda_s\,I_s(\tilde u^d,u^d)\,f_U(u^c,\tilde u^d)\begin{pmatrix}\Sigma_1(u^c,\tilde u^d)\,[g(u^c,\tilde u^d)-g(u^c,u^d)]\\ -\mu_{2,1}\big(\Sigma_1(u^c,\tilde u^d)\otimes I_{p_c}\big)\dot g(u^c,u^d)\end{pmatrix} + o(1),
\end{aligned}
$$
where $A(u;h)$ and $\dot g(u)$ are defined in Section 3.2, $E_l\{\cdot\} = E\{\cdot\mid d(U_i^d,u^d) = l\}$ for $l = 0$ and $1$, and $p_1 = P(d(U_i^d,u^d) = 1)$. Consequently, $\sqrt{nh}\,E[B_n(u)] = \sqrt{nh}\,B(u;h,\lambda) + o(1)$. Noting that $\mathrm{var}(\sqrt{nh}\,B_n(u)) = O(h^2+\lambda) = o(1)$, the conclusion then follows by Chebyshev's inequality.

Lemma 3. $\sqrt{nh}\,V_n(u) = n^{-1/2}h^{1/2}\sum_{i=1}^n\begin{pmatrix}Q_i\varepsilon_i\\ Q_i\varepsilon_i\otimes\frac{U_i^c-u^c}{h}\end{pmatrix}K_{h\lambda,iu}\xrightarrow{d}N(0,\Upsilon(u))$, where $\Upsilon(u)$ is defined in (12).

Proof. Let $c$ be a unit vector on $\mathbb{R}^{k(p_c+1)}$. Let $\zeta_i = h^{1/2}c'\begin{pmatrix}Q_i\varepsilon_i\\ Q_i\varepsilon_i\otimes\eta_i(u^c)\end{pmatrix}K_{h\lambda,iu}$. By the Cramér–Wold device, it suffices to prove
$$
\sqrt{nh}\,c'V_n(u) = n^{-1/2}\sum_{i=1}^n\zeta_i\xrightarrow{d}N(0,\, c'\Upsilon c).
$$
By the law of iterated expectations, $E(\zeta_i) = 0$. Now by arguments similar to those used in the proof of Lemma 1,
$$
\begin{aligned}
\mathrm{var}\big(\sqrt{nh}\,c'V_n(u)\big) = \mathrm{var}(\zeta_1)
&= h\,c'E\left[\begin{pmatrix}Q_iQ_i'\varepsilon_i^2 & (Q_iQ_i'\otimes\eta_i(u^c)')\varepsilon_i^2\\ (Q_iQ_i'\otimes\eta_i(u^c))\varepsilon_i^2 & (Q_iQ_i'\otimes[\eta_i(u^c)\eta_i(u^c)'])\varepsilon_i^2\end{pmatrix}K_{h\lambda,iu}^2\right]c\\
&= h\,c'E\left[\begin{pmatrix}Q_iQ_i'\sigma^2(V_i) & (Q_iQ_i'\otimes\eta_i(u^c)')\sigma^2(V_i)\\ (Q_iQ_i'\otimes\eta_i(u^c))\sigma^2(V_i) & (Q_iQ_i'\otimes[\eta_i(u^c)\eta_i(u^c)'])\sigma^2(V_i)\end{pmatrix}K_{h\lambda,iu}^2\right]c\\
&= h\,c'E\left[\begin{pmatrix}\Sigma_2(U_i) & \Sigma_2(U_i)\otimes\eta_i(u^c)'\\ \Sigma_2(U_i)\otimes\eta_i(u^c) & \Sigma_2(U_i)\otimes[\eta_i(u^c)\eta_i(u^c)']\end{pmatrix}K_{h\lambda,iu}^2\right]c
= c'\Upsilon c + o(1).
\end{aligned}
$$
The result follows as it is standard to check the Liapounov condition; see, for example, Li and Racine (2007).

By Lemmas 1-3 and the Slutsky lemma,
$$
\begin{aligned}
&\sqrt{nh}\,\big\{H[\hat\alpha_n(u) - \alpha(u)] - [S'\Omega^{-1}S]^{-1}S'\Omega^{-1}B(u;h,\lambda)\big\}\\
&\quad= [S_n(u)'\Omega_n^{-1}S_n(u)]^{-1}S_n(u)'\Omega_n^{-1}\sqrt{nh}\,V_n(u)
+ [S_n(u)'\Omega_n^{-1}S_n(u)]^{-1}S_n(u)'\Omega_n^{-1}\sqrt{nh}\,B_n(u)
- [S(u)'\Omega^{-1}S(u)]^{-1}S(u)'\Omega^{-1}\sqrt{nh}\,B(u;h,\lambda)\\
&\quad\xrightarrow{d}N\big(0,\;[S'\Omega^{-1}S]^{-1}S'\Omega^{-1}\,\Upsilon\,\Omega^{-1}S\,[S'\Omega^{-1}S]^{-1}\big),
\end{aligned}
$$
where the dependence of $S$, $\Omega$, and $\Upsilon$ on $u$ is suppressed. This completes the proof of Theorem 3.1.
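Before turning to Theorem 3.2, here is a small Monte Carlo sketch of the normal approximation in Lemma 3 for the first component of $\sqrt{nh}\,V_n(u)$. The design ($p_c = 1$, no discrete covariate, $k = 1$, $Q_i = V_i$ independent of $U_i$, homoskedastic errors, Epanechnikov kernel) is an illustrative assumption; under it, the variance calculation in the proof of Lemma 3 reduces the relevant element of $\Upsilon(u)$ to $f_U(u)\,\sigma^2\,E[Q_i^2\mid U_i = u]\int W(t)^2dt$, and nothing beyond that simplification is taken from the paper's definition (12).

```python
import numpy as np

# Monte Carlo sketch of Lemma 3 for the first component of sqrt(nh) V_n(u):
# p_c = 1, no discrete covariate, k = 1, Q_i = V_i independent of U_i, and
# homoskedastic errors; all are illustrative assumptions.  In this design the
# limiting variance is f_U(u) * sigma^2 * E[Q_i^2 | U_i = u] * int W(t)^2 dt.
rng = np.random.default_rng(2)
w = lambda t: 0.75 * np.maximum(1 - t ** 2, 0)    # Epanechnikov W(.), int W^2 dt = 0.6
n, h, u0, sigma = 2_000, 2_000 ** -0.2, 0.5, 0.2

def sqrt_nh_Vn1():
    U = rng.uniform(0, 1, n)
    Q = rng.normal(size=n)                        # Q_i = V_i
    eps = sigma * rng.normal(size=n)
    K = w((U - u0) / h) / h                       # K_{h,iu}
    return np.sqrt(n * h) * np.mean(Q * eps * K)  # first element of sqrt(nh) V_n(u)

draws = np.array([sqrt_nh_Vn1() for _ in range(3_000)])
print(draws.var(), sigma ** 2 * 0.6)              # sample variance vs. 1 * sigma^2 * 1 * 0.6
print(draws.mean(), (np.abs(draws) < 1.96 * draws.std()).mean())  # ~0 and ~0.95 under normality
```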
Proof of Theorem 3.2. Let $S_n(u;h,\lambda)$, $B_n(u;h,\lambda)$, and $V_n(u;h,\lambda)$ be as defined after (A.1). Let $\bar B_n(u;h,\lambda) \equiv B_n(u;h,\lambda) - B(u;h,\lambda)$. Let $J_{1n}\equiv S_n(u;\hat h,\hat\lambda) - S_n(u;h,\lambda)$, $J_{2n}\equiv\sqrt{nh}\,[V_n(u;\hat h,\hat\lambda) - V_n(u;h,\lambda)]$, and $J_{3n}\equiv\sqrt{nh}\,[\bar B_n(u;\hat h,\hat\lambda) - \bar B_n(u;h,\lambda)]$. By the result in Theorem 3.1 and the expansion in (A.1), it suffices to show that (i) $J_{1n} = o_P(1)$, (ii) $J_{2n} = o_P(1)$, and (iii) $J_{3n} = o_P(1)$. For notational simplicity, for the moment we assume that $p_c = p_d = 1$, so that the bandwidths reduce to scalars $h$ and $\lambda$, and we likewise treat $U_i^c, u^c$ and $U_i^d, u^d$ as scalars.

Let $h = bn^{-\delta}$ and $\lambda = rn^{-\sigma}$ for some $b\in[\underline b,\bar b]$, $r\in[\underline r,\bar r]$, $\delta>0$, and $\sigma>0$. Note that when $p_c = p_d = 1$, we can write $hK_{h\lambda,iu}$ as
$$
hK_{h\lambda,iu} = w\Big(\frac{U_i^c-u^c}{h}\Big)\lambda^{\mathbf 1\{U_i^d\neq u^d\}}
= w\Big(\frac{U_i^c-u^c}{bn^{-\delta}}\Big)(rn^{-\sigma})^{\mathbf 1\{U_i^d\neq u^d\}}\equiv K_{br,iu}.
$$
For any nonnegative random variable $\varsigma_i$, define $m_\varsigma(u) = E(\varsigma_i\mid U_i = u)$; for the choices of $\varsigma_i$ used below, $m_\varsigma$ is continuous and uniformly bounded. Then by the $C_r$ inequality, for any $\gamma > 0$,
$$
\begin{aligned}
E\big[|K_{b'r',iu} - K_{br,iu}|^\gamma\,\varsigma_i\big]
&= E\big[|h'K_{h'\lambda',iu} - hK_{h\lambda,iu}|^\gamma\,m_\varsigma(U_i)\big]\\
&\le c_\gamma\big\{E[|h'K_{h'\lambda',iu} - hK_{h\lambda',iu}|^\gamma\,m_\varsigma(U_i)] + E[|hK_{h\lambda',iu} - hK_{h\lambda,iu}|^\gamma\,m_\varsigma(U_i)]\big\}
\equiv c_\gamma\{K_1 + K_2\},
\end{aligned}
$$
say, where $c_\gamma = 1$ if $\gamma\in(0,1]$ and $c_\gamma = 2^{\gamma-1}$ if $\gamma>1$. Here and in the remainder of this proof a prime does not denote transpose. Let $c_b = \bar b/\underline b$. By the fact that $\lambda'\in(0,1]$ and Assumption A5, for any $b, b'\in[\underline b,\bar b]$,
$$
\begin{aligned}
K_1 &= \sum_{u_i^d\in\mathcal U^d}\int\Big|\Big[w\Big(\frac{u_i^c-u^c}{h'}\Big) - w\Big(\frac{u_i^c-u^c}{h}\Big)\Big]\lambda'^{\,\mathbf 1\{u_i^d\neq u^d\}}\Big|^\gamma m_\varsigma(u_i^c,u_i^d)\,f(u_i^c,u_i^d)\,du_i^c\\
&\le h\sum_{u_i^d\in\mathcal U^d}\int_{-c_wc_b}^{c_wc_b}\big|w(vh/h') - w(v)\big|^\gamma\,m_\varsigma(u^c+hv,u_i^d)\,f(u^c+hv,u_i^d)\,dv\\
&\le C_{1\varsigma}C_w^\gamma\,h\,|1 - h/h'|^\gamma\int_{-c_wc_b}^{c_wc_b}|v|^\gamma\,dv
= C_{1\varsigma}C_w^\gamma\,h\,\Big|\frac{b'-b}{b'}\Big|^\gamma\int_{-c_wc_b}^{c_wc_b}|v|^\gamma\,dv
\le C_{2\varsigma}\,h\,|b'-b|^\gamma,
\end{aligned}
$$
where $C_{s\varsigma}$ is a finite constant that depends on $\varsigma_i$; for example, $C_{1\varsigma}\equiv\sup_{u^c\in\mathcal U^c}\sum_{u_i^d\in\mathcal U^d}\int m_\varsigma(u^c+hv,u_i^d)\,f(u^c+hv,u_i^d)\,dv < \infty$. Similarly,
$$
\begin{aligned}
K_2 &= \sum_{u_i^d\in\mathcal U^d,\,u_i^d\neq u^d}\int\Big|w\Big(\frac{u_i^c-u^c}{h}\Big)\Big|^\gamma|\lambda'-\lambda|^\gamma\,m_\varsigma(u_i^c,u_i^d)\,f(u_i^c,u_i^d)\,du_i^c\\
&= h|\lambda'-\lambda|^\gamma\sum_{u_i^d\in\mathcal U^d,\,u_i^d\neq u^d}\int_{-c_wc_b}^{c_wc_b}w(v)^\gamma\,m_\varsigma(u^c+hv,u_i^d)\,f(u^c+hv,u_i^d)\,dv
\le C_{3\varsigma}\,h\,|\lambda'-\lambda|^\gamma \le C_{3\varsigma}\,h\,n^{-\gamma\sigma}|r'-r|^\gamma.
\end{aligned}
$$
It follows that
$$
E\big[|K_{b'r',iu} - K_{br,iu}|^\gamma\,\varsigma_i\big] \le c_\gamma\,(C_{2\varsigma}\vee C_{3\varsigma})\,h\,\big(|b'-b|^\gamma + |r'-r|^\gamma\big),
\tag{A.3}
$$
where $a\vee b = \max(a,b)$. Then by the C