
10.3 AUTOCORRELATION MODELING WITH TES PROCESSES

Having established the importance of correlation modeling, we turn our attention to the issue of modeling correlations in empirical data, say, {z_1, ..., z_N}, assumed to come from a stationary process (see Section 3.9). The first step is to check whether the empirical time series has appreciable autocorrelations for a range of lags, τ = 1, ..., T. To this end, we collect all pairs of observations {(z_1, z_{1+τ}), (z_2, z_{2+τ}), ..., (z_{N−τ}, z_N)} for each given lag τ, and compute the corresponding sample correlation coefficient estimates, using Eq. 3.100, namely,

    ρ̂(τ) = [1 / ((N − τ) s²)] Σ_{i=1}^{N−τ} (z_i − z̄)(z_{i+τ} − z̄),   τ = 1, ..., T,

where z̄ denotes the sample mean and s² the sample variance of the data.

In practice, an inspection of the empirical autocorrelation function, ρ̂(τ), will suffice to determine whether significant autocorrelations are present in the data (a rule of thumb is to model those autocorrelation lags with absolute values exceeding about 0.15, and to ignore autocorrelation function tails that fall below this threshold).
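As an illustration, the lag-by-lag computation above can be sketched in Python. This is a hedged sketch: the function names and the 0.15 cutoff argument are our own, and the estimator normalizes by the full-sample variance, one common convention that may differ slightly from the normalizing constant of Eq. 3.100.

```python
import numpy as np

def sample_autocorrelation(z, max_lag):
    """Empirical autocorrelation rho_hat(tau) for tau = 1, ..., max_lag.

    For each lag tau, pairs (z_i, z_{i+tau}) are formed as in the text,
    and their sample covariance is normalized by the sample variance.
    """
    z = np.asarray(z, dtype=float)
    n = len(z)
    z_bar = z.mean()
    var = np.sum((z - z_bar) ** 2) / n          # sample variance
    rho = np.empty(max_lag)
    for tau in range(1, max_lag + 1):
        # covariance of the (N - tau) overlapping pairs at lag tau
        cov = np.sum((z[: n - tau] - z_bar) * (z[tau:] - z_bar)) / n
        rho[tau - 1] = cov / var
    return rho

def significant_lags(rho, threshold=0.15):
    """Lags whose autocorrelation magnitude exceeds the rule-of-thumb cutoff."""
    return [tau for tau, r in enumerate(rho, start=1) if abs(r) > threshold]
```

For example, applying these functions to a strongly autocorrelated series (such as an AR(1) sequence) flags the low-order lags as the ones worth modeling, while the decaying tail falls below the threshold and is ignored.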

Having established the presence of significant autocorrelations in the empirical data, the next problem faced by the modeler is how to construct a model incorporating the observed autocorrelations. This problem has two aspects:

1. How can one define a versatile family of stationary stochastic processes that can simultaneously model a broad range of marginal distributions and autocorrelation functions?

2. How does one go about fitting a particular process from such a family to empirical data obtained from field measurements?

The generality called for by the first aspect appears to be quite daunting. In particular, how can we “cook up” a stationary process with an arbitrarily prescribed marginal distribution? And even if we can, would we still have additional modeling flexibility to fit (or at least approximate) the empirical autocorrelation function? Fortunately, both questions can be answered in the affirmative for a large number of cases. Recent developments in the theory of time series modeling have yielded new methodologies that can do just that. These methodologies include the aforementioned TES (transform-expand-sample) class (Melamed [1991, 1997]) and the ARTA (autoregressive-to-anything) class (Cario and Nelson [1996]). Here we shall focus on TES processes.

The TES class has several modeling advantages:

1. It is relatively easy (and fast) to generate a sequence of TES variates.

2. Any marginal distribution can be fitted exactly.

3. A broad range of autocorrelation functions can be approximated, including decaying, oscillating, and alternating autocorrelation functions. Fitting algorithms have been devised, but they require software support.

4. The physical meaning of TES processes suggests how to change the characteristics of the associated autocorrelation function. This is very useful in sensitivity analysis of correlations, a main component of correlation analysis.
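To give a flavor of why TES variates are fast to generate and can match any marginal exactly, here is a minimal sketch of a TES-style generator, assuming the standard construction: a background sequence formed by modulo-1 addition of i.i.d. innovations (which remains uniform on [0, 1) regardless of the innovation distribution), a stitching transform that smooths sample paths while preserving uniformity, and inversion of the target distribution function. The names `tes_plus`, `inv_cdf`, and `v_sampler` are our own, not part of any TES software.

```python
import numpy as np

def tes_plus(n, inv_cdf, v_sampler, xi=0.5, seed=None):
    """Sketch of a TES-style variate generator.

    Background: U_k = <U_{k-1} + V_k>, where <x> is the fractional part
    of x; modulo-1 addition keeps each U_k uniform on [0, 1).
    Stitching:  S_xi maps uniforms to uniforms but yields smoother paths.
    Foreground: X_k = inv_cdf(S_xi(U_k)) has exactly the target marginal,
    by the inverse-transform method.
    """
    rng = np.random.default_rng(seed)
    u = rng.random()                       # U_0 ~ Uniform[0, 1)
    out = np.empty(n)
    for k in range(n):
        u = (u + v_sampler(rng)) % 1.0     # modulo-1 addition <U_{k-1} + V_k>
        # stitching transform S_xi
        s = u / xi if u < xi else (1.0 - u) / (1.0 - xi)
        out[k] = inv_cdf(s)                # inversion to the target marginal
    return out
```

Choosing innovations V_k concentrated near zero yields a slowly wandering background and hence strong positive autocorrelation in the foreground, while more dispersed innovations weaken it; this is the kind of “physical” control over the autocorrelation function referred to in advantage 4 above.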

TES processes have been studied in some detail. The theory of TES processes was developed in Melamed (1991, 1997) and Jagerman and Melamed (1992a, 1992b, 1994); TES fitting is presented in Jelenkovic and Melamed (1995); TES-based forecasting is described in Jagerman and Melamed (1995); and software support for TES modeling is described in Hill and Melamed (1995). A detailed overview of TES processes can be found in Melamed (1993), and a description of TES applications in various domains appears in Melamed and Hill (1995). Here we shall present an overview of the TES class at an elementary level. For complete details, the previous references are recommended.