Time Series Analysis Essay

2.3 Time series models

A time series is an ordered sequence of values of a variable at equally spaced time intervals. Time series occur frequently in industrial data. The essential difference between modelling data with time series methods and with other methods is that time series analysis accounts for the fact that data points taken over time may have an internal structure, such as autocorrelation, trend or seasonal variation, that should be accounted for. A time series model explains a variable in terms of its own past and a random disturbance term.

Special attention is paid to exploring the historical trends and patterns (such as seasonality) of the series, and to predicting its future values based on the trends and patterns identified in the model. Since time series models require only historical observations of a variable, they are less costly in data collection and model estimation. Time series models can broadly be categorized into linear and nonlinear models. Linear models depend linearly on previous data points.

They include the autoregressive (AR) models, the integrated (I) models and the moving average (MA) models. The general autoregressive model of order p, AR(p), can be written as

$$X_t = \phi_1 X_{t-1} + \phi_2 X_{t-2} + \dots + \phi_p X_{t-p} + \varepsilon_t,$$

and the moving average model of order q, MA(q), as

$$X_t = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2} + \dots + \theta_q \varepsilon_{t-q},$$

where $\{\varepsilon_t\}$ is a white-noise sequence with zero mean and variance $\sigma^2$. The autoregressive (AR) models were first introduced by Yule (1927), while the moving average process was developed by Slutzky (1937). Combinations of these ideas produce the autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models. The process $\{X_t\}$ is an autoregressive moving average process of order (p, q), denoted ARMA(p, q), if $\{X_t\}$ is stationary and if for every t

$$X_t - \phi_1 X_{t-1} - \dots - \phi_p X_{t-p} = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \dots + \theta_q \varepsilon_{t-q},$$

where $\{\varepsilon_t\} \sim WN(0, \sigma^2)$. Thus $X_t$ is linearly related to the p most recent observations $X_{t-1}, \dots, X_{t-p}$, the q most recent forecast errors $\varepsilon_{t-1}, \dots, \varepsilon_{t-q}$ and the current disturbance $\varepsilon_t$. A non-stationary ARMA(p, q) process which requires differencing d times before it becomes stationary is said to follow an autoregressive integrated moving average model of order (p, d, q), abbreviated ARIMA(p, d, q). The difference operator $\nabla$, when applied to the entry $X_t$, yields the difference $\nabla X_t = X_t - X_{t-1}$.
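To make these definitions concrete, the following is a minimal Python sketch (not part of the original text) that simulates an ARMA(1,1) process and shows how a single differencing step turns an integrated series back into a stationary one; the coefficient values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: simulate an ARMA(1,1) process and difference an
# integrated (non-stationary) version of it. Coefficient values are
# illustrative, not taken from the text.
rng = np.random.default_rng(0)

phi, theta, sigma = 0.6, 0.4, 1.0   # AR, MA and noise parameters
T = 500

eps = rng.normal(0.0, sigma, T)     # white-noise disturbances
x = np.zeros(T)
for t in range(1, T):
    # X_t = phi * X_{t-1} + eps_t + theta * eps_{t-1}
    x[t] = phi * x[t - 1] + eps[t] + theta * eps[t - 1]

# An ARIMA(1,1,1) path is the cumulative sum of the ARMA(1,1) series;
# differencing once (d = 1) recovers a stationary series.
y = np.cumsum(x)
dy = np.diff(y)                      # nabla Y_t = Y_t - Y_{t-1}
print(dy.var(), x[1:].var())         # comparable variances after differencing
```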

Nonlinear time series models are able to capture cyclicity and asymmetry, as well as higher moments such as skewness and kurtosis. They include the bilinear models introduced by Subba Rao and Gabr (1984), the exponential autoregressive (EAR) models introduced by Ozaki and Oda (1978) and the autoregressive conditional heteroscedastic (ARCH) models introduced by Engle (1982). The general bilinear model is given by

$$X_t = \sum_{i=1}^{p} a_i X_{t-i} + \sum_{j=1}^{q} c_j e_{t-j} + \sum_{i=1}^{m} \sum_{j=1}^{k} b_{ij} X_{t-i} e_{t-j} + e_t,$$

where $\{e_t\}$ is a sequence of i.i.d. random variables, usually but not always with zero mean and variance $\sigma_e^2$, and $a_i$, $c_j$ and $b_{ij}$ are model parameters.
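As a small illustration, the sketch below simulates a simple diagonal bilinear model, a special case of the general form above; the parameter values are illustrative assumptions rather than estimates from any data set.

```python
import numpy as np

# Minimal sketch of a simple diagonal bilinear model
#   X_t = a * X_{t-1} + b * X_{t-1} * e_{t-1} + e_t,
# a special case of the general bilinear form above. The parameter
# values are illustrative assumptions.
rng = np.random.default_rng(1)

a, b, T = 0.4, 0.3, 1000
e = rng.normal(0.0, 1.0, T)          # i.i.d. disturbances
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + b * x[t - 1] * e[t - 1] + e[t]

# The product term X_{t-1} * e_{t-1} is what makes the model nonlinear;
# it can generate asymmetry and heavier tails than a linear AR model.
print(x.mean(), x.var())
```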

The EAR model is given by

$$X_t = \sum_{i=1}^{p} \left( \phi_i + \pi_i \, e^{-\gamma X_{t-1}^2} \right) X_{t-i} + \varepsilon_t,$$

and the autoregressive conditional heteroscedastic (ARCH) model by

$$y_t = x_t' \beta + \varepsilon_t, \qquad \varepsilon_t \mid \Psi_{t-1} \sim N(0, h_t), \qquad h_t = \alpha_0 + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2,$$

for t = 1, ..., T, where $x_t$ is a $k \times 1$ vector of exogenous variables and $\beta$ is a $k \times 1$ vector of regression parameters.

2.4 Autoregressive moving average (ARMA) models

The ARMA model is expressed as ARMA(p, q), where p is the number of autoregressive parameters and q is the number of moving average parameters. It is defined as

$$X_t = \phi_1 X_{t-1} + \dots + \phi_p X_{t-p} + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \dots + \theta_q \varepsilon_{t-q}.$$

The basic assumption in estimating the ARMA coefficients is that the data are stationary, that is, that trend or seasonality does not affect the variance.

2.4.1 Model identification for ARMA models

This involves selecting the most appropriate lags for the AR and MA parts, as well as determining whether the variable requires first-differencing to induce stationarity. The ACF and PACF are used to identify the best model. A consistent estimator of the ACF is the sample autocorrelation function. For any set of observations $\{x_1, \dots, x_n\}$, the sample ACF is computed as

$$\hat{\rho}(h) = \frac{\hat{\gamma}(h)}{\hat{\gamma}(0)}, \qquad \hat{\gamma}(h) = \frac{1}{n} \sum_{t=1}^{n-|h|} \left( x_{t+|h|} - \bar{x} \right) \left( x_t - \bar{x} \right).$$

The partial autocorrelation function (PACF) of an ARMA process is the function $\alpha(\cdot)$ defined by the equations

$$\alpha(0) = 1 \quad \text{and} \quad \alpha(h) = \phi_{hh}, \; h \ge 1,$$

where $\phi_{hh}$ is the last component of $\phi_h = \Gamma_h^{-1} \gamma_h$, with $\Gamma_h = \left[ \gamma(i-j) \right]_{i,j=1}^{h}$ and $\gamma_h = \left( \gamma(1), \dots, \gamma(h) \right)'$.
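In practice both functions are estimated from the data (the sample PACF is defined formally in the next paragraph). The sketch below, a minimal illustration with assumed helper names, computes the sample ACF directly from the formula above and the sample PACF as the last component of $\hat{\Gamma}_h^{-1} \hat{\gamma}_h$.

```python
import numpy as np

# Minimal sketch (assumed helper names): sample ACF as defined above and
# sample PACF as the last component of Gamma_h^{-1} gamma_h.
def sample_acvf(x, h):
    """gamma_hat(h) = (1/n) * sum_{t=1}^{n-h} (x_{t+h} - xbar)(x_t - xbar)."""
    n, xbar = len(x), np.mean(x)
    return np.sum((x[h:] - xbar) * (x[:n - h] - xbar)) / n

def sample_acf(x, h):
    return sample_acvf(x, h) / sample_acvf(x, 0)

def sample_pacf(x, h):
    if h == 0:
        return 1.0
    gamma = np.array([sample_acvf(x, k) for k in range(h + 1)])
    Gamma = np.array([[gamma[abs(i - j)] for j in range(h)] for i in range(h)])
    phi_h = np.linalg.solve(Gamma, gamma[1:h + 1])   # phi_h = Gamma_h^{-1} gamma_h
    return phi_h[-1]                                  # last component phi_hh

# Example: for an AR(1) series the PACF should cut off after lag 1.
rng = np.random.default_rng(2)
x = np.zeros(300)
for t in range(1, 300):
    x[t] = 0.7 * x[t - 1] + rng.normal()
print([round(sample_acf(x, h), 2) for h in range(4)])
print([round(sample_pacf(x, h), 2) for h in range(4)])
```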

For any set of observations $\{x_1, \dots, x_n\}$ with $x_i \ne x_j$ for some i and j, the sample PACF is given by

$$\hat{\alpha}(0) = 1 \quad \text{and} \quad \hat{\alpha}(h) = \hat{\phi}_{hh}, \; h \ge 1,$$

where $\hat{\phi}_{hh}$ is the last component of $\hat{\phi}_h = \hat{\Gamma}_h^{-1} \hat{\gamma}_h$.

2.4.2 Model estimation for ARMA models

The parameters of the selected model can be estimated consistently by least squares or by maximum likelihood.

2.4.2.1 Conditional least squares

Least squares estimation conditioned on the first p observations becomes straightforward in the case of pure AR models, leading to linear least squares. For example, for the AR(1) process with zero mean, $X_t = \phi X_{t-1} + \varepsilon_t$, conditioning on the first value $x_1$ reduces estimation to the linear problem of minimizing

$$\sum_{t=2}^{T} \left( x_t - \phi x_{t-1} \right)^2,$$

leading to the usual estimator

$$\hat{\phi} = \frac{\sum_{t=2}^{T} x_t x_{t-1}}{\sum_{t=2}^{T} x_{t-1}^2},$$

which is consistent and asymptotically normal. In a general model with an MA component the optimization problem is nonlinear. For example, to estimate the parameter $\theta$ of the MA(1) process $X_t = \varepsilon_t + \theta \varepsilon_{t-1}$, substituting the recursion $\varepsilon_t = x_t - \theta \varepsilon_{t-1}$ into the sum of squared errors gives

$$S(\theta) = \sum_{t} \varepsilon_t^2(\theta) = \sum_{t} \left( x_t - \theta \, \varepsilon_{t-1}(\theta) \right)^2,$$

which is a nonlinear function of $\theta$. Common nonlinear optimization algorithms such as Gauss-Newton can then be applied in order to obtain the estimates.

2.4.2.2 Maximum likelihood estimation

The ML estimator conditional on the first p values is equal to the conditional LS estimator.
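As a small illustration of the conditional least squares discussion, the sketch below estimates $\phi$ for a simulated zero-mean AR(1) with the closed-form estimator above, and shows why the MA(1) sum of squares must instead be minimized numerically; the simulated data and parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: conditional least squares for a zero-mean AR(1),
#   phi_hat = sum_{t=2}^{T} x_t x_{t-1} / sum_{t=2}^{T} x_{t-1}^2.
rng = np.random.default_rng(3)

phi_true, T = 0.7, 1000
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi_true * x[t - 1] + rng.normal()

phi_hat = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)
print(phi_hat)   # should be close to 0.7 for a long series

# For an MA(1) the residuals eps_t(theta) = x_t - theta * eps_{t-1}(theta)
# must be built recursively, so the sum of squares is nonlinear in theta
# and has to be minimized numerically (e.g. by a Gauss-Newton type routine).
def ma1_sse(theta, x):
    eps, sse = 0.0, 0.0
    for xt in x:
        eps = xt - theta * eps      # recursive innovation
        sse += eps ** 2
    return sse
```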

For example, returning to the AR(1) specification, we substitute the innovations $\varepsilon_t = x_t - \phi x_{t-1}$ into the ML principle. Taking logarithms, we obtain the log-likelihood conditional on the first value $x_1$:

$$\ell(\phi, \sigma^2 \mid x_1) = -\frac{T-1}{2} \log\left( 2\pi\sigma^2 \right) - \frac{1}{2\sigma^2} \sum_{t=2}^{T} \left( x_t - \phi x_{t-1} \right)^2. \qquad (*)$$

The maximization of this function gives the LS estimator. Instead of setting initial conditions, we can compute the unconditional likelihood. For an AR(1) model, the joint density function can be decomposed as

$$f(x_1, x_2, \dots, x_T) = f(x_1) \, f(x_2, \dots, x_T \mid x_1),$$

where the marginal distribution of $x_1$ is normal with zero mean, if $|\phi| < 1$, and variance $\sigma^2 / (1 - \phi^2)$. Then the exact log-likelihood under the normality assumption is

$$\ell(\phi, \sigma^2) = \log f(x_1) + \ell(\phi, \sigma^2 \mid x_1), \qquad \log f(x_1) = -\frac{1}{2} \log\!\left( \frac{2\pi\sigma^2}{1 - \phi^2} \right) - \frac{(1 - \phi^2) \, x_1^2}{2\sigma^2},$$

where the second term is equation (*) above. The exact likelihood for a general ARMA model is then the combination of the conditional likelihood and the unconditional probability density function of the initial values.

2.4.3 Model selection for ARMA models

Once a set of models has been identified and estimated, the best model is selected among them. In general, the model which minimizes a certain criterion function is selected. A number of criteria have been proposed in the literature to evaluate the fit of the model against the number of parameters.
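The sketch below writes the exact AR(1) Gaussian log-likelihood as $\log f(x_1)$ plus the conditional term (*) and maximizes it numerically; the use of scipy's generic optimizer and the parameter names are illustrative assumptions, not part of the original text.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch: exact Gaussian log-likelihood of a zero-mean AR(1),
# written as log f(x_1) plus the conditional log-likelihood (*) above.
def ar1_exact_loglik(params, x):
    phi, sigma2 = params
    if not (-1 < phi < 1) or sigma2 <= 0:
        return -np.inf
    T = len(x)
    # log f(x_1): x_1 ~ N(0, sigma2 / (1 - phi^2))
    ll = -0.5 * np.log(2 * np.pi * sigma2 / (1 - phi ** 2)) \
         - (1 - phi ** 2) * x[0] ** 2 / (2 * sigma2)
    # conditional log-likelihood (*), given x_1
    resid = x[1:] - phi * x[:-1]
    ll += -0.5 * (T - 1) * np.log(2 * np.pi * sigma2) \
          - np.sum(resid ** 2) / (2 * sigma2)
    return ll

rng = np.random.default_rng(4)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.5 * x[t - 1] + rng.normal()

res = minimize(lambda p: -ar1_exact_loglik(p, x), x0=[0.1, 1.0],
               method="Nelder-Mead")
print(res.x)   # approximate ML estimates of (phi, sigma^2)
```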

The most widely applied model selection criteria are the Akaike information criterion, AIC (Akaike, 1974), and the Schwarz information criterion, SIC (Schwarz, 1978), given by

$$AIC = \log \hat{\sigma}^2 + \frac{2k}{T} \quad \text{and} \quad SIC = \log \hat{\sigma}^2 + \frac{k \log T}{T},$$

where $\hat{\sigma}^2$ is the estimated residual variance, k is the number of estimated ARMA parameters (p + q) and T is the number of observations used for estimation. Another criterion is Akaike's final prediction error (FPE), defined by

$$FPE = V \left( \frac{1 + d/N}{1 - d/N} \right),$$

where V is the loss function, d is the number of estimated parameters and N is the number of values in the estimation data set. The final prediction error is asymptotic for $d \ll N$.
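The sketch below computes these criteria from the residuals of a fitted ARMA(p, q) model using the formulas above; `resid`, `p` and `q` are placeholders for whatever model was actually estimated, and taking V as the mean squared residual is an assumption made for illustration.

```python
import numpy as np

# Minimal sketch: AIC, SIC and FPE from the residuals of a fitted
# ARMA(p, q) model. `resid`, p and q are placeholders.
def information_criteria(resid, p, q):
    T = len(resid)
    k = p + q                         # number of estimated ARMA parameters
    sigma2_hat = np.mean(resid ** 2)  # residual variance estimate
    aic = np.log(sigma2_hat) + 2 * k / T
    sic = np.log(sigma2_hat) + k * np.log(T) / T
    # FPE with V taken as the quadratic loss (mean squared residual),
    # d = k estimated parameters and N = T observations
    fpe = sigma2_hat * (1 + k / T) / (1 - k / T)
    return aic, sic, fpe

# The candidate ARMA(p, q) model with the smallest criterion value is selected.
```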
