Similar Literature
20 similar documents found.
1.
The main thrust of this study is to consider the problem of simultaneous prediction of actual and average values of the simultaneous equations model through the target function of Shalabh (Bulletin of International Statistical Institute, 1995, 56, 1375–1390). We focus on the predictive performance of the two-stage ridge estimator, motivated by the need to mitigate the ill-conditioning arising from multicollinearity. An optimal biasing parameter of the two-stage ridge estimator is derived by minimizing the prediction mean square error. In addition, an optimal estimator for the weight of the observed value in the target function is obtained theoretically. The results of a numerical example and a Monte Carlo experiment show a dramatic improvement in the predictive ability of the two-stage ridge estimator.
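Entry 1's idea of choosing a ridge biasing parameter by minimizing prediction mean square error can be illustrated with a single-equation ridge regression. This is only a simplified sketch, not the paper's two-stage estimator for simultaneous equations; the grid search, the hold-out split and the simulated design are assumptions made for illustration.

```python
# Minimal single-equation analogue: ridge coefficients over a grid of biasing
# parameters k, with k chosen to minimize hold-out prediction MSE.
# (The paper derives an optimal k analytically for a two-stage estimator;
# this sketch only illustrates the ridge/PMSE idea.)
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=n)      # induce multicollinearity
y = X @ np.array([1.0, 1.0, 0.5, 0.0, -0.5]) + rng.normal(size=n)

X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]

def ridge(X, y, k):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

ks = np.logspace(-3, 2, 50)
pmse = [np.mean((y_te - X_te @ ridge(X_tr, y_tr, k)) ** 2) for k in ks]
k_opt = ks[int(np.argmin(pmse))]
print(f"biasing parameter minimizing hold-out PMSE: {k_opt:.4f}")
```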

2.
In this paper, we investigate the performance of a class of M-estimators for both symmetric and asymmetric conditional heteroscedastic models in the prediction of value-at-risk. The class of estimators includes the least absolute deviation (LAD), Huber's, Cauchy and B-estimators, as well as the well-known quasi-maximum likelihood estimator (QMLE). We use a wide range of summary statistics to compare both the in-sample and out-of-sample VaR estimates of three well-known stock indices. Our empirical study suggests that, in general, the Cauchy, Huber and B-estimators perform better in predicting one-step-ahead VaR than the commonly used QMLE. Copyright © 2011 John Wiley & Sons, Ltd.

3.
We introduce a new strategy for the prediction of linear temporal aggregates; we call it ‘hybrid’ and study its performance using asymptotic theory. This scheme consists of carrying out model parameter estimation with data sampled at the highest available frequency and the subsequent prediction with data and models aggregated according to the forecasting horizon of interest. We develop explicit expressions that approximately quantify the mean square forecasting errors associated with the different prediction schemes and that take into account the estimation error component. These approximate estimates indicate that the hybrid forecasting scheme tends to outperform the so-called ‘all-aggregated’ approach and, in some instances, the ‘all-disaggregated’ strategy that is known to be optimal when model selection and estimation errors are neglected. Unlike other related approximate formulas existing in the literature, those proposed in this paper are totally explicit and require neither assumptions on the second-order stationarity of the sample nor Monte Carlo simulations for their evaluation. Copyright © 2014 John Wiley & Sons, Ltd.

4.
This research proposes a prediction model of multistage financial distress (MSFD) after considering contextual and methodological issues regarding sampling, feature and model selection criteria. Financial distress is defined as a three-stage process showing different nature and intensity of financial problems. It is argued that the applied definition of distress is independent of the legal framework and that its predictability would provide more practical solutions. The final sample is selected after industry adjustments and oversampling the data. A wrapper subset data mining approach is applied to extract the most relevant features from financial statement and stock market indicators. An ensemble approach using a combination of DTNB (decision table and naïve Bayes hybrid model), LMT (logistic model tree) and A2DE (averaged 2-dependence estimators) Bayesian models is used to develop the final prediction model. The performance of all the models is evaluated using 10-fold cross-validation. Results showed that the proposed model predicted MSFD with 84.06% accuracy. This accuracy increased to 89.57% when a 33.33% cut-off value was considered. Hence the proposed model is accurate and reliable for identifying the true nature and intensity of financial problems regardless of the contextual legal framework.
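The ensemble-plus-cross-validation workflow of entry 4 can be sketched with scikit-learn. DTNB, LMT and A2DE are Weka-specific learners, so the classifiers below (a decision tree, logistic regression and naïve Bayes) are loose stand-ins; the simulated imbalanced dataset and the soft-voting combination are also illustrative assumptions.

```python
# Loose analogue of the ensemble + 10-fold cross-validation workflow.
# The paper uses Weka's DTNB, LMT and A2DE; the classifiers below are
# stand-ins chosen only to illustrate the pipeline.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20, weights=[0.7, 0.3],
                           random_state=0)   # imbalanced, as distress data often is

ensemble = VotingClassifier(
    estimators=[("tree", DecisionTreeClassifier(max_depth=5)),
                ("logit", LogisticRegression(max_iter=1000)),
                ("nb", GaussianNB())],
    voting="soft",
)
scores = cross_val_score(ensemble, X, y, cv=10, scoring="accuracy")
print(f"10-fold CV accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```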

5.
We look at the problem of forecasting time series which are not normally distributed. An overall approach is suggested which works both on simulated data and on real data sets. The idea is intuitively attractive and has the considerable advantage that it can readily be understood by non-specialists. Our approach is based on ARMA methodology and our models are estimated via a likelihood procedure which takes into account the non-normality of the data. We examine in some detail the circumstances in which taking explicit account of the non-normality improves the forecasting process in a significant way. Results from several simulated and real series are included.
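The likelihood-based treatment of non-normal data in entry 5 can be sketched for the simplest case: an AR(1) with Student-t innovations estimated by maximizing the exact (non-Gaussian) likelihood. The model order, the optimizer settings and the simulated data are assumptions; the paper works with general ARMA models.

```python
# Minimal sketch: AR(1) with Student-t innovations, estimated by maximizing
# the non-Gaussian likelihood (conditional on the first observation).
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
n, phi_true = 500, 0.6
eps = rng.standard_t(4, size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + eps[t]

def neg_loglik(params, y):
    c, phi, log_sigma, log_nu = params
    sigma, nu = np.exp(log_sigma), 2.0 + np.exp(log_nu)   # keep nu > 2
    resid = y[1:] - c - phi * y[:-1]
    return -np.sum(stats.t.logpdf(resid / sigma, df=nu) - np.log(sigma))

res = optimize.minimize(neg_loglik, x0=[0.0, 0.0, 0.0, 0.0], args=(y,),
                        method="Nelder-Mead")
phi_hat, sigma_hat, nu_hat = res.x[1], np.exp(res.x[2]), 2.0 + np.exp(res.x[3])
print(f"phi ~ {phi_hat:.3f}, sigma ~ {sigma_hat:.3f}, nu ~ {nu_hat:.1f}")
```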

6.
A widely used approach to evaluating volatility forecasts uses a regression framework which measures the bias and variance of the forecast. We show that the associated test for bias is inappropriate before introducing a more suitable procedure which is based on the test for bias in a conditional mean forecast. Although volatility has been the most common measure of the variability in a financial time series, in many situations confidence interval forecasts are required. We consider the evaluation of interval forecasts and present a regression-based procedure which uses quantile regression to assess quantile estimator bias and variance. We use exchange rate data to illustrate the proposal by evaluating seven quantile estimators, one of which is a new non-parametric autoregressive conditional heteroscedasticity quantile estimator. The empirical analysis shows that the new evaluation procedure provides useful insight into the quality of quantile estimators. Copyright © 1999 John Wiley & Sons, Ltd.
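A regression-based check of quantile-forecast bias of the kind described in entry 6 can be sketched with statsmodels' quantile regression: regress the realized series on the quantile forecasts at the same probability level and inspect whether the intercept is near 0 and the slope near 1. The simulated forecasts and the chosen level are assumptions; the paper's exact procedure may differ in detail.

```python
# Sketch: assess bias of a 5% quantile forecast by quantile regression of the
# realizations on the forecasts; an unbiased forecast suggests an intercept
# near 0 and a slope near 1 at the same quantile level.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
sigma = 0.5 + 0.3 * np.abs(rng.normal(size=n))    # time-varying volatility
y = sigma * rng.normal(size=n)                    # realized returns
q_forecast = sigma * (-1.645)                     # hypothetical 5% quantile forecast

X = sm.add_constant(q_forecast)
fit = sm.QuantReg(y, X).fit(q=0.05)
print(fit.params)   # intercept and slope; values near (0, 1) indicate little bias
```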

7.
In this paper a nonparametric approach for estimating mixed-frequency forecast equations is proposed. In contrast to the popular MIDAS approach that employs an (exponential) Almon or Beta lag distribution, we adopt a penalized least-squares estimator that imposes some degree of smoothness on the lag distribution. This estimator is related to nonparametric estimation procedures based on cubic splines and resembles the popular Hodrick–Prescott filtering technique for estimating a smooth trend function. Monte Carlo experiments suggest that the nonparametric estimator may provide more reliable and flexible approximations to the actual lag distribution than the conventional parametric MIDAS approach based on exponential lag polynomials. Parametric and nonparametric methods are applied to assess the predictive power of various daily indicators for forecasting monthly inflation rates. It turns out that the commodity price index is a useful predictor for inflation rates 20–30 days ahead with a hump-shaped lag distribution. Copyright © 2015 John Wiley & Sons, Ltd.
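The penalized least-squares idea in entry 7, shrinking the lag distribution toward smoothness much like the Hodrick–Prescott filter, can be written down directly: minimize ||y − Xβ||² + λ||Dβ||² with D a second-difference matrix, giving β̂ = (X'X + λD'D)⁻¹X'y. The sketch below uses simulated daily regressors and a fixed λ, both assumptions for illustration rather than the paper's estimator.

```python
# Penalized least squares for a smooth lag distribution:
#   beta_hat = (X'X + lam * D'D)^{-1} X'y,
# where D takes second differences of the lag coefficients (HP-filter-like penalty).
import numpy as np

rng = np.random.default_rng(3)
n, n_lags = 300, 30
X = rng.normal(size=(n, n_lags))                    # e.g. 30 daily lags of an indicator
true_beta = np.exp(-0.5 * ((np.arange(n_lags) - 10) / 4.0) ** 2)  # hump-shaped lags
y = X @ true_beta + rng.normal(size=n)

# Second-difference matrix D of shape (n_lags - 2, n_lags)
D = np.zeros((n_lags - 2, n_lags))
for i in range(n_lags - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]

lam = 50.0                                          # smoothness penalty (assumed)
beta_hat = np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)
print(np.round(beta_hat[:12], 2))                   # smooth, hump-shaped estimates
```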

8.
I provide some philosophical groundwork for the recently proposed ‘trans-Planckian censorship’ conjecture in theoretical physics. In particular, I argue that structure formation in early universe cosmology is, at least as we typically understand it, autonomous with regards to quantum gravity, the high energy physics that governs the Planck regime in our universe. Trans-Planckian censorship is then seen as a means of rendering this autonomy an empirical constraint within ongoing quantum gravity research.

9.
Accurate demand prediction is of great importance in the electricity supply industry. Electricity cannot be stored, and generating plant must be scheduled well in advance to meet future demand. Up to now, where online information about external conditions is unavailable, time series methods applied to the historical demand series have been used for short-term demand prediction. These have drawbacks, both in their sensitivity to changing weather conditions and in their poor modelling of the daily/weekly business cycles. To overcome these problems a framework has been constructed whereby forecasts from different prediction methods and different forecasting origins can be selected and combined, solely on the basis of recent forecasting performance, with no a priori assumptions of demand behaviour. This added flexibility in univariate forecasting provides a significant improvement in prediction accuracy.
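The combination rule in entry 9, which weights forecasts purely on recent performance, can be sketched as inverse-MSE weighting over a rolling window. The window length and the inverse-MSE rule are illustrative assumptions rather than the paper's exact selection and combination scheme.

```python
# Sketch: combine competing forecasts with weights inversely proportional to
# each method's mean squared error over a recent rolling window.
import numpy as np

def combine_forecasts(forecasts, actuals, window=48):
    """forecasts: (n_methods, t) past forecasts; actuals: (t,) realized demand.
    Returns combination weights for the next forecast."""
    recent_err = forecasts[:, -window:] - actuals[-window:]
    mse = np.mean(recent_err ** 2, axis=1)
    inv = 1.0 / np.maximum(mse, 1e-12)
    return inv / inv.sum()

rng = np.random.default_rng(4)
actuals = 100 + 10 * np.sin(np.arange(200) * 2 * np.pi / 48) + rng.normal(size=200)
f1 = actuals + rng.normal(scale=1.0, size=200)   # accurate method
f2 = actuals + rng.normal(scale=3.0, size=200)   # noisier method
w = combine_forecasts(np.vstack([f1, f2]), actuals)
print(np.round(w, 3))   # most weight goes to the recently more accurate method
```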

10.
We develop an ordinary least squares estimator of the long-memory parameter from a fractionally integrated process that is an alternative to the Geweke and Porter-Hudak (1983) estimator. Using the wavelet transform of a fractionally integrated process, we establish a log-linear relationship between the wavelet coefficients' variance and the scaling parameter equal to the long-memory parameter. This log-linear relationship yields a consistent ordinary least squares estimator of the long-memory parameter when the wavelet coefficients' population variance is replaced by their sample variance. We derive the small-sample bias and variance of the ordinary least squares estimator and test it against the GPH estimator and the McCoy–Walden maximum likelihood wavelet estimator by conducting a number of Monte Carlo experiments. Based upon the criterion of choosing the estimator which minimizes the mean squared error, the wavelet OLS approach was superior to the GPH estimator, but inferior to the McCoy–Walden wavelet estimator for the processes simulated. However, given the simplicity of programming and running the wavelet OLS estimator and its statistical inference of the long-memory parameter, we feel the general practitioner will be attracted to the wavelet OLS estimator. Copyright © 1999 John Wiley & Sons, Ltd.
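The wavelet OLS idea in entry 10 can be sketched with PyWavelets: compute the sample variance of the detail coefficients at each level, regress its log on the level, and read the long-memory parameter off the slope. The wavelet choice, the slope-to-d conversion (slope ≈ 2d under one common DWT normalization; sign and scale conventions differ across papers) and the white-noise demo series are all assumptions, not the paper's exact estimator.

```python
# Sketch of a wavelet-based OLS estimate of the long-memory parameter d:
# regress log2(sample variance of detail coefficients) on the decomposition
# level; under one common DWT convention the slope is roughly 2d.
import numpy as np
import pywt

def wavelet_ols_d(x, wavelet="db4", max_level=6):
    coeffs = pywt.wavedec(x, wavelet, level=max_level)
    details = coeffs[1:]                 # [cD_max_level, ..., cD_1]
    levels = np.arange(max_level, 0, -1)
    log_var = np.log2([np.var(d) for d in details])
    slope, _ = np.polyfit(levels, log_var, 1)
    return slope / 2.0                   # slope ~ 2d (convention assumed)

x = np.random.default_rng(5).normal(size=4096)    # white noise: true d = 0
print(f"estimated d ~ {wavelet_ols_d(x):.3f}")    # should be close to zero
```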

11.
This study proposes Gaussian processes to forecast daily hotel occupancy at a city level. Unlike other studies in the tourism demand prediction literature, the hotel occupancy rate is predicted on a daily basis and 45 days ahead of time using online hotel room price data. A predictive framework is introduced that highlights feature extraction and selection of the independent variables. This approach shows that the dependence on internal hotel occupancy data can be removed by making use of a proxy measure for hotel occupancy rate at a city level. Six forecasting methods are investigated, including linear regression, autoregressive integrated moving average and recent machine learning methods. The results indicate that Gaussian processes offer the best tradeoff between accuracy and interpretation by providing prediction intervals in addition to point forecasts. It is shown how the proposed framework improves managerial decision making in tourism planning.
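Entry 11's argument for Gaussian processes, point forecasts plus prediction intervals, can be sketched with scikit-learn's GaussianProcessRegressor. The kernel, the toy feature (a hypothetical room-price proxy) and the interval construction are illustrative assumptions, not the paper's specification.

```python
# Sketch: Gaussian process regression returning both a point forecast and a
# prediction interval, the trade-off highlighted in the abstract.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)
X = rng.uniform(50, 200, size=(120, 1))                     # hypothetical room-price proxy
y = 0.9 - 0.003 * X[:, 0] + 0.05 * rng.normal(size=120)     # city-level occupancy rate

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

X_new = np.array([[80.0], [150.0]])
mean, std = gp.predict(X_new, return_std=True)
for m, s in zip(mean, std):
    print(f"forecast {m:.3f}, ~95% interval [{m - 1.96 * s:.3f}, {m + 1.96 * s:.3f}]")
```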

12.
In this paper a data analysis tool for analyzing highly correlated time series data is suggested. The main objective is to unify multiple time series into a single series and then apply a univariate method for the purpose of prediction. This method is especially efficient for analyzing multiple time series with sparse data. Several time series of relative demand for black and white television receivers in various countries are analyzed and quite accurate predictions are obtained.

13.
We use dynamic factors and neural network models to identify current and past states (instead of future) of the US business cycle. In the first step, we reduce noise in the data by using a moving average filter. Dynamic factors are then extracted from a large-scale data set consisting of more than 100 variables. In the last step, these dynamic factors are fed into the neural network model for predicting business cycle regimes. We show that our proposed method follows US business cycle regimes quite accurately in-sample and out-of-sample without taking account of the historical data availability. Our results also indicate that noise reduction is an important step for business cycle prediction. Furthermore, using pseudo real-time and vintage data, we show that our neural network model identifies turning points quite accurately and very quickly in real time.
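The two-stage pipeline of entry 13, extracting factors from a large panel and feeding them to a neural network classifier of regimes, can be sketched with principal components and scikit-learn's MLPClassifier standing in for the dynamic factor model and network specification used in the paper. The moving-average window, the simulated panel and the stylized regime indicator are assumptions.

```python
# Sketch: noise reduction (moving average) -> factor extraction (PCA as a
# stand-in for dynamic factors) -> neural network classification of regimes.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
T, N = 400, 100                                              # 100 indicators, 400 months
regime = (np.sin(np.arange(T) / 25.0) > 0.5).astype(int)     # stylized recession dummy
panel = regime[:, None] * 0.8 + rng.normal(size=(T, N))      # panel driven by the cycle

window = 3                                                   # simple moving-average filter
smoothed = np.vstack([np.convolve(panel[:, j], np.ones(window) / window, mode="same")
                      for j in range(N)]).T

factors = PCA(n_components=4).fit_transform(smoothed)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(factors[:300], regime[:300])
print("out-of-sample accuracy:", clf.score(factors[300:], regime[300:]))
```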

14.
The problem of multicollinearity produces undesirable effects on ordinary least squares (OLS), Almon and Shiller estimators for distributed lag models. Therefore, we introduce a Liu-type Shiller estimator to deal with multicollinearity for distributed lag models. Moreover, we theoretically compare the predictive performance of the Liu-type Shiller estimator with OLS and the Shiller estimators by the prediction mean square error criterion under the target function. Furthermore, an extensive Monte Carlo simulation study is carried out to evaluate the predictive performance of the Liu-type Shiller estimator.
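For entry 14, the ordinary (single-equation) Liu estimator β̂(d) = (X'X + I)⁻¹(X'y + d·β̂_OLS) conveys the basic idea behind a Liu-type correction for multicollinearity. The Shiller smoothness prior and the distributed-lag structure of the paper are not reproduced here, and the grid of d values is an assumption.

```python
# The ordinary Liu estimator as a single-equation illustration:
#   beta(d) = (X'X + I)^{-1} (X'y + d * beta_OLS),  typically 0 <= d <= 1.
import numpy as np

def liu_estimator(X, y, d):
    p = X.shape[1]
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.linalg.solve(X.T @ X + np.eye(p), X.T @ y + d * beta_ols)

rng = np.random.default_rng(8)
X = rng.normal(size=(150, 4))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=150)      # near-collinear columns
y = X @ np.array([1.0, 1.0, -0.5, 0.2]) + rng.normal(size=150)
for d in (0.2, 0.5, 0.9):
    print(d, np.round(liu_estimator(X, y, d), 2))
```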

15.
This work proposes a new approach for the prediction of the electricity price based on forecasting aggregated purchase and sale curves. The basic idea is to model the hourly purchase and sale curves, predict them, and find the intersection of the predicted curves in order to obtain the predicted equilibrium market price and volume. Modeling and forecasting of purchase and sale curves is performed by means of functional data analysis methods. More specifically, parametric (FAR) and nonparametric (NPFAR) functional autoregressive models are considered and compared to some benchmarks. An appealing feature of the functional approach is that, unlike other methods, it provides insights into the sale and purchase mechanism connected with the price and demand formation process and can therefore be used for the optimization of bidding strategies. An application to the Italian electricity market (IPEX) is also provided, showing that NPFAR models lead to a statistically significant improvement in forecasting accuracy.

16.
In econometrics, as a rule, the same data set is used to select the model and, conditional on the selected model, to forecast. However, one typically reports the properties of the (conditional) forecast, ignoring the fact that its properties are affected by the model selection (pretesting). This is wrong, and in this paper we show that the error can be substantial. We obtain explicit expressions for this error. To illustrate the theory we consider a regression approach to stock market forecasting, and show that the standard predictions ignoring pretesting are much less robust than naive econometrics might suggest. We also propose a forecast procedure based on the ‘neutral Laplace estimator’, which leads to an improvement over standard model selection procedures. Copyright © 2004 John Wiley & Sons, Ltd.

17.
In this paper we deal with the prediction theory of long-memory time series. The purpose is to derive a general theory of the convergence of moments of the nonlinear least squares estimator so as to evaluate the asymptotic prediction mean squared error (PMSE). The asymptotic PMSE of two predictors is evaluated. The first is defined by the estimator of the differencing parameter, while the second is defined by a fixed differencing parameter: in other words, a parametric predictor of the seasonal autoregressive integrated moving average model. The effects of misspecifying the differencing parameter in a long-memory model are clarified by the asymptotic results relating to the PMSE. The finite-sample behaviour of the predictors and model selection in terms of the PMSE of the two predictors are examined using simulation, and the source of any differences in behaviour is made clear in terms of asymptotic theory. Copyright © 2008 John Wiley & Sons, Ltd.

18.
This study examines a new approach for short-term wind speed and power forecasting based on the mixture of Gaussian hidden Markov models (MoG-HMMs). The proposed approach focuses on the characteristics of wind speed and power in the consecutive hours of previous days. The proposed method is carried out in two steps. In the first step, for the hourly prediction of wind speed, several wind speed features are employed in the MoG-HMM, and in the second step, the results obtained from the first step, along with their characteristics and wind power features, are used to predict wind power. To increase the prediction accuracy, the data used in each step are classified, and then for each class one HMM with its specific parameters is used. The performance of the proposed approach is examined using real NREL data. The results show that the proposed method is more precise than the other examined methods.
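The mixture-of-Gaussians HMM in entry 18 corresponds, in Python, to hmmlearn's GMMHMM. The sketch below only shows fitting such a model to a feature matrix and decoding the hidden regimes; the number of states and mixtures, the feature construction and the two-step wind-speed/wind-power coupling described in the abstract are not reproduced and should be treated as assumptions.

```python
# Sketch: fit a mixture-of-Gaussians HMM to hourly wind-speed features and
# decode the hidden regimes (all settings below are assumptions).
import numpy as np
from hmmlearn.hmm import GMMHMM

rng = np.random.default_rng(9)
hours = 24 * 60
speed = np.abs(6 + 2 * np.sin(np.arange(hours) * 2 * np.pi / 24) + rng.normal(size=hours))
features = np.column_stack([speed, np.gradient(speed)])   # level and hourly change

model = GMMHMM(n_components=3, n_mix=2, covariance_type="diag",
               n_iter=200, random_state=0)
model.fit(features)
states = model.predict(features)
print("decoded regime counts:", np.bincount(states))
```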

19.
Astrologers have exercised self-censorship throughout the centuries in order to fend off criticism. This was largely for religious reasons, but social, political, and ethical motivations also have to be taken into account. This paper explores the main reasons that led astrologers to increase censorship in their writings in the decades that preceded the Church’s regulations and offers some examples of this self-imposed restraint in astrological judgements.

20.
In this paper we apply cointegration and Granger-causality analyses to construct linear and neural network error-correction models for an Austrian Initial Public Offerings IndeX (IPOXATX). We use the significant relationship between the IPOXATX and the Austrian Stock Market Index ATX to forecast the IPOXATX. For prediction purposes we apply augmented feedforward neural networks whose architecture is determined by Sequential Network Construction with the Schwarz Information Criterion as an estimator for the prediction risk. Trading based on the forecasts yields results superior to buy-and-hold or moving average trading strategies in terms of mean-variance considerations.
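The preliminary steps of entry 20, testing for cointegration and Granger causality between two index series, can be sketched with statsmodels. The simulated random-walk series stand in for IPOXATX and ATX, and the neural network error-correction stage of the paper is not shown.

```python
# Sketch: cointegration and Granger-causality checks between two index series
# (simulated stand-ins for IPOXATX and ATX), the first steps described above.
import numpy as np
from statsmodels.tsa.stattools import coint, grangercausalitytests

rng = np.random.default_rng(10)
n = 500
atx = np.cumsum(rng.normal(size=n))                 # random-walk "ATX"
ipox = 0.8 * atx + rng.normal(scale=0.5, size=n)    # cointegrated "IPOXATX"

t_stat, p_value, _ = coint(ipox, atx)
print(f"Engle-Granger cointegration p-value: {p_value:.4f}")

# Does the ATX Granger-cause the IPOXATX? (second column tested against the first)
data = np.column_stack([np.diff(ipox), np.diff(atx)])
gc = grangercausalitytests(data, maxlag=2)
print("lag-2 F-test p-value:", round(gc[2][0]["ssr_ftest"][1], 4))
```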
