Similar articles
 20 similar articles found (search time: 0 ms)
1.
    
It often occurs that no model may be exactly right, and that different portions of the data may favour different models. The purpose of this paper is to propose a new procedure for the detection of regime switches between stationary and nonstationary processes in economic time series and to show its usefulness in economic forecasting. In the proposed procedure, time series observations are divided into several segments, and a stationary or nonstationary autoregressive model is fitted to each segment. The goodness of fit of the global model composed of these local models is evaluated using the corresponding information criterion, and the division which minimizes the information criterion defines the best model. Simulation and forecasting results show the efficacy and limitations of the proposed procedure. Copyright © 2005 John Wiley & Sons, Ltd.
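A minimal numerical sketch of the segmentation idea in this abstract, assuming zero-mean AR(1) local models and a two-segment division; for simplicity the toy regimes differ in innovation variance rather than in stationarity, and all names and data are illustrative:

```python
import numpy as np

def ar1_aic(x):
    """OLS fit of a zero-mean AR(1) and its AIC (2 parameters: phi, sigma^2)."""
    y, X = x[1:], x[:-1]
    phi = (X @ y) / (X @ X)
    resid = y - phi * X
    sigma2 = resid @ resid / len(y)
    return len(y) * np.log(sigma2) + 2 * 2

rng = np.random.default_rng(0)
# Toy series with a regime switch at t = 100: the innovation variance jumps
x = np.concatenate([rng.normal(scale=1.0, size=100),
                    rng.normal(scale=5.0, size=100)])

# Evaluate every admissible division; the best model is the division that
# minimizes the summed information criterion of the local models
best_k = min(range(30, 170), key=lambda k: ar1_aic(x[:k]) + ar1_aic(x[k:]))
```

With more segments the same criterion is summed over all local models and the search runs over all admissible divisions.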

2.
In several countries, some macro-economic variables are not observed frequently (e.g. quarterly) and economic authorities need estimates of these high-frequency figures to make econometric analyses or to follow closely the country's economic growth. Two problems are involved in this context. The first is to make these estimates after observing low-frequency values and some related indicators, and the second is to obtain predictions using just the observed indicators, i.e. before observing a new low-frequency figure. This paper gives a new optimal solution to the first problem, and solves the second using a recursive optimal approach. In the second situation, additionally, statistical tests are developed for detecting structural changes at current periods in the macro-economic variable involved. © 1998 John Wiley & Sons, Ltd.

3.
    
In time-series analysis, a model is rarely pre-specified but rather is typically formulated in an iterative, interactive way using the given time-series data. Unfortunately the properties of the fitted model, and the forecasts from it, are generally calculated as if the model were known in the first place. This is theoretically incorrect, as least squares theory, for example, does not apply when the same data are used to formulate and fit a model. Ignoring prior model selection leads to biases, not only in estimates of model parameters but also in the subsequent construction of prediction intervals. The latter are typically too narrow, partly because they do not allow for model uncertainty. Empirical results also suggest that more complicated models tend to give a better fit but poorer ex-ante forecasts. The reasons behind these phenomena are reviewed. When comparing different forecasting models, the BIC is preferred to the AIC for identifying a model on the basis of within-sample fit, but out-of-sample forecasting accuracy provides the real test. Alternative approaches to forecasting, which avoid conditioning on a single model, include Bayesian model averaging and using a forecasting method which is not model-based but which is designed to be adaptable and robust.
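A small sketch of the AIC-versus-BIC comparison and the out-of-sample "real test" described here, assuming OLS-fitted AR(p) candidates on a simulated AR(1) series; all function names and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulate an AR(1) process; candidate orders are AR(1)..AR(6)
n = 300
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal()

def fit_ar(data, p):
    """OLS fit of an AR(p); returns coefficients (lag 1 first) and residual variance."""
    Y = data[p:]
    X = np.column_stack([data[p - i:len(data) - i] for i in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta, float(np.mean((Y - X @ beta) ** 2))

def info_criterion(data, p, bic=False):
    """Gaussian log-likelihood term plus AIC or BIC penalty."""
    _, s2 = fit_ar(data, p)
    n_eff = len(data) - p
    penalty = p * np.log(n_eff) if bic else 2 * p
    return n_eff * np.log(s2) + penalty

aic_order = min(range(1, 7), key=lambda p: info_criterion(x, p))
bic_order = min(range(1, 7), key=lambda p: info_criterion(x, p, bic=True))

# Out-of-sample check on a holdout: one-step forecasts from the BIC choice
train, test = x[:250], x[250:]
def oos_mse(p):
    beta, _ = fit_ar(train, p)
    hist, errs = list(train), []
    for obs in test:
        pred = sum(b * hist[-i] for i, b in enumerate(beta, start=1))
        errs.append((obs - pred) ** 2)
        hist.append(obs)
    return float(np.mean(errs))
mse_bic = oos_mse(bic_order)
```

Because the BIC penalty grows faster in p than the AIC penalty, the BIC never selects a larger order than the AIC here, illustrating its preference for parsimony.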

4.
    
This paper presents an extension of the Stock and Watson coincident indicator model that allows one to include variables available at different frequencies while taking care of missing observations at any time period. The proposed procedure provides estimates of the unobserved common coincident component, of the unobserved monthly series underlying any included quarterly indicator, and of any missing values in the series. An application to a coincident indicator model for the Portuguese economy is presented. We use monthly indicators from business surveys whose results are published with a very short delay. By using the available data for the monthly indicators and for quarterly real GDP, it becomes possible to produce simultaneously a monthly composite index of coincident indicators and an estimate of the latest quarter real GDP growth well ahead of the release of the first official figures. Copyright © 2005 John Wiley & Sons, Ltd.

5.
  Cited by: 1 (self-citations: 0; citations by others: 1)
This paper discusses how to specify an observable high‐frequency model for a vector of time series sampled at high and low frequencies. To this end we first study how aggregation over time affects both the dynamic components of a time series and their observability, in a multivariate linear framework. We find that the basic dynamic components remain unchanged but some of them, mainly those related to the seasonal structure, become unobservable. Building on these results, we propose a structured specification method built on the idea that the models relating the variables in high and low sampling frequencies should be mutually consistent. After specifying a consistent and observable high‐frequency model, standard state‐space techniques provide an adequate framework for estimation, diagnostic checking, data interpolation and forecasting. An example using national accounting data illustrates the practical application of this method. Copyright © 2008 John Wiley & Sons, Ltd.

6.
A short‐term mixed‐frequency model is proposed to estimate and forecast Italian economic activity fortnightly. We introduce a dynamic one‐factor model with three frequencies (quarterly, monthly, and fortnightly) by selecting indicators that show significant coincident and leading properties and are representative of both demand and supply. We conduct an out‐of‐sample forecasting exercise and compare the prediction errors of our model with those of alternative models that do not include fortnightly indicators. We find that high‐frequency indicators significantly improve the real‐time forecasts of Italian gross domestic product (GDP); this result suggests that models exploiting the information available at different lags and frequencies provide forecasting gains beyond those based on monthly variables alone. Moreover, the model provides a new fortnightly indicator of GDP, consistent with the official quarterly series.

7.
    
The paper presents a comparative real‐time analysis of alternative indirect estimates of monthly euro area employment. In the experiment, quarterly employment is temporally disaggregated using monthly unemployment as the related series. The strategies under comparison exploit sectoral data for the euro area and its six largest member states. The comparison is carried out among univariate temporal disaggregations of the Chow and Lin type and multivariate structural time series models of small and medium size. Specifications in logarithms are also systematically assessed. All multivariate set‐ups, with up to 49 series modelled simultaneously, are estimated via the EM algorithm. The main conclusions are that the mean revision errors of the disaggregated estimates are overall small, that a gain is obtained when the model strategy takes into account information by both sector and member state, and that larger multivariate set‐ups perform very well, with several advantages with respect to simpler models. Copyright © 2014 John Wiley & Sons, Ltd.

8.
Various methods based on smoothing or statistical criteria have been used for constructing disaggregated values compatible with observed annual totals. The present method is based on a time‐series model in a state space form and allows for a prescribed multiplicative trend. It is applied to US GNP data which have been used for comparing methods suggested for this purpose. The model can be extended to include quarterly series, related to the unknown disaggregated values. But as the estimation criteria are based on prediction errors of the aggregated values, the estimated form may not be optimal for reproducing high‐frequency variations of the disaggregated values. Copyright © 1999 John Wiley & Sons, Ltd.
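As a concrete, much-simplified stand-in for this kind of disaggregation (not the paper's state-space model with a multiplicative trend), the classical Boot–Feibes–Lisman smoothing problem can be solved in closed form: choose quarterly values that minimize squared first differences subject to matching the annual totals. All numbers below are illustrative:

```python
import numpy as np

def bfl_disaggregate(annual):
    """Quarterly series whose yearly sums equal `annual`, minimizing the sum
    of squared first differences (Boot-Feibes-Lisman smoothing)."""
    m = len(annual)
    n = 4 * m
    # First-difference operator, shape (n-1, n): row i is e_{i+1} - e_i
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]
    # Aggregation matrix, shape (m, n): each row sums one year's quarters
    C = np.kron(np.eye(m), np.ones(4))
    # KKT system for: min ||D z||^2  subject to  C z = annual
    K = np.block([[2 * D.T @ D, C.T], [C, np.zeros((m, m))]])
    rhs = np.concatenate([np.zeros(n), np.asarray(annual, dtype=float)])
    return np.linalg.solve(K, rhs)[:n]

annual = np.array([100.0, 120.0, 150.0])
quarterly = bfl_disaggregate(annual)
```

The annual constraints hold exactly by construction, while the smoothness objective spreads the within-year profile.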

9.
    
In this article, we propose a regression model for sparse, high‐dimensional aggregated store‐level sales data. The modeling procedure includes two sub‐models, a topic model and hierarchical factor regressions. These are applied in sequence to accommodate high dimensionality and sparseness and facilitate managerial interpretation. First, the topic model is applied to aggregated data to decompose the daily aggregated sales volume of a product into sub‐sales for several topics by allocating each unit sale ("word" in text analysis) in a day ("document") into a topic based on joint‐purchase information. This stage reduces the dimensionality of data inside topics because the topic distribution is nonuniform and product sales are mostly allocated into smaller numbers of topics. Next, the market response regression model for the topic is estimated from information about items in the same topic. The hierarchical factor regression model we introduce, based on canonical correlation analysis for the original high‐dimensional sample spaces, further reduces the dimensionality within topics. Feature selection is then performed on the basis of the credible interval of the parameters' posterior density. Empirical results show that (i) our model allows managerial implications from topic‐wise market responses according to the particular context, and (ii) it performs better than conventional category regressions in both in‐sample and out‐of‐sample forecasts.

10.
    
The k-nearest neighbors algorithm is one of the prominent techniques used in classification and regression. Despite its simplicity, k-nearest neighbors has been successfully applied in time series forecasting. However, selecting the number of neighbors and the features is a daunting task. In this paper, we introduce two methodologies for forecasting time series that we refer to as Classical Parameters Tuning in Weighted Nearest Neighbors and Fast Parameters Tuning in Weighted Nearest Neighbors. The first approach uses classical parameter tuning that compares the most recent subsequence with every possible subsequence of the same length from the past. The second approach reduces the neighbors' search set, which leads to a significantly reduced grid size and hence a lower computational time. To tune the models' parameters, both methods implement an approach inspired by cross-validation for weighted nearest neighbors. We evaluate the forecasting performance and accuracy of our models, and then compare them to other approaches, in particular the Seasonal Autoregressive Integrated Moving Average, Holt–Winters, and Exponential Smoothing State Space models. Real data examples on retail and food services sales in the United States and milk production in the United Kingdom are analyzed to demonstrate the application and efficiency of the proposed approaches.
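A minimal sketch of the weighted nearest-neighbour forecasting idea underlying both methods, assuming a fixed window length m and neighbour count k (the paper's parameter-tuning and reduced-search machinery is omitted); names and data are illustrative:

```python
import numpy as np

def knn_forecast(series, m=4, k=3):
    """One-step forecast: compare the last window of length m with every
    earlier window, then average the k closest windows' successors,
    weighted by inverse distance."""
    x = np.asarray(series, dtype=float)
    query = x[-m:]
    # Candidate windows end before the last point, so each has a successor
    windows = np.array([x[i:i + m] for i in range(len(x) - m)])
    successors = x[m:]
    dists = np.linalg.norm(windows - query, axis=1)
    idx = np.argsort(dists)[:k]
    w = 1.0 / (dists[idx] + 1e-8)   # inverse-distance weights
    return float(np.sum(w * successors[idx]) / np.sum(w))

# On an exactly periodic toy series, the nearest windows are past periods,
# so the forecast recovers the next point of the cycle
series = np.sin(2 * np.pi * np.arange(64) / 8)
pred = knn_forecast(series, m=8, k=3)
```

Tuning m and k against held-out history, in the spirit of the cross-validation-like scheme the paper describes, is what the two proposed methods automate.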

11.
Based on the physicochemical properties of amino acids, this paper extracts seven feature sets from non-homologous protein sequences using a feature-extraction method that combines amino acid composition with autocorrelation functions. A dynamic feature selection algorithm based on local accuracy (DFS_LA) is then used to combine multiple features for protein structural class prediction, and the results are compared with those of each individual feature set. The results show that the overall prediction accuracy of the DFS_LA algorithm improves, to varying degrees, on every individual feature set. Under jackknife testing, the overall prediction accuracy of DFS_LA is 82.80%, an improvement of 8.91% over the COMP feature set; under independent testing, it is 86.67%, an improvement of 11.67% over COMP. This indicates that the DFS_LA algorithm can effectively improve structural class prediction accuracy, and that combining multiple features can, to some extent, capture more of a protein's spatial structure information.
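A small sketch of the "composition plus autocorrelation" style of feature extraction this abstract describes, using the standard Kyte–Doolittle hydropathy scale as one example physicochemical property; the sequence and lag choice are illustrative, and this is not the paper's exact seven-feature-set construction:

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
# Kyte-Doolittle hydropathy values, indexed to match AA
HYDRO = dict(zip(AA, [1.8, 2.5, -3.5, -3.5, 2.8, -0.4, -3.2, 4.5, -3.9, 3.8,
                      1.9, -3.5, -1.6, -3.5, -4.5, -0.8, -0.7, 4.2, -0.9, -1.3]))

def features(seq, max_lag=3):
    """20 amino-acid composition fractions, plus autocorrelations (lags
    1..max_lag) of the sequence's mean-centred hydropathy profile."""
    comp = np.array([seq.count(a) / len(seq) for a in AA])
    h = np.array([HYDRO[a] for a in seq])
    h = h - h.mean()
    ac = np.array([np.mean(h[:-lag] * h[lag:]) for lag in range(1, max_lag + 1)])
    return np.concatenate([comp, ac])

f = features("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")  # toy sequence
```

The composition part summarizes global content while the autocorrelation part carries positional (order) information, which is what lets such combined features reflect more spatial structure than composition alone.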

12.
An important tool in time series analysis is that of combining information in an optimal way. Here we establish a basic combining rule of linear predictors and show that such problems as forecast updating, missing value estimation, restricted forecasting with binding constraints, analysis of outliers and temporal disaggregation can be viewed as problems of optimal linear combination of restrictions and forecasts. A compatibility test statistic is also provided as a companion tool to check that the linear restrictions are compatible with the forecasts generated from the historical data. Copyright © 2000 John Wiley & Sons, Ltd.
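One familiar special case of optimally combining linear predictors is the Bates–Granger minimum-variance combination of unbiased forecasts, sketched below as an illustration (not the paper's general combining rule); the numbers are hypothetical:

```python
import numpy as np

def combine_forecasts(f, sigma):
    """Minimum-variance weights for unbiased forecasts f with error
    covariance sigma: w proportional to sigma^{-1} 1, normalized to sum to 1."""
    ones = np.ones(len(f))
    w = np.linalg.solve(sigma, ones)
    w = w / (ones @ w)
    return w, float(w @ f)

# Two unbiased forecasts; the second has four times the error variance
f = np.array([10.0, 12.0])
sigma = np.array([[1.0, 0.0],
                  [0.0, 4.0]])
weights, combined = combine_forecasts(f, sigma)
```

With uncorrelated errors the weights are inversely proportional to the error variances, so the less accurate forecast is down-weighted rather than discarded.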

13.
In this paper we consider some of the prominent methods available in the literature for disaggregating annual time-series data to quarterly figures. The procedures are briefly described and illustrated on a real data set. The performances of the methods are compared in a Monte Carlo study. The results indicate that the more complicated model-based procedure is usually superior to the non-model-based alternatives in large-sample situations. Based on the simulation results, we make some recommendations regarding the use of these methods.

14.
    
Is there a common model inherent in macroeconomic data? Macroeconomic theory suggests that market economies of various nations should share many similar dynamic patterns; as a result, individual country empirical models, for a wide variety of countries, often include the same variables. Yet, empirical studies often find important roles for idiosyncratic shocks in the differing macroeconomic performance of countries. We use forecasting criteria to examine the macrodynamic behaviour of 15 OECD countries in terms of a small set of familiar, widely used core economic variables, omitting country‐specific shocks. We find this small set of variables and a simple VAR ‘common model’ strongly support the hypothesis that many industrialized nations have similar macroeconomic dynamics. Copyright © 2005 John Wiley & Sons, Ltd.

15.
    
This paper proposes a strategy to detect the presence of common serial correlation in large‐dimensional systems. We show that partial least squares can be used to consistently recover the common autocorrelation space. Moreover, a Monte Carlo study reveals that univariate autocorrelation tests on the factors obtained by partial least squares outperform traditional tests based on canonical correlation analysis. Some empirical applications are presented to illustrate concepts and methods. Copyright © 2010 John Wiley & Sons, Ltd.

16.
    
This paper shows that out‐of‐sample forecast comparisons can help prevent data mining‐induced overfitting. The basic results are drawn from simulations of a simple Monte Carlo design and a real data‐based design similar to those used in some previous studies. In each simulation, a general‐to‐specific procedure is used to arrive at a model. If the selected specification includes any of the candidate explanatory variables, forecasts from the model are compared to forecasts from a benchmark model that is nested within the selected model. In particular, the competing forecasts are tested for equal MSE and encompassing. The simulations indicate most of the post‐sample tests are roughly correctly sized. Moreover, the tests have relatively good power, although some are consistently more powerful than others. The paper concludes with an application, modelling quarterly US inflation. Copyright © 2004 John Wiley & Sons, Ltd.
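A simplified sketch of an equal-MSE test of the kind used in such comparisons, in the spirit of the Diebold–Mariano statistic for one-step forecasts (no autocorrelation correction, no nested-model adjustment); the error series are simulated for illustration:

```python
import numpy as np

def dm_statistic(e1, e2):
    """t-type statistic on the loss differential d_t = e1_t^2 - e2_t^2.
    Positive values indicate the first forecast has larger MSE."""
    d = e1 ** 2 - e2 ** 2
    return float(np.mean(d) / np.sqrt(np.var(d, ddof=1) / len(d)))

rng = np.random.default_rng(2)
n = 500
e_bench = rng.normal(scale=1.0, size=n)   # benchmark model's forecast errors
e_alt = rng.normal(scale=1.5, size=n)     # clearly worse competitor's errors
stat = dm_statistic(e_alt, e_bench)       # should be significantly positive
```

Under equal predictive accuracy the statistic is approximately standard normal, which is the sense in which such post-sample tests can be "correctly sized"; for nested models, adjusted variants are used in the literature.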

17.
    
A methodology for estimating high‐frequency values of an unobserved multivariate time series from its low‐frequency values and related information is presented in this paper. This is an optimal solution, in the multivariate setting, to the problem of ex post prediction, disaggregation, benchmarking or signal extraction of an unobservable stochastic process. The problem of extrapolation or ex ante prediction is also solved optimally and, in this context, statistical tests are developed for checking online for the occurrence of extreme values of the unobserved time series and for the consistency of future benchmarks with the present and past observed information. The procedure is based on structural or unobserved component models, whose assumptions and specification are validated with the data alone. Copyright © 2007 John Wiley & Sons, Ltd.

18.
    
Historical tourism volume, search engine data, and weather calendar data have a close causal relationship with daily tourism volume. However, when used to predict daily tourism volume, the feature variables of the huge and complex search engine data are not strongly independent. These repetitive, highly correlated data must be analyzed and selected; otherwise, they increase the training burden of the neural network and reduce prediction accuracy. This study proposes a daily tourism volume prediction model, combining maximum correlation minimum redundancy feature selection with long short-term memory, on the basis of feature selection and deep learning. Firstly, the multivariate high-dimensional features, including search engine data and weather factors, are selected to identify the key influencing factors. Secondly, the deep neural network is used to make a multistep forward rolling prediction of daily tourism volume. Results show that keywords for famous scenic spots, weather, historical tourism volume, and tourism strategies in the search engine data significantly improve the prediction accuracy of daily tourism volume. The proposed maximum correlation minimum redundancy feature selection and long short-term memory model performs better than other models, such as the autoregressive integrated moving average, multiple regression, support vector machine, and plain long short-term memory models.
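A small sketch of the greedy max-relevance / min-redundancy selection step, using absolute Pearson correlation as a simple numeric score (the paper's exact criterion may differ, and the LSTM forecasting stage is omitted); the synthetic features are labelled in the comments:

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedily pick k columns of X: each step maximizes |corr with y|
    minus the mean |corr| with the already-selected columns."""
    n_feat = X.shape[1]
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    selected = [int(np.argmax(rel))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                           for s in selected])
            if rel[j] - red > best_score:
                best, best_score = j, rel[j] - red
        selected.append(best)
    return selected

rng = np.random.default_rng(3)
n = 400
s1, s2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([
    s1 + 0.05 * rng.normal(size=n),   # 0: tracks signal 1
    s1 + 0.05 * rng.normal(size=n),   # 1: redundant duplicate of 0
    s2 + 0.05 * rng.normal(size=n),   # 2: tracks signal 2 (complementary)
    rng.normal(size=n),               # 3: irrelevant noise
])
y = s1 + 0.8 * s2
chosen = mrmr_select(X, y, 2)
```

The redundancy penalty is what makes the second pick the complementary feature rather than the near-duplicate of the first, which is exactly the behaviour wanted for correlated search-engine keywords.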

19.
In recent years an impressive array of publications has appeared claiming considerable successes of neural networks in modelling financial data but sceptical practitioners and statisticians are still raising the question of whether neural networks really are ‘a major breakthrough or just a passing fad’. A major reason for this is the lack of procedures for performing tests for misspecified models, and tests of statistical significance for the various parameters that have been estimated, which makes it difficult to assess the model's significance and the possibility that any short‐term successes that are reported might be due to ‘data mining’. In this paper we describe a methodology for neural model identification which facilitates hypothesis testing at two levels: model adequacy and variable significance. The methodology includes a model selection procedure to produce consistent estimators, a variable selection procedure based on statistical significance and a model adequacy procedure based on residuals analysis. We propose a novel, computationally efficient scheme for estimating sampling variability of arbitrarily complex statistics for neural models and apply it to variable selection. The approach is based on sampling from the asymptotic distribution of the neural model's parameters (‘parametric sampling’). Controlled simulations are used for the analysis and evaluation of our model identification methodology. A case study in tactical asset allocation is used to demonstrate how the methodology can be applied to real‐life problems in a way analogous to stepwise forward regression analysis. Neural models are contrasted to multiple linear regression. The results indicate the presence of non‐linear relationships in modelling the equity premium. Copyright © 1999 John Wiley & Sons, Ltd.
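The "parametric sampling" idea can be sketched generically: draw parameter vectors from the estimated asymptotic normal distribution of the fitted parameters and evaluate an arbitrary statistic on each draw, giving its sampling distribution without refitting. The estimates, covariance, and statistic below are hypothetical placeholders, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
# Suppose a fitted model produced these parameter estimates and this
# estimated asymptotic covariance (illustrative numbers)
theta_hat = np.array([0.5, -0.2])
cov_hat = np.array([[0.010, 0.002],
                    [0.002, 0.020]])

# Parametric sampling: draw parameters from N(theta_hat, cov_hat) and
# evaluate an arbitrarily complex statistic on each draw
draws = rng.multivariate_normal(theta_hat, cov_hat, size=5000)
stat = draws[:, 0] / (1.0 - draws[:, 1])      # e.g. a long-run effect
lo, hi = np.percentile(stat, [2.5, 97.5])     # 95% interval for the statistic
```

Because only the cheap statistic is re-evaluated per draw, this is far less costly than bootstrap refitting of a neural model, which is the computational point of the scheme.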

20.
    
Stochastic covariance models have been explored in recent research to model the interdependence of assets in financial time series. The approach uses a single stochastic model to capture such interdependence. However, it may be inappropriate to assume a single coherence structure at all time t. In this paper, we propose the use of a mixture of stochastic covariance models to generalize the approach and offer greater flexibility in real data applications. Parameter estimation is performed by Bayesian analysis with Markov chain Monte Carlo sampling schemes. We conduct a simulation study on three different model setups and evaluate the performance of estimation and model selection. We also apply our modeling methods to high‐frequency stock data from Hong Kong. Model selection favors a mixture rather than non‐mixture model. In a real data study, we demonstrate that the mixture model is able to identify structural changes in market risk, as evidenced by a drastic change in mixture proportions over time. Copyright © 2016 John Wiley & Sons, Ltd.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号