Similar Documents
20 similar documents found (search time: 184 ms)
1.
To address the missing profile depth information when reconstructing a 3D face from a single frontal face image, a fast 3D reconstruction method based on a BP neural network is proposed. A BP network is trained to estimate the relationship between frontal and profile face data, so that profile data can be derived from the frontal input; the BP algorithm itself is also improved, accelerating convergence and raising fitting accuracy. The recovered profile data are then used to adjust the CANDIDE-3 face model, generating an approximate…
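The training scheme described above can be sketched with a toy one-hidden-layer network trained by backpropagation with a momentum term (momentum is a common convergence speed-up, used here as a stand-in; the paper's specific BP improvement, the features, and the targets are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, (200, 4))            # hypothetical frontal-face features
Y = np.tanh(X @ rng.normal(size=(4, 2)))    # hypothetical profile-depth targets

W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 2)); b2 = np.zeros(2)
vel = [np.zeros_like(p) for p in (W1, b1, W2, b2)]
lr, mom = 0.02, 0.9                          # momentum term speeds up plain BP

def forward(X):
    H = np.tanh(X @ W1 + b1)                 # hidden layer
    return H, H @ W2 + b2                    # linear output layer

err0 = np.mean((forward(X)[1] - Y) ** 2)
for _ in range(1500):
    H, out = forward(X)
    g_out = 2 * (out - Y) / Y.size           # dMSE/d(output)
    g_hid = (g_out @ W2.T) * (1 - H ** 2)    # back-propagate through tanh
    grads = [X.T @ g_hid, g_hid.sum(0), H.T @ g_out, g_out.sum(0)]
    for p, v, g in zip((W1, b1, W2, b2), vel, grads):
        v *= mom; v -= lr * g; p += v        # momentum update, in place
err1 = np.mean((forward(X)[1] - Y) ** 2)
print(err1 < err0)                           # training reduces the fitting error
```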

2.
The practice of modelling the components of a vector time series to arrive at a joint model for the vector is considered. It is shown that in some cases this is not unreasonable. A vector ARMA model is used to model the Canadian money and income data. We also use these data to discuss the issue of differencing a multiple time series. Finally, models based on first and second differences are compared using forecasts.

3.
For a target socioeconomic variable with data from two sources, benchmarking is a process which uses less frequent and more reliable data, called benchmarks, to adjust more frequent and less reliable data. Consequently, forecasts of unknown benchmarks are obtained. The regression method of benchmarking may lead to better results than widely used numerical methods, but the model for the error of the more frequent data is supposed to be known. By properly choosing a first‐order autoregressive model as ‘working model’ for the error, the regression method may work well in reality. We present two new error modeling procedures via inside‐data‐period benchmark forecasts. The performance of several modeling procedures is compared. These results may provide analysts with guidelines for choosing working models for the error in developing and applying benchmarking software. Copyright © 2016 John Wiley & Sons, Ltd.
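As an illustration of regression-type benchmarking with an AR(1) working model for the error, here is a minimal numpy sketch: a monthly preliminary series is adjusted by a GLS step so the adjusted values hit annual benchmarks. The GLS formula, φ = 0.8, and the data are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

def ar1_cov(n, phi, sigma=1.0):
    # Stationary AR(1) error covariance: Cov(e_t, e_s) = sigma^2 phi^|t-s| / (1 - phi^2)
    idx = np.arange(n)
    return sigma**2 * phi ** np.abs(idx[:, None] - idx[None, :]) / (1 - phi**2)

def benchmark_ar1(prelim, benchmarks, agg, phi=0.8):
    """GLS adjustment: y = s + V A' (A V A')^-1 (b - A s)."""
    V = ar1_cov(len(prelim), phi)
    G = V @ agg.T @ np.linalg.inv(agg @ V @ agg.T)
    return prelim + G @ (benchmarks - agg @ prelim)

# 24 monthly preliminary values, 2 annual benchmarks (annual totals)
rng = np.random.default_rng(0)
prelim = 100 + rng.normal(0, 2, 24)
A = np.kron(np.eye(2), np.ones(12))         # maps the 24 months to 2 annual totals
bench = A @ prelim + np.array([5.0, -3.0])  # benchmarks disagree with prelim sums
adjusted = benchmark_ar1(prelim, bench, A, phi=0.8)
print(np.allclose(A @ adjusted, bench))     # adjusted series now hits the benchmarks
```

The AR(1) covariance spreads each annual discrepancy smoothly across nearby months instead of dumping it on one observation.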

4.
Use of monthly data for economic forecasting purposes is typically constrained by the absence of monthly estimates of GDP. Such data can be interpolated but are then prone to measurement error. However, the variance matrix of the measurement errors is typically known. We present a technique for estimating a VAR on monthly data, making use of interpolated estimates of GDP and correcting for the impact of measurement error. We then address the question of how to establish whether the model estimated from the interpolated monthly data contains information absent from the analogous quarterly VAR. The techniques are illustrated using a bivariate VAR modelling GDP growth and inflation. It is found that, using inflation data adjusted to remove seasonal effects and the impacts of changes to indirect taxes, the monthly model has little to add to a quarterly model when projecting one quarter ahead. However, the monthly model has an important role to play in building up a picture of the current quarter once one or two months' hard data become available. Copyright © 1999 John Wiley & Sons, Ltd.

5.
This paper analyses the characteristics of an iSCSI-based disaster recovery model and its security weaknesses. To address these weaknesses, a secure remote disaster recovery model based on the iSCSI protocol is proposed. Its main design idea is to place firewalls at the interfaces between the transmission line and both the local storage network and the remote disaster recovery network, improving the two storage networks' resistance to attack, and to encrypt the transmitted data with a VPN, thereby ensuring data security. Experiments verify that the model transmits data securely and resists attacks well.

6.
We present a forecasting model based on fuzzy pattern recognition and weighted linear regression. In this model fuzzy pattern recognition is used to find homogeneous fuzzy classes in a heterogeneous data set. It is assumed that the classes represent typical situations. For each class a weighted regression analysis is conducted. The forecasting results obtained by the class regression analysis are aggregated to obtain the ‘overall’ estimation of the regression model. We apply the model to the forecasting of economic data of the USA. Copyright © 2001 John Wiley & Sons, Ltd.
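The class-wise weighted regression idea can be sketched with a fuzzy c-means-style membership function: each point belongs fractionally to each class, a weighted regression is fit per class, and predictions are aggregated by membership. The membership form, the class centers, and the two-regime data are illustrative assumptions:

```python
import numpy as np

def fuzzy_memberships(x, centers, m=2.0):
    # Fuzzy c-means style membership of each point in each class (fuzzifier m)
    d = np.abs(x[:, None] - centers[None, :]) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

def fuzzy_class_regression(x, y, centers):
    U = fuzzy_memberships(x, centers)
    X = np.column_stack([np.ones_like(x), x])
    preds = np.zeros_like(y)
    for k in range(len(centers)):
        W = np.diag(U[:, k])
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # weighted LS per class
        preds += U[:, k] * (X @ beta)                     # membership-weighted aggregation
    return preds

# Heterogeneous data: two regimes with different slopes
rng = np.random.default_rng(4)
x = np.concatenate([rng.uniform(0, 1, 80), rng.uniform(4, 5, 80)])
y = np.where(x < 2, 1 + 2 * x, 20 - 3 * x) + rng.normal(0, 0.1, 160)
pred = fuzzy_class_regression(x, y, centers=np.array([0.5, 4.5]))
Xd = np.column_stack([np.ones_like(x), x])
single = Xd @ np.linalg.lstsq(Xd, y, rcond=None)[0]       # one global regression
print(np.mean((pred - y) ** 2) < np.mean((single - y) ** 2))
```

On such heterogeneous data the class-wise fit beats a single global regression because each class regression only has to explain one regime.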

7.
The generalized autoregression model or GARM, originally used to model series of non-negative data measured at irregularly spaced time points (Lambert, 1996a), is considered in a count data context. It is first shown how the GARM can be expressed as a GLM in the special case of a linear model for some transform of the location parameter. The Butler approximate predictive likelihood (Butler, 1986, Rejoinder) is then used to define likelihood prediction envelopes. The width of these intervals is shown to be slightly wider than the Fisher (1959, pp. 128–33) and Lejeune and Faulkenberry (1982) predictive likelihood-based envelopes which assume that the parameters have fixed known values (equal to their maximum likelihood estimates). The method is illustrated on a small count data set showing overdispersion. © 1997 John Wiley & Sons, Ltd.

8.
The effectiveness of road traffic control systems can be increased with the help of a model that can accurately predict short-term traffic flow. Therefore, the performance of the preferred approach to develop a prediction model should be evaluated with data sets with different statistical characteristics. Thus a correlation can be established between the statistical properties of the data set and the model performance. The determination of this relationship will assist experts in choosing the appropriate approach to develop a high-performance short-term traffic flow forecasting model. The main purpose of this study is to reveal the relationship between the long short-term memory network (LSTM) approach's short-term traffic flow prediction performance and the statistical properties of the data set used to develop the LSTM model. In order to reveal these relationships, two different traffic prediction models with LSTM and nonlinear autoregressive (NAR) approaches were created using different data sets, and statistical analyses were performed. In addition, these analyses were repeated for nonstandardized traffic data indicating unusual fluctuations in traffic flow. As a result of the analyses, LSTM and NAR model performances were found to be highly correlated with the kurtosis and skewness changes of the data sets used to train and test these models. On the other hand, it was found that the difference of mean and skewness values of training and test sets had a significant effect on model performance in the prediction of nonstandard traffic flow samples.
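The statistical properties the study correlates with model performance are straightforward to compute. A sketch on hypothetical hourly traffic counts (the series and its incident spike are invented, not the study's data): a large skewness/kurtosis gap between the training and test windows is the kind of signal the study associates with degraded LSTM/NAR accuracy.

```python
import numpy as np

def skewness(x):
    x = np.asarray(x, float)
    m, s = x.mean(), x.std()
    return ((x - m) ** 3).mean() / s ** 3

def kurtosis(x):
    x = np.asarray(x, float)
    m, s = x.mean(), x.std()
    return ((x - m) ** 4).mean() / s ** 4 - 3.0   # excess kurtosis

# Hypothetical hourly traffic counts: a smooth daily cycle plus noise,
# with an unusual spike (e.g. an incident) inside the training window
rng = np.random.default_rng(1)
t = np.arange(24 * 14)
flow = 500 + 300 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 30, t.size)
flow[100:104] += 1500                              # nonstandard fluctuation

train, test = flow[: 24 * 10], flow[24 * 10 :]
for name, part in [("train", train), ("test", test)]:
    print(name, round(skewness(part), 2), round(kurtosis(part), 2))
```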

9.
In this paper we develop a latent structure extension of a commonly used structural time series model and use the model as a basis for forecasting. Each unobserved regime has its own unique slope and variances to describe the process generating the data, and at any given time period the model predicts a priori which regime best characterizes the data. This is accomplished by using a multinomial logit model in which the primary explanatory variable is a measure of how consistent each regime has been with recent observations. The model is especially well suited to forecasting series which are subject to frequent and/or major shocks. An application to nominal interest rates shows that the behaviour of the three‐month US Treasury bill rate is adequately explained by three regimes. The forecasting accuracy is superior to that produced by a traditional single‐regime model and a standard ARIMA model with a conditionally heteroscedastic error. Copyright © 1999 John Wiley & Sons, Ltd.

10.
Stochastic covariance models have been explored in recent research to model the interdependence of assets in financial time series. The approach uses a single stochastic model to capture such interdependence. However, it may be inappropriate to assume a single coherence structure at all times t. In this paper, we propose the use of a mixture of stochastic covariance models to generalize the approach and offer greater flexibility in real data applications. Parameter estimation is performed by Bayesian analysis with Markov chain Monte Carlo sampling schemes. We conduct a simulation study on three different model setups and evaluate the performance of estimation and model selection. We also apply our modeling methods to high‐frequency stock data from Hong Kong. Model selection favors a mixture rather than non‐mixture model. In a real data study, we demonstrate that the mixture model is able to identify structural changes in market risk, as evidenced by a drastic change in mixture proportions over time. Copyright © 2016 John Wiley & Sons, Ltd.

11.
In this article, we propose a regression model for sparse, high‐dimensional aggregated store‐level sales data. The modeling procedure includes two sub‐models, a topic model and hierarchical factor regressions. These are applied in sequence to accommodate high dimensionality and sparseness and facilitate managerial interpretation. First, the topic model is applied to aggregated data to decompose the daily aggregated sales volume of a product into sub‐sales for several topics by allocating each unit sale (“word” in text analysis) in a day (“document”) into a topic based on joint‐purchase information. This stage reduces the dimensionality of data inside topics because the topic distribution is nonuniform and product sales are mostly allocated into smaller numbers of topics. Next, the market response regression model for the topic is estimated from information about items in the same topic. The hierarchical factor regression model we introduce, based on canonical correlation analysis for original high‐dimensional sample spaces, further reduces the dimensionality within topics. Feature selection is then performed on the basis of the credible interval of the parameters' posterior density. Empirical results show that (i) our model allows managerial implications from topic‐wise market responses according to the particular context, and (ii) it performs better than conventional category regressions in both in‐sample and out‐of‐sample forecasts.

12.
In this paper we develop a semi‐parametric approach to model nonlinear relationships in serially correlated data. To illustrate the usefulness of this approach, we apply it to a set of hourly electricity load data. This approach takes into consideration the effect of temperature combined with those of time‐of‐day and type‐of‐day via nonparametric estimation. In addition, an ARIMA model is used to model the serial correlation in the data. An iterative backfitting algorithm is used to estimate the model. Post‐sample forecasting performance is evaluated and comparative results are presented. Copyright © 2006 John Wiley & Sons, Ltd.

13.
This research proposes a prediction model of multistage financial distress (MSFD) after considering contextual and methodological issues regarding sampling, feature and model selection criteria. Financial distress is defined as a three‐stage process showing different nature and intensity of financial problems. It is argued that the applied definition of distress is independent of the legal framework and its predictability would provide more practical solutions. The final sample is selected after industry adjustments and oversampling the data. A wrapper subset data mining approach is applied to extract the most relevant features from financial statement and stock market indicators. An ensemble approach using a combination of DTNB (decision table and naïve Bayes hybrid model), LMT (logistic model tree) and A2DE (averaged 2‐dependence estimators) Bayesian models is used to develop the final prediction model. The performance of all the models is evaluated using a 10‐fold cross‐validation method. Results showed that the proposed model predicted MSFD with 84.06% accuracy. This accuracy increased to 89.57% when a 33.33% cut‐off value was considered. Hence the proposed model is accurate and reliable to identify the true nature and intensity of financial problems regardless of the contextual legal framework.

14.
This paper examines several methods to forecast revised US trade balance figures by incorporating preliminary data. Two benchmark forecasts are considered: one ignoring the preliminary data and the other applying a combination approach, with the second outperforming the first. Competing models include a bivariate AR error-correction model and a bivariate AR error-correction model with GARCH effects. The forecasts from the latter model outperform the combination benchmark for the one-step forecast case only. A restricted AR error-correction model with GARCH effects is found to provide the best forecasts. © 1997 John Wiley & Sons, Ltd.

15.
Interest in online auctions has been growing in recent years. There is an extensive literature on this topic, and modeling the online auction price process constitutes one of the most active research areas. Most of the research, however, focuses only on modeling price curves, ignoring the bidding process. In this paper, a semiparametric regression model is proposed to model the online auction process. This model captures two main features of online auction data: changing arrival rates of bidding processes and changing dynamics of prices. A new inference procedure using B‐splines is also established for parameter estimation. The proposed model is used to forecast the price of an online auction. The advantage of this proposed approach is that the price can be forecast dynamically and the prediction can be updated according to newly arriving information. The model is applied to Xbox data with satisfactory forecasting properties. Copyright © 2016 John Wiley & Sons, Ltd.

16.
Bankruptcy prediction methods based on a semiparametric logit model are proposed for simple random (prospective) and case–control (choice‐based; retrospective) data. The unknown parameters and prediction probabilities in the model are estimated by the local likelihood approach, and the resulting estimators are analyzed through their asymptotic biases and variances. The semiparametric bankruptcy prediction methods using these two types of data are shown to be essentially equivalent. Thus our proposed prediction model can be directly applied to data sampled from the two important designs. One real data example and simulations confirm that our prediction method is more powerful than alternatives, in the sense of yielding smaller out‐of‐sample error rates. Copyright © 2007 John Wiley & Sons, Ltd.

17.
When time series data are available for both advertising and sales, it may be worthwhile to model the two series jointly. Such an analysis may contribute to our understanding of the dynamic relationships among the series and may improve the accuracy of forecasts. Multiple time series techniques are applied to the well-known Lydia Pinkham data to illustrate their use in modelling the advertising-sales relationship. In analysing the Lydia Pinkham data the need for a joint model is established and a bivariate model is identified, estimated and checked. Its forecasting properties are discussed and compared to other time series approaches.

18.
This paper proposes a new mixed‐frequency approach to predict stock return volatilities out‐of‐sample. Based on the strategy of momentum of predictability (MoP), our mixed‐frequency approach has a model switching mechanism that switches between generalized autoregressive conditional heteroskedasticity (GARCH)‐class models that only use low‐frequency data and heterogeneous autoregressive models of realized volatility (HAR‐RV)‐type that only use high‐frequency data. The MoP model simply selects a forecast with relatively good past performance between the GARCH‐class and HAR‐RV‐type forecasts. The model confidence set (MCS) test shows that our MoP strategy significantly outperforms the competing models, which is robust to various settings. The MoP test shows that a relatively good recent past forecasting performance of the GARCH‐class or HAR‐RV‐type model is significantly associated with a relatively good current performance, supporting the success of the MoP model.
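The switching rule itself is simple and can be sketched as follows, with stand-in forecast series in place of fitted GARCH-class and HAR-RV-type models (the window length, warm-up choice, and synthetic data are assumptions, not the paper's settings):

```python
import numpy as np

def mop_forecast(f_garch, f_har, realized, window=5):
    """Momentum-of-predictability switch: at each step use whichever model
    had the smaller mean squared error over the last `window` realized values."""
    out = np.empty(len(realized))
    for t in range(len(realized)):
        if t < window:
            out[t] = f_har[t]                  # warm-up choice (assumption)
        else:
            e_g = np.mean((f_garch[t-window:t] - realized[t-window:t]) ** 2)
            e_h = np.mean((f_har[t-window:t] - realized[t-window:t]) ** 2)
            out[t] = f_garch[t] if e_g < e_h else f_har[t]
    return out

rng = np.random.default_rng(2)
rv = np.abs(rng.normal(1.0, 0.3, 100))         # stand-in realized volatility
f_har = rv + rng.normal(0, 0.05, 100)          # HAR-RV-type: tracks rv closely here
f_garch = rv + rng.normal(0, 0.30, 100)        # GARCH-class: noisier in this sample
mop = mop_forecast(f_garch, f_har, rv)
mse = lambda f: np.mean((f - rv) ** 2)
print(mse(mop) < mse(f_garch))                 # switching avoids the worse model
```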

19.
A Bayesian structural model with two components, a multivariate mean‐reverting diffusion process (MMRD) and a binary probit model with a latent Markov regime‐switching process (BPMRS), is proposed to forecast the occurrence of algal blooms. The model has three features: (a) the occurrence probability of an algal bloom is forecast directly from oceanographic parameters, rather than via the special indicators of traditional approaches, such as phytoplankton or chlorophyll‐a; (b) daily oceanographic parameters are augmented from data collected every 2 weeks using the MMRD, which solves the practical unavailability of daily oceanographic parameters; (c) the BPMRS captures unobservable factors which affect algal bloom occurrence and therefore improves forecast accuracy. We use panel data collected in Tolo Harbour, Hong Kong, to validate the model. The model demonstrates good out‐of‐sample rolling forecasts, especially for algal blooms that persist for longer periods, which severely damage fisheries and the marine environment.

20.
To obtain a best-approximation coal-seam interface model when constructing a 3D ore deposit model, the moving fitting method is introduced for coal-seam interface interpolation. Based on the occurrence conditions of the coal seam and the distribution characteristics of the original sampling data, an effective method for optimal parameter selection and its applicable conditions are proposed, and ill-conditioned matrices are mitigated mathematically so that interpolation results in the estimated region are more accurate. Using coal-seam interface data from a mine as a sample, the weight function is determined by cross-validation, and spatial interpolation and 3D visualization are performed; the results reflect the basic characteristics of the deposit fairly realistically and provide strong data support for building the geological model of the deposit.
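A minimal sketch of moving (locally weighted) least-squares interpolation: a plane is fit around each query point with a Gaussian distance weight, and a small ridge term guards against the ill-conditioned matrices the abstract mentions. The weight form, radius, and synthetic surface are assumptions, not the paper's cross-validated choices:

```python
import numpy as np

def moving_fit(xy_data, z_data, xy_query, radius=2.0):
    """Local least-squares fit of a plane z = a + b*x + c*y around each query
    point, with Gaussian distance weights; radius controls the weight decay."""
    preds = []
    X = np.column_stack([np.ones(len(xy_data)), xy_data])
    for q in np.atleast_2d(xy_query):
        d = np.linalg.norm(xy_data - q, axis=1)
        w = np.exp(-(d / radius) ** 2)             # weight function (assumed form)
        W = np.diag(w)
        # Weighted normal equations; the tiny ridge term improves conditioning
        beta = np.linalg.solve(X.T @ W @ X + 1e-8 * np.eye(3), X.T @ W @ z_data)
        preds.append(beta @ np.array([1.0, *q]))
    return np.array(preds)

# Synthetic coal-seam interface: elevation is a gently tilted plane
rng = np.random.default_rng(3)
pts = rng.uniform(0, 10, (200, 2))
elev = -50 + 0.5 * pts[:, 0] - 0.3 * pts[:, 1]
est = moving_fit(pts, elev, np.array([[5.0, 5.0]]))
print(round(float(est[0]), 2))   # -49.0, i.e. -50 + 0.5*5 - 0.3*5 exactly
```

Because the synthetic surface is exactly linear, the weighted fit recovers it regardless of the weight choice; on real noisy borehole data the weight function and radius matter, which is why the paper selects them by cross-validation.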
