Similar Documents
20 similar documents found (search time: 985 ms)
1.
Interest in online auctions has been growing in recent years. There is an extensive literature on this topic, and modeling the online auction price process constitutes one of the most active research areas. Most of the research, however, focuses only on modeling price curves, ignoring the bidding process. In this paper, a semiparametric regression model is proposed to model the online auction process. This model captures two main features of online auction data: changing arrival rates of bidding processes and changing dynamics of prices. A new inference procedure using B-splines is also established for parameter estimation. The proposed model is used to forecast the price of an online auction. The advantage of the proposed approach is that the price can be forecast dynamically and the prediction can be updated according to newly arriving information. The model is applied to Xbox data with satisfactory forecasting properties. Copyright © 2016 John Wiley & Sons, Ltd.
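A minimal sketch of the B-spline ingredient above: fit a price-time curve by least squares on a cubic B-spline basis built with the Cox-de Boor recursion. The simulated "auction" data, knot choices and parameters are illustrative assumptions, not the paper's semiparametric estimator:

```python
import numpy as np

def bspline_basis(t, knots, degree, i):
    # Cox-de Boor recursion for the i-th B-spline basis function of given degree
    if degree == 0:
        return np.where((knots[i] <= t) & (t < knots[i + 1]), 1.0, 0.0)
    left_den = knots[i + degree] - knots[i]
    right_den = knots[i + degree + 1] - knots[i + 1]
    left = 0.0 if left_den == 0 else (t - knots[i]) / left_den * bspline_basis(t, knots, degree - 1, i)
    right = 0.0 if right_den == 0 else (knots[i + degree + 1] - t) / right_den * bspline_basis(t, knots, degree - 1, i + 1)
    return left + right

# toy auction price observations over normalized auction time [0, 1)
rng = np.random.default_rng(0)
time = np.sort(rng.uniform(0, 1, 80))
price = 10 + 25 * time**3 + rng.normal(0, 0.5, 80)   # prices accelerate near the close

degree = 3
interior = np.linspace(0, 1, 6)
knots = np.concatenate([[0.0] * degree, interior, [1.0] * degree])  # clamped knot vector
n_basis = len(knots) - degree - 1
B = np.column_stack([bspline_basis(time, knots, degree, i) for i in range(n_basis)])
coef, *_ = np.linalg.lstsq(B, price, rcond=None)
fitted = B @ coef   # smooth estimate of the price curve
```

New observations can be appended and the least-squares step rerun, which is what makes a dynamically updated forecast possible in spirit.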

2.
Modeling online auction prices is a popular research topic among statisticians and marketing analysts. Recent research mainly focuses on two directions: one is the functional data analysis (FDA) approach, in which the price-time relationship is modeled by a smooth curve, and the other is the point process approach, which directly models the arrival process of bidders and bids. In this paper, a novel model for the bid arrival process using a self-exciting point process (SEPP) is proposed and applied to forecast auction prices. The FDA and point process approaches are linked by using functional data analysis techniques to describe the intensity of the bid arrival point process. Using the SEPP to model the bid arrival process, many stylized facts in online auction data can be captured. We also develop a simulation-based forecasting procedure using the estimated SEPP intensity and historical bidding increments. In particular, a prediction interval for the terminal price of the merchandise can be constructed. Applications to eBay auction data on Harry Potter books and the Microsoft Xbox show that the SEPP model provides more accurate and more informative forecasting results than traditional methods. Copyright © 2014 John Wiley & Sons, Ltd.
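The self-exciting mechanism can be illustrated with a minimal Hawkes-process simulator based on Ogata's thinning algorithm; the exponential kernel and the parameter values are illustrative assumptions, not the paper's estimated intensity:

```python
import math, random

def simulate_hawkes(mu, alpha, beta, horizon, seed=42):
    """Simulate a self-exciting (Hawkes) point process on [0, horizon] with
    intensity lambda(t) = mu + alpha * sum_i exp(-beta * (t - t_i)),
    using Ogata's thinning: propose from an upper bound, then accept/reject."""
    random.seed(seed)
    events, t = [], 0.0
    while True:
        # current intensity is an upper bound until the next event (it decays)
        lam_bar = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        t += random.expovariate(lam_bar)           # candidate arrival time
        if t > horizon:
            break
        lam_t = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        if random.random() <= lam_t / lam_bar:     # accept with prob lambda(t)/lam_bar
            events.append(t)
    return events

# alpha/beta < 1 keeps the process stationary (each bid triggers < 1 bid on average)
bids = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=50.0)
```

Repeating such simulations from an estimated intensity, and adding bidding increments, is the shape of a simulation-based forecasting procedure like the one described.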

3.
This paper uses non-linear methodologies to follow the synchronously reported relationship between the Nordic/Baltic electric daily spot auction prices and geographically relevant wind forecasts in MWh from early 2013 to 2020. It is a well-known market (auction) microstructure fact that the daily wind forecasts are information available to the market before the daily auction bid deadline at 11 a.m. The main objective is therefore to establish conditional and marginal step-ahead spot price density forecasts using a stochastic representation of the lagged, synchronously reported and stationary spot price and wind forecast movements. Using an upward expansion path applying the Schwarz (Bayesian information criterion [BIC]) criterion and a battery of residual test statistics, an optimal maximum likelihood process density is suggested. The optimal specification reports a significant negative covariance between the daily price and wind forecast movements. Conditional on bivariate lags from the semi-nonparametric (SNP) specification and using the known market information for wind forecast movements at t−1, the paper establishes one-step-ahead bivariate and marginal day-ahead spot price movement densities. The results show that wind forecasts significantly influence the synchronously reported spot price densities (means and volatilities). The paper reports day-ahead bivariate and marginal densities for spot price movements conditional on several plausible price and wind forecast movements, and suggests day-ahead spot price predictions from conditional and synchronously reported wind forecast movements. This information should increase market participants' spot market insight and consequently make spot price predictions more accurate and confidence intervals considerably narrower.
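The conditioning idea can be sketched with a Gaussian stand-in for the estimated bivariate density (the paper uses a richer SNP specification): given a negative covariance between price and wind-forecast movements, the conditional price-move density has a shifted mean and a reduced variance. The simulated data and parameters are hypothetical:

```python
import random

random.seed(1)
# toy synchronized daily movements: price moves negatively related to wind moves
n = 500
wind = [random.gauss(0.0, 1.0) for _ in range(n)]
price = [-0.6 * w + random.gauss(0.0, 0.5) for w in wind]

mw = sum(wind) / n
mp = sum(price) / n
vw = sum((w - mw) ** 2 for w in wind) / n
vp = sum((p - mp) ** 2 for p in price) / n
cov = sum((w - mw) * (p - mp) for w, p in zip(wind, price)) / n

def conditional_price_density(w_obs):
    """One-step-ahead price-movement density (mean, variance) conditional on a
    known wind-forecast movement, under a bivariate normal assumption."""
    mean = mp + cov / vw * (w_obs - mw)
    var = vp - cov**2 / vw          # conditioning narrows the density
    return mean, var

mean_up, var_up = conditional_price_density(1.0)   # a large positive wind move
```

The variance reduction `cov**2 / vw` is exactly why conditioning on the known wind information narrows the forecast interval.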

4.
We propose an innovative approach to model and predict the outcome of football matches based on the Poisson autoregression with exogenous covariates (PARX) model recently proposed by Agosto, Cavaliere, Kristensen, and Rahbek (Journal of Empirical Finance, 2016, 38(B), 640–663). We show that this methodology is particularly suited to model the goal distribution of a football team and provides a good forecast performance that can be exploited to develop a profitable betting strategy. This paper improves the strand of literature on Poisson-based models by proposing a specification able to capture the main characteristics of the goal distribution. The betting strategy is based on the idea that the odds proposed by the market do not reflect the true probability of the match because they may also incorporate the betting volumes or strategic price settings in order to exploit bettors' biases. The out-of-sample performance of the PARX model is better than the reference approach by Dixon and Coles (Applied Statistics, 1997, 46(2), 265–280). We also evaluate our approach in a simple betting strategy, which is applied to English football Premier League data for the 2013–2014, 2014–2015, and 2015–2016 seasons. The results show that the return from the betting strategy is larger than 30% in most of the cases considered and may even exceed 100% if we consider an alternative strategy based on a predetermined threshold, which makes it possible to exploit the inefficiency of the betting market.
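A minimal sketch of the PARX recursion: goal counts are Poisson with an intensity that feeds back on past counts, past intensities, and an exogenous covariate. The parameter values, the constant covariate, and the `poisson_draw` helper (Knuth's algorithm) are illustrative assumptions, not the paper's estimated specification:

```python
import math, random

def poisson_draw(lam, rng):
    # Knuth's algorithm: multiply uniforms until the product drops below exp(-lam)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_parx(omega, alpha, beta, gamma, x, seed=7):
    """Simulate counts y_t ~ Poisson(lam_t) with the PARX-style recursion
    lam_t = omega + alpha*y_{t-1} + beta*lam_{t-1} + gamma*x_t."""
    rng = random.Random(seed)
    lam_prev, y_prev = omega, 0
    lams, ys = [], []
    for x_t in x:
        lam_t = omega + alpha * y_prev + beta * lam_prev + gamma * x_t
        y_t = poisson_draw(lam_t, rng)
        lams.append(lam_t)
        ys.append(y_t)
        lam_prev, y_prev = lam_t, y_t
    return lams, ys

covariate = [0.2] * 100   # e.g., a constant team-strength proxy (hypothetical)
lams, goals = simulate_parx(omega=0.8, alpha=0.2, beta=0.3, gamma=0.5, x=covariate)
# one-step-ahead intensity forecast for the next match
one_step_forecast = 0.8 + 0.2 * goals[-1] + 0.3 * lams[-1] + 0.5 * 0.2
```

With alpha + beta < 1 the intensity is stationary; the one-step intensity is the natural forecast of the next goal count's mean.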

5.
The multinomial probit model introduced here combines heterogeneity across households with flexibility of the (deterministic) utility function. To achieve flexibility, deterministic utility is approximated by a neural net of the multilayer perceptron type. A Markov chain Monte Carlo method serves to estimate heterogeneous multinomial probit models which fulfill economic restrictions on the signs of (marginal) effects of predictors (e.g., negative for price). For empirical choice data, the heterogeneous multinomial probit model extended by a multilayer perceptron clearly outperforms all the other models studied. Moreover, replacing homogeneous by heterogeneous reference price mechanisms, and thus allowing price expectations to be formed differently across households, also leads to better model performance. Mean utility differences and mean elasticities w.r.t. price and price deviation from reference price demonstrate that models with linear utility and nonlinear utility approximated by a multilayer perceptron lead to very different implications for managerial decision making. Copyright © 2007 John Wiley & Sons, Ltd.

6.
Online search data provide us with a new perspective for quantifying public concern about animal diseases, which can be regarded as a major external shock to price fluctuations. We propose a modeling framework for pork price forecasting that incorporates online search data via a support vector regression model. This framework involves three main steps: formulation of the animal disease composite indexes (ADCIs) based on online search data; forecasting with the original ADCIs; and forecast improvement with the decomposed ADCIs. Considering that there is some noise within the online search data, four decomposition techniques are introduced: wavelet decomposition, empirical mode decomposition, ensemble empirical mode decomposition, and singular spectrum analysis (SSA). The experimental study confirms the superiority of the proposed framework, which improves both the level and directional prediction accuracy. With the SSA method, the noise within the online search data can be removed and the performance of the optimal model is further enhanced. Owing to the long-term effect of disease outbreaks on price volatility, these improvements are more prominent in the mid- and long-term forecast horizons.
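Of the four decomposition techniques, SSA is the most compact to sketch: embed the series in a Hankel trajectory matrix, take an SVD, keep the leading components, and reconstruct by diagonal averaging. The window length, rank, and the simulated "search index" are illustrative choices, not the paper's settings:

```python
import numpy as np

def ssa_denoise(series, window, rank):
    """Basic singular spectrum analysis: embed, SVD, keep `rank` leading
    components, reconstruct by diagonal (Hankel) averaging."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])   # window x k
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]                 # low-rank trajectory
    recon = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):                                            # diagonal averaging
        recon[j:j + window] += approx[:, j]
        counts[j:j + window] += 1
    return recon / counts

# toy noisy "search index": a slow cycle plus noise
t = np.arange(200)
signal = np.sin(2 * np.pi * t / 50)
noisy = signal + np.random.default_rng(3).normal(0, 0.3, 200)
smooth = ssa_denoise(noisy, window=40, rank=2)
```

The denoised index would then feed the regression model in place of the raw one, which is the role SSA plays in the framework above.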

7.
We consider the linear time-series model y_t = d_t + u_t (t = 1, ..., n), where d_t is the deterministic trend and u_t the stochastic term, which follows an AR(1) process: u_t = θu_{t−1} + ε_t, with normal innovations ε_t. Various assumptions about the start-up will be made. Our main interest lies in the behaviour of the l-period-ahead forecast y_{n+l} near θ = 1. Unlike other studies of the AR(1) unit root process, we do not ask whether θ = 1, but are concerned with the behaviour of the forecast estimate near and at θ = 1. For this purpose we define the sth-order (s = 1, 2) sensitivity measure λ_l(s) of the forecast y_{n+l} near θ = 1, which measures the sensitivity of the forecast at the unit root. We consider two deterministic trends: d_t = α and d_t = α + βt. The forecast is the best linear unbiased forecast. We show that, when d_t = α, the number of observations has no effect on forecast sensitivity; when the deterministic trend is linear, the sensitivity is zero. We also develop a large-sample procedure to measure the forecast sensitivity when we are uncertain whether to include the linear trend. Our analysis suggests that, depending on the initial conditions, it is better to include a linear trend for reduced sensitivity of the medium-term forecast. Copyright © 2001 John Wiley & Sons, Ltd.
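The l-period-ahead forecast in this model has a simple plug-in form: ŷ_{n+l} = d_{n+l} + θ^l (y_n − d_n). The sketch below assumes the trend is known (the paper's best linear unbiased forecast estimates it); the helper name and the toy numbers are hypothetical:

```python
def ar1_trend_forecast(y, trend, theta, lead):
    """l-step-ahead forecast for y_t = d_t + u_t with u_t = theta*u_{t-1} + eps_t:
    yhat_{n+l} = d_{n+l} + theta**lead * (y_n - d_n).
    `trend` maps the (1-based) time index t to the deterministic component d_t."""
    n = len(y)
    u_n = y[-1] - trend(n)            # last stochastic residual
    return trend(n + lead) + theta**lead * u_n

# constant trend d_t = 2.0: at theta = 1 the last residual is carried forward intact,
# while for |theta| < 1 it decays toward the trend -- the contrast behind the
# sensitivity of the forecast at the unit root
y = [2.3, 1.8, 2.1, 2.6]
f_unit = ar1_trend_forecast(y, lambda t: 2.0, theta=1.0, lead=3)   # 2.0 + 0.6 = 2.6
f_stat = ar1_trend_forecast(y, lambda t: 2.0, theta=0.5, lead=3)   # 2.0 + 0.125*0.6
```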

8.
A new clustered correlation multivariate generalized autoregressive conditional heteroskedasticity (CC‐MGARCH) model that allows conditional correlations to form clusters is proposed. This model generalizes the time‐varying correlation structure of Tse and Tsui (2002, Journal of Business and Economic Statistics 20 : 351–361) by classifying the correlations among the series into groups. To estimate the proposed model, Markov chain Monte Carlo methods are adopted. Two efficient sampling schemes for drawing discrete indicators are also developed. Simulations show that these efficient sampling schemes can lead to substantial savings in computation time in Monte Carlo procedures involving discrete indicators. Empirical examples using stock market and exchange rate data are presented in which two‐cluster and three‐cluster models are selected using posterior probabilities. This implies that the conditional correlation equation is likely to be governed by more than one set of decaying parameters. Copyright © 2011 John Wiley & Sons, Ltd.

9.
We analyse the price movement of the S&P 500 futures market for violations of the efficient market hypothesis on a short-term basis. To assess market inefficiency we construct a model and find that the returns, i.e. the difference in the logarithm of closing prices on consecutive days, exhibit the usual conditional heteroscedasticity behaviour typical of long series of financial data. To account for this non-linear behaviour we scale the returns by a volatility factor which depends on the daily high, low, and closing price. The rescaled series, which may be interpreted as the trend-countertrend component of the time series, is modelled using Box and Jenkins techniques. The resulting model is an ARMA(1,1). The scale factors are assumed to form a time series and are modelled using a semi-non-parametric method which avoids the restrictive assumptions of most ARCH or GARCH models. Using the combined model we perform 1000 simulations of market data, each simulation comprising 250 days (approximately one year). We then formulate a naive trading strategy which is based on the ratio of the one-day-ahead expected return to its one-day-ahead expected conditional standard deviation. The trading strategy has four adjustable parameters which are set to maximize profits for the simulation data. Next, we apply the trading strategy to one year of recent out-of-sample data. Our conclusion is that the S&P 500 futures market exhibits only slight inefficiencies, but that there exist, in principle, better trading strategies which take account of risk than the benchmark strategy of buy-and-hold. We have also constructed a linear model for the return series. Using the linear model, we have simulated returns and determined the optimum values for the adjustable parameters of the trading strategy. In this case, the optimum trading strategy is the same as the benchmark strategy, buy-and-hold. 
Finally, we have compared the profitability of the optimized trading strategy, based on the non-linear model, to three ad hoc trading strategies using the out-of-sample data. The three ad hoc strategies are more profitable than the optimized strategy.
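Two steps of the pipeline above are easy to sketch: rescaling daily returns by a volatility factor, and the naive signal based on the ratio of expected return to expected conditional standard deviation. The Parkinson-style range factor and all numbers below are stand-ins; the paper's exact scale factor from the daily high, low and close is not reproduced here:

```python
import math

# toy daily bars: (high, low, close)
bars = [(101.0, 99.0, 100.5), (102.0, 100.0, 101.0), (101.5, 99.5, 100.0),
        (103.0, 100.5, 102.5), (104.0, 102.0, 103.5)]

closes = [b[2] for b in bars]
# returns: difference in the logarithm of closing prices on consecutive days
returns = [math.log(closes[i + 1] / closes[i]) for i in range(len(closes) - 1)]

# a range-based volatility proxy as the scale factor (illustrative stand-in)
scales = [math.log(h / l) for h, l, _ in bars[1:]]
rescaled = [r / s for r, s in zip(returns, scales)]   # trend-countertrend component

def trade_signal(exp_ret, exp_std, threshold=0.5):
    """Naive rule: go long (+1) / short (-1) when the ratio of one-day-ahead
    expected return to expected conditional std exceeds a threshold."""
    ratio = exp_ret / exp_std
    if ratio > threshold:
        return +1
    if ratio < -threshold:
        return -1
    return 0

sig = trade_signal(0.004, 0.005)   # ratio 0.8 exceeds the threshold -> long
```

In the paper the rescaled series is modelled by an ARMA(1,1) and the scale factors by a semi-non-parametric method; the threshold here is one of the strategy's adjustable parameters.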

10.
We used an enhanced luminescence technique to study the response of rat tissues, such as liver, heart, muscle and blood, to oxidative stress and to determine their antioxidant capacity. As previously found for liver homogenate, the intensity of light emission (E) of tissue homogenates and blood samples, stressed with sodium perborate, is dependent on concentration, and the dose-response curves can be described by the equation E = a·C/exp(b·C). The b value depends on the antioxidant defence capability of the tissues. In fact, it increases when homogenates are supplemented with an antioxidant, and is correlated with tissue antioxidant capacity, evaluated by two previously established methods, both using the same luminescence technique. Our results indicate that the order of antioxidant capacity of the tissues is liver > blood > heart > muscle. The a value depends on the systems catalysing the production of radical species. In fact, it is related to the tissue level of hemoproteins, which are known to act as catalysts in radical production from hydroperoxides. The equation proposed to describe the dose-response relation is simple to handle and permits an immediate connection with the two characteristics of the analysed systems which determine their response to the pro-oxidant treatment. However, the equation which best describes the above relation for all the tissues is E = a·C^γ/exp(b·C), where the parameter γ assumes values smaller than 1 and seems to depend on the relative amounts of tissue hemoproteins and antioxidants. The extension of the analysis to mitochondria shows that mitochondria respond to oxidative stress in a way analogous to the tissues, and that the adherence of the dose-response curve to the course predicted by the equation E = a·C/exp(b·C) is again dependent on hemoprotein content.
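The dose-response equation E = a·C/exp(b·C) rises, peaks, then falls: differentiating gives E'(C) = a·e^(−bC)·(1 − bC), so the maximum sits at C = 1/b with E_max = a/(b·e). A short numerical check (the parameter values are arbitrary illustrations):

```python
import math

def emission(C, a, b):
    """Dose-response curve E = a*C/exp(b*C) for pro-oxidant concentration C:
    `a` scales radical production, `b` reflects antioxidant defence."""
    return a * C / math.exp(b * C)

# the curve peaks at C = 1/b, where E = a/(b*e)
a, b = 10.0, 2.0
peak_C = 1.0 / b
peak_E = emission(peak_C, a, b)
```

A larger b (stronger antioxidant defence) pulls the peak toward lower concentrations and lowers it, consistent with the interpretation of b in the abstract.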

11.
To improve forecasting accuracy and trading performance, this paper proposes a new multi-objective least squares support vector machine with mixture kernels to forecast asset prices. First, a mixture kernel function is introduced to take full advantage of global and local kernel functions, and is adaptively determined by a data-driven procedure. Second, a multi-objective fitness function is proposed by incorporating level forecasting and trading performance, and particle swarm optimization is used to search simultaneously for the optimal model selections of the least squares support vector machine with mixture kernels. Taking CO2 assets as examples, the results show that, compared with popular benchmark models, the proposed model achieves both higher forecasting accuracy and higher trading performance. The advantages of the mixture kernel function and the multi-objective fitness function improve the forecasting ability for asset prices. The findings also show that models with high level-forecasting accuracy do not always deliver high trading performance; in contrast, high directional forecasting accuracy usually translates into high trading performance.
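A minimal sketch of the two core pieces, under stated assumptions: a mixture kernel as a convex combination of a local RBF kernel and a global polynomial kernel, and the standard LS-SVM regression solved as a bordered linear system. The mixing weight, kernel parameters and toy data are illustrative; the paper tunes them by particle swarm optimization, which is omitted here:

```python
import numpy as np

def mixture_kernel(X1, X2, lam=0.5, sigma=1.0, degree=2):
    """Convex mix of a local RBF kernel and a global polynomial kernel."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    rbf = np.exp(-d2 / (2 * sigma**2))
    poly = (X1 @ X2.T + 1.0) ** degree
    return lam * rbf + (1 - lam) * poly

def lssvm_fit(X, y, gamma=10.0, **kw):
    """Solve the LS-SVM dual: [[0, 1'], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    K = mixture_kernel(X, X, **kw)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]            # bias, alphas

def lssvm_predict(X_train, alphas, bias, X_new, **kw):
    return mixture_kernel(X_new, X_train, **kw) @ alphas + bias

rng = np.random.default_rng(5)
X = rng.uniform(-2, 2, (60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=60)
b, al = lssvm_fit(X, y)
pred = lssvm_predict(X, al, b, X)
```

The RBF term captures local wiggles while the polynomial term carries the global shape, which is the intuition behind mixing them.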

12.
13.
We introduce a versatile and robust model that may help policymakers, bond portfolio managers and financial institutions gain insight into the future shape of the yield curve. The Burg model forecasts a 20-day yield curve: it fits a pth-order autoregressive (AR) model to the input signal by minimizing (in the least squares sense) the forward and backward prediction errors while constraining the autoregressive parameters to satisfy the Levinson–Durbin recursion, and then uses an infinite impulse response prediction error filter. Results are striking when the Burg model is compared to the Diebold and Li model: it not only significantly improves accuracy, but its forecast yield curves also stick to the shape of observed yield curves, whether normal, humped, flat or inverted. Copyright © 2016 John Wiley & Sons, Ltd.
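Burg's method itself is compact enough to sketch: at each order, pick the reflection coefficient minimizing the summed forward and backward prediction error, then update the AR coefficients via the Levinson-Durbin recursion. The AR(1) demo data are an illustrative assumption, not yield-curve data:

```python
import numpy as np

def burg(x, order):
    """Burg's method: AR coefficients (convention x_t + a1*x_{t-1} + ... = e_t)
    minimizing forward + backward prediction error, with the coefficients
    updated through the Levinson-Durbin recursion."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    f = x.copy()                      # forward prediction errors
    b = x.copy()                      # backward prediction errors
    a = np.zeros(0)
    for m in range(order):
        fv = f[m + 1:n]
        bv = b[m:n - 1]
        k = -2.0 * np.dot(fv, bv) / (np.dot(fv, fv) + np.dot(bv, bv))
        a = np.concatenate([a + k * a[::-1], [k]])    # Levinson-Durbin update
        f[m + 1:n] = fv + k * bv                      # update prediction errors
        b[m + 1:n] = bv + k * fv
    return a

# demo: recover the coefficient of a simulated AR(1) process x_t = 0.9*x_{t-1} + e_t
rng = np.random.default_rng(11)
e = rng.normal(0, 1, 2000)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.9 * x[t - 1] + e[t]
coeffs = burg(x, order=1)
one_step = -coeffs[0] * x[-1]   # forecast from the prediction error filter
```

Iterating the one-step prediction forward is how a multi-day-ahead forecast (such as a 20-day horizon) would be built from the fitted filter.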

14.
The paper develops an oil price forecasting technique which is based on the present value model of rational commodity pricing. The approach suggests shifting the forecasting problem to the marginal convenience yield, which can be derived from the cost‐of‐carry relationship. In a recursive out‐of‐sample analysis, forecast accuracy at horizons within one year is checked by the root mean squared error as well as the mean error and the frequency of a correct direction‐of‐change prediction. For all criteria employed, the proposed forecasting tool outperforms the approach of using futures prices as direct predictors of future spot prices. Vis‐à‐vis the random‐walk model, it does not significantly improve forecast accuracy but provides valuable statements on the direction of change. Copyright © 2007 John Wiley & Sons, Ltd.
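The cost-of-carry step can be sketched as follows: under continuous compounding, F = S·exp((r + w − δ)·T), so the convenience yield is backed out as δ = r + w − ln(F/S)/T. The numbers are illustrative, and day-count/compounding conventions are ignored:

```python
import math

def marginal_convenience_yield(spot, future, r, storage, T):
    """Back out the convenience yield delta from the cost-of-carry relation
    F = S * exp((r + storage - delta) * T), all rates annualized."""
    return r + storage - math.log(future / spot) / T

# hypothetical oil market snapshot: 6-month future barely above spot
delta = marginal_convenience_yield(spot=70.0, future=71.0, r=0.04, storage=0.01, T=0.5)
```

Forecasting delta and then mapping back through the same relation is the shift of the forecasting problem that the abstract describes.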

15.
The ontological model framework provides a rigorous approach to address the question of whether the quantum state is ontic or epistemic. When considering only conventional projective measurements, auxiliary assumptions are always needed to prove the reality of the quantum state in the framework. For example, the Pusey–Barrett–Rudolph theorem is based on an additional preparation independence assumption. In this paper, we give a new proof of ψ-ontology in terms of protective measurements in the ontological model framework. The proof does not rely on auxiliary assumptions, and it also applies to deterministic theories such as the de Broglie–Bohm theory. In addition, we give a simpler argument for ψ-ontology beyond the framework, which is based on protective measurements and a weaker criterion of reality. The argument may be also appealing for those people who favor an anti-realist view of quantum mechanics.

16.
Building on the You–Kaveh model, this paper proposes a new Laplacian operator that incorporates more image information, and improves the discrete form of the Laplacian operator, yielding a new fourth-order partial differential equation. Experimental results show that, compared with the You–Kaveh model, the new model not only removes Gaussian noise more effectively but also produces better visual quality.

17.
Screening for differentially expressed genes is a straightforward approach to study the molecular basis for changes in gene expression. Differential display analysis has been used by investigators in diverse fields of research since it was developed. Differential display has also been the approach of choice to investigate changes in gene expression in response to various biological challenges in invertebrates. We review the application of differential display analysis of gene expression in invertebrates, and provide a specific example using this technique for novel gene discovery in the nematode Caenorhabditis elegans.

18.
Volatility plays a key role in asset and portfolio management and derivatives pricing. As such, accurate measures and good forecasts of volatility are crucial for the implementation and evaluation of asset and derivative pricing models in addition to trading and hedging strategies. However, whilst GARCH models are able to capture the observed clustering effect in asset price volatility in‐sample, they appear to provide relatively poor out‐of‐sample forecasts. Recent research has suggested that this relative failure of GARCH models arises not from a failure of the model but a failure to specify correctly the ‘true volatility’ measure against which forecasting performance is measured. It is argued that the standard approach of using ex post daily squared returns as the measure of ‘true volatility’ includes a large noisy component. An alternative measure for ‘true volatility’ has therefore been suggested, based upon the cumulative squared returns from intra‐day data. This paper implements that technique and reports that, in a dataset of 17 daily exchange rate series, the GARCH model outperforms smoothing and moving average techniques which have been previously identified as providing superior volatility forecasts. Copyright © 2004 John Wiley & Sons, Ltd.
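The alternative "true volatility" measure is simple to compute: sum the squared log returns over the intraday grid instead of squaring the single daily return. A minimal sketch with hypothetical intraday prices:

```python
import math

def realized_variance(intraday_prices):
    """'True volatility' proxy: cumulative squared intraday log returns,
    replacing the noisy single squared daily return."""
    rets = [math.log(intraday_prices[i + 1] / intraday_prices[i])
            for i in range(len(intraday_prices) - 1)]
    return sum(r * r for r in rets)

# hypothetical intraday prices for one trading day
prices = [100.0, 100.4, 99.9, 100.2, 100.6, 100.3]
rv = realized_variance(prices)
daily_sq = math.log(prices[-1] / prices[0]) ** 2   # the noisy daily proxy
```

Here the intraday movement cancels in the open-to-close return, so the daily squared return badly understates the day's variability, which is exactly the noise problem the abstract points to.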

19.
20.
Financial market time series exhibit high degrees of non‐linear variability, and frequently have fractal properties. When the fractal dimension of a time series is non‐integer, this is associated with two features: (1) inhomogeneity—extreme fluctuations at irregular intervals, and (2) scaling symmetries—proportionality relationships between fluctuations over different separation distances. In multivariate systems such as financial markets, fractality is stochastic rather than deterministic, and generally originates as a result of multiplicative interactions. Volatility diffusion models with multiple stochastic factors can generate fractal structures. In some cases, such as exchange rates, the underlying structural equation also gives rise to fractality. Fractal principles can be used to develop forecasting algorithms. The forecasting method that yields the best results here is the state transition‐fitted residual scale ratio (ST‐FRSR) model. A state transition model is used to predict the conditional probability of extreme events. Ratios of rates of change at proximate separation distances are used to parameterize the scaling symmetries. Forecasting experiments are run using intraday exchange rate futures contracts measured at 15‐minute intervals. The overall forecast error is reduced on average by up to 7% and in one instance by nearly a quarter. However, the forecast error during the outlying events is reduced by 39% to 57%. The ST‐FRSR reduces the predictive error primarily by capturing extreme fluctuations more accurately. Copyright © 2004 John Wiley & Sons, Ltd.
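One simple way to parameterize a scaling symmetry, in the spirit of the ratios of rates of change at proximate separation distances: compare mean absolute fluctuations at separations tau and 2·tau, since for a self-affine series F(2τ)/F(τ) ≈ 2^H. The random-walk demo and the helper names are illustrative, not the ST-FRSR model itself:

```python
import math, random

def fluctuation(x, tau):
    """Mean absolute increment of the series at separation tau."""
    return sum(abs(x[t + tau] - x[t]) for t in range(len(x) - tau)) / (len(x) - tau)

def scale_ratio_exponent(x, tau=1):
    """Estimate the scaling exponent H from fluctuations at proximate
    separations: F(2*tau)/F(tau) ~ 2**H."""
    return math.log(fluctuation(x, 2 * tau) / fluctuation(x, tau)) / math.log(2)

# an ordinary Gaussian random walk should scale with H near 0.5
random.seed(9)
walk = [0.0]
for _ in range(20000):
    walk.append(walk[-1] + random.gauss(0, 1))
H = scale_ratio_exponent(walk)
```

Persistent series push H above 0.5 and anti-persistent ones below it, so departures of such ratio-based estimates from 0.5 are one signature of the fractal structure the abstract describes.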


Copyright © Beijing Qinyun Technology Development Co., Ltd. | 京ICP备09084417号