2013-14 Program of Econometrics & Statistics Seminars


Prof. Anurag BANERJEE (Durham Business School, University of Durham, U.K.)
Tuesday, May 27, 2014 at 4.30 p.m. in room N305 (Cergy)

The Endgame

On December 1st, 2009, President Obama announced that U.S. troops would begin leaving Afghanistan in July 2011. Rather than simply waiting for the U.S. troops to withdraw, the Taliban forces responded to the announcement with a surge in attacks, followed by a decline as the withdrawal date approached. To better understand these at first counterintuitive phenomena, this paper addresses the question of how knowing versus concealing the exact length of a strategic interaction changes the optimal equilibrium strategy, by studying a two-player, zero-sum game of known and unknown duration. We show why the equilibrium dynamics is non-stationary under known duration and stationary otherwise. We show that when the duration is known, performance and effort may be increased or impaired depending on the length itself and on the nature of the interaction. We then test the model using data on soccer matches in the major European leagues. Most importantly, we exploit the rule change adopted by FIFA in 1998 requiring referees to publicly disclose the length of the added time at the end of the 90 minutes of play. We study how the rule change has affected the probability of scoring, both over time and across teams' relative performance, and find that it led to a 28% increase in goals scored during added time.
This is a joint seminar with the Economics Department.

Prof. Pieter KROONENBERG (Leiden University, The Netherlands)
Tuesday, April 29, 2014 at 10 a.m. in room N516 (Cergy)

Exploratory Analysis of Multivariate Longitudinal Data 

Longitudinal data are more often than not multivariate, and one can therefore represent such data as a data cube of individuals × variables × time points. To analyse such data, one has the choice of modelling them with stochastic models or carrying out an exploratory analysis in which time is used for interpretation rather than for modelling. In particular, the subject mode is treated as fixed, just like the variables and the occasions. There is a considerable number of exploratory techniques for treating such cubic data sets; in this presentation the emphasis will be on principal component-based models, although a number of associated techniques will be touched upon as well.
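As a minimal illustration of this kind of exploratory, component-based analysis of a data cube (a generic sketch, not the specific three-way models of the talk; the array sizes and names are hypothetical), one can unfold the individuals × variables × time points cube into a wide matrix and apply an ordinary principal component analysis:

import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data cube: 100 individuals x 6 variables x 12 time points.
rng = np.random.default_rng(0)
cube = rng.normal(size=(100, 6, 12))

# Unfold the cube so that each row is one individual and the columns are
# all variable-by-occasion combinations.
wide = cube.reshape(100, 6 * 12)

# Ordinary PCA on the unfolded data; time enters the interpretation through
# the structure of the loadings rather than through a stochastic model.
pca = PCA(n_components=3)
scores = pca.fit_transform(wide)               # component scores per individual
loadings = pca.components_.reshape(3, 6, 12)   # component x variable x occasion
print(pca.explained_variance_ratio_)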


Prof. Allan TIMMERMANN (Rady School of Management, University of California San Diego)
Monday, March 31, 2014 at 3:00 pm, EEE - La Défense, Room 220

Modeling Bond Return Predictability 

We evaluate the economic and statistical evidence on out-of-sample return predictability in the U.S. Treasury bond market across 2-5 year maturities. Our analysis accounts for parameter estimation and model uncertainty, changing parameters, and time-varying volatility. Our results suggest the importance of allowing for parameter uncertainty when generating forecasts, and of stochastic volatility when generating the density forecasts used in economic measures based on expected utility. A three-factor model comprising the Fama-Bliss (1987) forward spread, the Cochrane-Piazzesi (2005) combination of forward rates and the Ludvigson-Ng (2009) macro factor generates notable gains in out-of-sample forecast accuracy compared with a model based on the expectations hypothesis. Moreover, we find that such gains can be converted into higher risk-adjusted portfolio returns. Model combinations are found to perform very well according to both statistical and economic criteria.

This is joint work with Antonio Gargano and David Pettenuzzo.



Prof. Abderrahim TAAMOUTI (University Carlos III of Madrid) 

Wednesday, March 19, 2014 at 12:30 pm, EEE - La Défense, Room 102


Measuring Nonlinear Granger Causality in Mean


We propose model-free measures of Granger causality-in-mean. Unlike existing measures, ours are able to detect and quantify nonlinear Granger causality-in-mean between random variables. The new measures are based on nonparametric regressions and are defined as a logarithmic function of restricted and unrestricted mean square forecast errors. They are easily and consistently estimated by replacing the unknown mean square forecast errors with their nonparametric kernel estimates. We establish the asymptotic normality of the nonparametric estimator of the causality measures, which we use to build tests of their statistical significance. A desirable property of these tests is that they have nontrivial power against square-root-T local alternatives, where T is the sample size. Monte Carlo simulations reveal that the tests behave very well and have good finite-sample size and power properties for a variety of typical data generating processes and different sample sizes. Moreover, since testing that the value of the measure equals zero is equivalent to testing for non-causality-in-mean, we consider an additional simulation exercise to compare the empirical size and power of our tests with those of the nonparametric test of Granger causality-in-mean proposed by Nishiyama et al. (2011). Simulation results indicate that our tests have comparable size but better power than the test of Nishiyama et al. (2011). We also establish the validity of a smoothed local bootstrap that can be used in finite-sample settings to perform the statistical tests.
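As a hedged sketch of the type of measure described above, in the spirit of Geweke-style causality measures (the exact definition and notation in the paper may differ), the causality-in-mean of X for Y can be written as

\[
C_{X \to Y} \;=\; \ln\!\left[\frac{\mathrm{MSFE}\big(\mathbb{E}[Y_{t+1}\mid \mathcal{I}_Y(t)]\big)}{\mathrm{MSFE}\big(\mathbb{E}[Y_{t+1}\mid \mathcal{I}_Y(t),\,\mathcal{I}_X(t)]\big)}\right],
\]

where \mathcal{I}_Y(t) and \mathcal{I}_X(t) denote the past of Y and X, the conditional means are estimated by nonparametric kernel regression, and C_{X \to Y} = 0 corresponds to non-causality-in-mean from X to Y.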

Finally, the empirical importance of measuring nonlinear causality-in-mean is illustrated. We quantify the degree of nonlinear predictability of the equity risk premium using the variance risk premium. Our empirical results show that the variance risk premium is a very good predictor of the equity risk premium at horizons of less than six months. We find that there is a high degree of predictability at the one-month horizon, which can be attributed to a nonlinear causal effect.




Prof. Rajendra BHANSALI (University of Liverpool)
Wednesday, March 5, 2014 at 12:30 pm, EEE - La Défense, Room 203

Long Memory Properties Of Chaotic Intermittency Maps


Time series exhibiting long memory, in the sense of slowly decaying correlations, occur in many different fields of application; for example, telecom network traffic and the volatility of financial returns. However, the established stochastic models for characterizing the slow decay of correlations are typically linear and based on a Gaussian assumption. Thus, the popular ARFIMA family of models specifies that the time series obtained after fractional differencing follows a standard ARMA model. There has lately been much development in Dynamical Systems Theory on chaotic intermittency maps, which also realize long-range dependence. The talk will provide a review of some recent work on chaotic intermittency maps.
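For reference, the ARFIMA(p, d, q) specification mentioned above, with the fractional difference applied before a standard ARMA filter, can be written as

\[
\phi(L)\,(1-L)^{d}\,(X_t-\mu) \;=\; \theta(L)\,\varepsilon_t , \qquad 0 < d < \tfrac{1}{2},
\]

where L is the lag operator and \phi(L), \theta(L) are the usual AR and MA polynomials; the range 0 < d < 1/2 gives stationary long memory. A canonical chaotic intermittency map with slowly decaying correlations, cited here only as an illustration and not necessarily the specification studied in the paper, is the Pomeau-Manneville map x_{t+1} = x_t + x_t^{1+\alpha} (mod 1).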

The talk is based on the paper: R.J. Bhansali and M.P. Holland (2007), Frequency Analysis of Chaotic Intermittency Maps with Slowly Decaying Correlations.



Prof. Serge DAROLLES (Université Paris Dauphine)
Wednesday, February 19, 2014 at 12:30 pm, EEE - La Défense, Room 138

Liquidity Risk Estimation in Conditional Volatility


Until recently, the liquidity of financial assets has typically been viewed as a second-order consideration in the asset-management industry. Liquidity was frequently associated with simple transaction costs that have little effect on asset prices, temporary if any, and whose shocks could easily be diversified away. Yet the evidence, especially the recent liquidity crisis, suggests that liquidity is now a primary concern. This paper proposes a static liquidity risk measure that leads to a better evaluation of this risk by distinguishing market volatility shocks, which have persistent effects, from liquidity shocks, which have temporary effects. This approach allows liquidity risk to be isolated even when volumes are not observed.



Thierry RONCALLI (Head of Research & Development, Lyxor Asset Management, and Evry University)

Wednesday, February 5, 2014 at 12:30 pm, EEE - La Défense, Room 138


Improving the Efficiency of the European ETF Market: Implications for Regulators, Providers and Investors

Exchange traded funds (or ETFs) are mutual funds that are traded on an exchange. They generally replicate the performance of an index, which explains why index-tracking ETFs represent a large part of the assets. They allow investors to gain exposure to several asset classes on a real-time basis, meaning that shares in an ETF can be bought and sold through a broker-dealer like stocks. This ability to provide intraday exposure explains the remarkable growth of the ETF market over the past ten years. According to ETFGI (2014), the global ETF industry comprised 3,594 ETFs with assets of $2,254 billion at the end of 2013.

However, this success must be qualified, especially in Europe. First, replicating the performance of an index does not mean that there is no risk with respect to that index. Second, liquidity in the ETF market is a fuzzy notion.

In this presentation, we define a framework to compute the efficiency of an ETF. This allows us to measure the risk of the ETF with respect to the replicated index. We then consider the liquidity issue on an intraday basis. The main result is that liquidity among ETF providers is heterogeneous. The fragmentation of the ETF market, the cross-listing of ETFs and the low transparency of the European ETF market are certainly the most important factors explaining this situation. In the last part of the talk, we propose different ways to improve the efficiency of the European ETF market.

The talk is based on the following papers: Hassine M. and Roncalli T. (2013), Measuring Performance of Exchange Traded Funds, Journal of Index Investing; and Roncalli T. and Zheng B. (2014), Measuring the Liquidity of ETFs: An Application to the European Market, SSRN, forthcoming.



Prof. Tolga CENESIZOGLU (HEC Montreal)

Wednesday, January 22, 2014 at 12:30 pm, EEE - La Défense, Room 103


Return Decomposition over the Business Cycle


Campbell and Shiller (1988) have suggested decomposing unexpected stock returns into unexpected changes in investors' beliefs about future cash flows (cash flow news) and discount rates (discount rate news) to analyze the relative importance of each component in determining the observed variation in stock prices. Based on a generalization of this approach to a framework with regime-switching parameters and variances, we analyze the decomposition of the unconditional and conditional variances of returns on the S&P 500 index over the business cycle. In expansions, the conditional variance of cash flow news is almost always higher than that of discount rate news and thus contributes more to the conditional variance of returns. In recessions, the conditional variances of both cash flow and discount rate news (as well as their conditional covariance) increase, but the conditional variance of discount rate news increases more than that of cash flow news and thus contributes more to the conditional variance of returns. Given that the economy is often in an expansion period, this implies that cash flow news is more important than discount rate news in determining the unconditional variance of returns, in contrast to the standard Campbell and Shiller approach. We show that these results are broadly consistent with the implications of a stylized asset pricing model in which the growth rates of dividends and consumption take on different values depending on the underlying state of the economy.
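For reference, the Campbell-Shiller (1988) identity underlying the decomposition expresses the unexpected return as cash flow news minus discount rate news,

\[
r_{t+1} - \mathbb{E}_t r_{t+1}
\;=\; (\mathbb{E}_{t+1}-\mathbb{E}_t)\sum_{j=0}^{\infty}\rho^{\,j}\,\Delta d_{t+1+j}
\;-\; (\mathbb{E}_{t+1}-\mathbb{E}_t)\sum_{j=1}^{\infty}\rho^{\,j}\,r_{t+1+j}
\;\equiv\; N_{CF,t+1} - N_{DR,t+1},
\]

where \rho is the log-linearization constant, so that Var(r_{t+1} - \mathbb{E}_t r_{t+1}) = Var(N_{CF}) + Var(N_{DR}) - 2 Cov(N_{CF}, N_{DR}); the talk studies how these terms vary across expansion and recession regimes.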



Prof. Guillaume CHEVILLON (ESSEC Business School)

Wednesday, January 8, 2014 at 12:30 pm, EEE - La Défense, Room 103


Detecting and Forecasting Large Deviations and Bubbles in a Near-Explosive Random Coefficient Model


A near-explosive random-coefficient autoregressive model is proposed for asset pricing. It accommodates both the fundamental asset value and the recurrent presence of autonomous deviations or bubbles. Such a process can be stationary with or without fat tails, unit-root non-stationary, or exhibit temporary exponential growth. The methodology we adopt allows us to detect the presence of bubbles and to establish probability statements on their emergence and collapse. As an illustration, we study the dynamics of the Case-Shiller index of U.S. house prices. Focusing in particular on the change in the price level, we provide an early detection device for turning points of booms and busts in the housing market.
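As an illustrative sketch only (the notation below is generic, not the authors'), a random-coefficient AR(1) in the near-explosive region can be written as

\[
y_t \;=\; \rho_t\, y_{t-1} + \varepsilon_t, \qquad \rho_t \;=\; \rho + b_t,
\]

with \rho close to one, b_t a zero-mean random coefficient and \varepsilon_t an innovation; spells during which \rho_t exceeds unity produce the temporary exponential growth that the abstract associates with bubbles, while the process can otherwise remain stationary (with or without fat tails) or unit-root non-stationary.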

This is joint work with A. Banerjee (Durham Business School, UK) and M. Kratz (ESSEC Business School, CREAR).



Prof. David VEREDAS (ULB, Solvay Brussels School of Economics and Management - ECARES)

Thursday, November 7, 2013 at 12:30 pm, room N231 (Le Club) 


Systemic Risk: Liquidity Risk, Governance and Financial Stability, Forthcoming 


To measure systemic risk in financial markets, and to rank systemically important financial institutions (SIFIs), we propose a methodology based on the Google PageRank algorithm. We understand the economic system as interconnected risk shocks of firms in both the financial sector and the real economy. By taking into account both sectors, we demonstrate the efficacy of intervention programs, such as the TARP, as circuit breakers in the propagation of crises - something not evident in applications that address only financial firms.

Keywords: Systemic risk, ranking, financial institutions
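The abstract names the Google PageRank algorithm; the following is a minimal, generic power-iteration sketch on a hypothetical firm-to-firm risk-spillover network, not the authors' exact specification (the adjacency matrix and damping factor are illustrative assumptions).

import numpy as np

# Hypothetical directed spillover network: entry (i, j) > 0 means that a
# risk shock to firm j propagates to firm i with that intensity.
A = np.array([[0.0, 0.5, 0.2, 0.0],
              [0.3, 0.0, 0.3, 0.4],
              [0.1, 0.2, 0.0, 0.6],
              [0.6, 0.3, 0.5, 0.0]])

# Column-normalize so that each column spreads a unit shock across firms.
M = A / A.sum(axis=0, keepdims=True)

n = M.shape[0]
d = 0.85                      # damping factor, as in standard PageRank
r = np.full(n, 1.0 / n)       # start from a uniform ranking

for _ in range(100):          # power iteration
    r = d * (M @ r) + (1.0 - d) / n

# Higher scores flag the more systemically important firms in this toy network.
print(np.argsort(r)[::-1], r)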



George A. MARCOULIDES (University of California, Riverside)

Thursday, February 14th, 2013



Automated Structural Equation Modeling Strategies
 
There is currently tremendous interest within the structural equation modeling community in developing and applying automated strategies for the analysis of data. Some researchers refer to such automated strategies as “discovering structure in data” or “learning from data”, while others simply call them “data mining”. Structural equation modeling (SEM) has for several decades enjoyed widespread popularity in a variety of areas as one of the fastest growing and dominant multivariate statistical techniques. A major reason for this popularity is that SEM permits researchers to study complex multivariate relationships among observed and latent variables whereby both direct and indirect effects can be evaluated. Although in principle researchers should fully specify and deductively hypothesize a specific model prior to data collection and testing, in practice this is often not possible, either because a theory is poorly formulated or perhaps even nonexistent. Consequently, another aspect of SEM is the exploratory mode, in which theory development can occur. Fitting a model to empirical data can be difficult, particularly when the number of variables is large. This presentation will introduce some new automated SEM strategies and elaborate on some conceptual and methodological details related to their application in a variety of settings and situations.


 

Andreas PICK (Erasmus University Rotterdam)

Tuesday, June 19th, 2012


Optimal Forecasts in the Presence of Structural Breaks

This paper considers the problem of forecasting under continuous and discrete structural breaks and proposes weighting observations to obtain optimal forecasts in the MSFE sense. We derive optimal weights for continuous and discrete break processes. Under continuous breaks, our approach recovers exponential smoothing weights. Under discrete breaks, we provide analytical expressions for the weights in models with a single regressor and asymptotically for larger models. It is shown that in these cases the value of the optimal weight is the same across observations within a given regime and differs only across regimes. In practice, where information on structural breaks is uncertain, a forecasting procedure based on robust weights is proposed. Monte Carlo experiments and an empirical application to the predictive power of the yield curve analyze the performance of our approach relative to other forecasting methods.
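As a minimal illustration of observation weighting (not the paper's derived optimal weights), the snippet below applies exponentially decaying weights, the special case the abstract says is recovered under continuous breaks; the series and the decay parameter lam are hypothetical.

import numpy as np

# Hypothetical series with a shift in mean; the goal is a one-step-ahead
# forecast that down-weights older observations.
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 1.0, 150),
                    rng.normal(1.5, 1.0, 50)])

T = len(y)
lam = 0.95                                # hypothetical decay parameter
w = lam ** np.arange(T - 1, -1, -1)       # weight lam^(T-1-t) on observation t
w /= w.sum()                              # normalize the weights to sum to one

forecast = np.sum(w * y)                  # exponentially weighted mean forecast
print(forecast)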


Shubho DAS (Indian Institute of Management Bangalore, India)

Friday, May 25th, 2012


Breaking the Barrier of Likert Scale in Basic Statistical Inference

 
Survey data are often collected on a Likert scale. While purists argue that the resulting data are only ordinal, in practice statistical tools are widely and indiscriminately used as if the data were on an interval or ratio scale. In this work, we adopt a middle path, treating the data as fuzzy, and examine the validity or inaccuracy of the resulting statistical inference. Of particular focus are three basic parameters, namely the mean, the variance and the proportion, considering either a single population or comparisons across two populations.



Xavier BRY (Université de Montpellier 2)

Friday, March 16th, 2012


THEME-SEER: a multidimensional exploratory technique to analyze a structural model using an extended covariance criterion

 

We present a new path-modeling technique based on an extended multiple covariance criterion: System Extended Multiple Covariance (SEMC). SEMC is suitable for measuring the quality of any structural equation system. We show why SEMC may be preferred to criteria based on the usual covariance of components, and also to criteria based on Residual Sums of Squares (RSS). Maximizing SEMC is not straightforward. We give a fast pursuit algorithm ensuring that the criterion increases and converges. When one wishes to extract more than one component per variable group, a problem of component hierarchy arises. To solve it, we define a local nesting principle of component models that makes the role of each component statistically clear. We then embed the pursuit algorithm in a more general algorithm that extracts sequences of locally nested models. We finally provide a specific component backward selection strategy.




Shubho DAS (Indian Institute of Management Bangalore)

Wednesday, April 20th, 2011 at 2 pm, Nautile # N305 - Cergy Campus


On a Few Measurement Problems in Sports

 

The application of statistical and OR methods in the domain of sports has been on a steady rise, leading to academic conferences and journals focusing exclusively on this domain. In this talk, we discuss approaches to solving a couple of such problems. The up-to-date position of competing teams, based on the points obtained by them in the middle of any round-robin (stage of a) tournament, may inadequately reflect their actual relative position because of the strength of the opposition faced up to that stage. To help followers of the game, as well as to possibly help the teams strategize, a simple probability-matrix-based approach, followed by computation of the expected points, may easily bring clarity to the situation. While an unstructured or unconstrained way of updating these probabilities at successive stages of the tournament, reflecting an individual perspective, may be acceptable, being ad hoc it also suffers from arbitrariness and may lack consistency. In that context, we explore how a model-based Bayesian adaptation can work effectively. In cricket, the batting average has always been used as the primary measure of a batsman's performance. But the traditional batting average exhibits serious limitations in reflecting the true performance of a batsman in light of not-out innings. Treating not-outs as censored data, an adaptation of the Kaplan-Meier estimator provides a more reasonable solution, but it still suffers from both conceptual and operational problems in certain situations. A generalized class of geometric distributions (GGD) is proposed in this work to model the runs scored by individual batsmen, with the generalization coming in the form of the hazard of getting out changing from one score to another. We treat the change points as known or specified parameters and derive general expressions for the restricted maximum likelihood estimators of the hazard rates under the generalized structure considered. Given the domain context, we propose and test ten different variations of the GGD model and carry out tests across the nested models using the asymptotic distribution of the likelihood ratio statistic. We propose two alternative approaches for improved estimation of the batting average on the basis of the above modeling.
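As a hedged sketch of the Kaplan-Meier idea for not-out innings (the scores below and the use of the lifelines library are illustrative assumptions, not the speaker's implementation), not-outs are treated as right-censored observations of a batsman's score:

from lifelines import KaplanMeierFitter

# Hypothetical innings: runs scored, and whether the batsman was dismissed (1)
# or remained not out (0), i.e. the score is right-censored.
runs      = [12, 45, 0, 88, 33, 7, 120, 54, 19, 63]
dismissed = [1,  1,  1, 0,  1,  1, 0,   1,  1,  0]

kmf = KaplanMeierFitter()
kmf.fit(durations=runs, event_observed=dismissed)

# Survival function: estimated probability of reaching at least a given score.
print(kmf.survival_function_.head())

# Median score implied by the survival curve, one simple censoring-adjusted
# summary that can replace the naive batting average.
print(kmf.median_survival_time_)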



Eric SOUTIF (CNAM, Paris)
Wednesday, December 15th, 2010 at 11:15 am, Nautile # N305 - Cergy Campus

A framework to solve quadratic optimization problems with integer decision variables - An application to portfolio management

 

This presentation is devoted to the solution of quadratic optimization problems with integer decision variables. A financial application of these optimization problems is portfolio management, which attempts to maximize portfolio expected return and minimize portfolio risk by carefully choosing the various assets that constitute the portfolio. Our study concerns convex integer quadratic problems where the objective function is non-separable. The aims of this work are twofold. Firstly, we propose a new linearization scheme for non-separable integer quadratic problems. Secondly, we show how to use this linearization to solve the initial problem, and we present numerical results.
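A generic formulation of this kind of problem, with hypothetical notation (not necessarily the one used in the talk), is

\[
\max_{x \in \mathbb{Z}_{+}^{n}} \;\; \mu^{\top} x \;-\; \lambda\, x^{\top} \Sigma\, x
\qquad \text{s.t.} \qquad p^{\top} x \le B,
\]

where x_i is the integer number of units held in asset i, \mu the vector of expected returns, \Sigma the covariance matrix of returns, p the asset prices, B the budget and \lambda > 0 a risk-aversion parameter. The non-separability referred to in the abstract comes from the cross-product terms \Sigma_{ij} x_i x_j for i \ne j in the quadratic part of the objective.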



Anurag Narayan BANERJEE (Durham University, UK)
Wednesday, October 13th, 2010 at 11:15 am, Nautile # N305 - Cergy Campus
  

Informed Momentum Trading versus Uninformed “Naïve” Investors Strategies

 

We construct a zero-net-worth uninformed "naive investor" who uses a random portfolio allocation strategy. We then compare the returns of the momentum strategist to the return distribution of the naive investors. For this purpose we score momentum profits relative to the return quintiles of the naive investors, with scores that are symmetric around the median. The score function thus constructed is invariant and robust to risk factor models. We find that the average scores of the momentum strategies are close to zero (the score of the median) and statistically insignificant over the sample period 1926-2005 and over various sub-sample periods, including those examined in Jegadeesh and Titman (1993, 2001). The findings are robust to sampling or period-specific effects in our simulations, where we randomly select 10 years 100 times.
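A minimal sketch of the scoring idea (the portfolio construction and score values below are illustrative assumptions, not the paper's exact design): random zero-net-worth portfolios generate a benchmark return distribution, and a strategy's return is scored by the quintile it falls into, symmetrically around the median.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical cross-section of monthly stock returns.
n_stocks = 200
stock_returns = rng.normal(0.01, 0.08, n_stocks)

# Naive investors: random zero-net-worth long-short weights with unit gross exposure.
n_naive = 10_000
w = rng.normal(size=(n_naive, n_stocks))
w -= w.mean(axis=1, keepdims=True)          # weights sum to zero (zero net worth)
w /= np.abs(w).sum(axis=1, keepdims=True)   # normalize gross exposure to one
naive_returns = w @ stock_returns

# Quintile breakpoints of the naive return distribution.
breakpoints = np.quantile(naive_returns, [0.2, 0.4, 0.6, 0.8])

def score(strategy_return):
    """Quintile score symmetric around the median: -2, -1, 0, +1, +2."""
    return int(np.searchsorted(breakpoints, strategy_return)) - 2

momentum_return = 0.015                     # hypothetical momentum profit
print(score(momentum_return))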




Claes FORNELL (University of Michigan, USA)
Thursday, September 23rd, 2010 at 11 am - Cergy Campus - seminar organized jointly with the Department of Marketing


The (Enormous) Cumulative Economic Relevance of Customer Satisfaction - A PLS Based Approach


Claes Fornell presents research and results, from an actual stock portfolio, that seem to contradict conventional wisdom. First, it is possible to consistently outperform the stock market. Second, this can be done without taking on more risk. While inconsistent with financial theory, the results are consistent with science in the more general sense: knowledge can be used, in just about any context, to do better than the competition, and it can be used to reduce risk.

In order to gauge the probable future benefit of any corporate economic asset, such as customer satisfaction, it is necessary to examine both contemporaneous and intertemporal effects. The latter have been overlooked in previous studies, but it turns out that they are quite sizable, about the same size as the contemporaneous ones. They are obtained from the power of customer satisfaction portfolio returns to predict future excess market returns. If investors paid more attention to the primary sources of future cash flows, most of which come from repeat consumer spending, aggregate consumer utility would increase and equity markets would benefit from greater efficiency.




Gilbert SAPORTA (CNAM, Paris)
October 14, 2009 at 4:30 pm - room N305

