Sessi TOKPAVI
Publications
"Are ESG Ratings Informative to Forecast Idiosyncratic Risk?" (with C. Boucher, W. le Lann and S. Matton),
Finance, Forthcoming, 2024.
This paper develops a backtesting procedure that evaluates how well ESG ratings help in predicting a company's idiosyncratic risk. Technically, the inference is based on extending the conditional predictive ability test of Giacomini and White (2006) to a panel data setting. We apply our methodology to forecasting the idiosyncratic volatility of stock returns and compare two ESG rating systems, from Sustainalytics and Asset4, across three investment universes (Europe, North America, and the Asia-Pacific region). The results show that the null hypothesis of no informational content in ESG ratings is strongly rejected for firms located in Europe, whereas results appear mixed in the other regions. In most configurations, we find a negative relationship between ESG ratings and idiosyncratic risk, with higher ratings predicting lower levels of idiosyncratic volatility. Furthermore, the predictive accuracy gains are generally higher when assessing the environmental dimension of the ratings. Importantly, applying the test only to firms over which there is a high degree of consensus between the ESG rating agencies leads to higher predictive accuracy gains for all three universes. Beyond providing insights into the accuracy of each of the ESG rating systems, this last result suggests that information gathered from several ESG rating providers should be cross-checked before ESG is integrated into investment processes.
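As an illustration of the inference behind the test, the sketch below implements a plain Giacomini-White-style conditional predictive ability statistic for one-step-ahead forecasts; the panel pooling hinted at in the final comment, and all variable names, are illustrative assumptions rather than the paper's exact construction.

import numpy as np
from scipy import stats

def conditional_predictive_ability_stat(d, h):
    """Giacomini-White-style Wald statistic for one-step-ahead forecasts.
    d : (T,) loss differentials between two competing forecasts
    h : (T, q) instruments known when the forecasts were made
    H0: E[d_t | h_{t-1}] = 0, i.e. neither forecast is conditionally more accurate."""
    Z = h * d[:, None]                          # moment contributions h_{t-1} * d_t
    T, q = Z.shape
    zbar = Z.mean(axis=0)
    S = (Z - zbar).T @ (Z - zbar) / T           # sample covariance of the moments
    stat = T * zbar @ np.linalg.solve(S, zbar)  # Wald statistic, chi2(q) under H0
    return stat, stats.chi2.sf(stat, df=q)

# A panel version in the spirit of the paper would stack firm-level loss
# differentials and instruments over firms and dates before computing the statistic.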
"Conditional mean reversion of financial ratios and the predictability of returns," (with C. Boucher, A. Jasinski),
Journal of International Money and Finance, 137(1-2), 102907, 2023.
While traditional predictive regressions for stock returns using financial ratios have proven empirically valuable at long horizons, evidence of predictability at few-month horizons remains weak. In this paper, building on the empirical regularity that stock returns follow a typical dynamic after a mean reversion occurs in the US Shiller CAPE ratio when the latter is high, we propose a new predictive regression model that exploits this stylized fact. In-sample regressions approximating the occurrence of mean reversion by the smoothed probability from a regime-switching model show the superior predictive power of the new specification at few-month horizons. These results also hold out-of-sample, exploiting the link between the term spread and the credit spread as business cycle variables and the occurrence of mean reversion in the US Shiller CAPE ratio. For instance, the out-of-sample R-squared of the new predictive regression model is ten (four) times larger than that of the traditional predictive model at a 6 (12) month horizon. Our results are robust to the choice of valuation ratio (CAPE, excess CAPE or dividend yield) and country (Canada, Germany and the UK). We also conduct a mean–variance asset allocation exercise which confirms the superiority of the new predictive regression in terms of utility gain.
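A minimal sketch of the kind of augmented specification involved: a predictive regression of future returns on the valuation ratio, its interaction with a mean-reversion probability, and HAC standard errors. The input names and the way the probability is supplied are hypothetical; the paper obtains it as a smoothed probability from a regime-switching model.

import numpy as np
import statsmodels.api as sm

def augmented_predictive_regression(r_fwd, cape, p_mr, hac_lags=6):
    """r_fwd: (T,) future h-month returns; cape: (T,) valuation ratio;
    p_mr: (T,) probability that the ratio is mean-reverting (illustrative input)."""
    X = sm.add_constant(np.column_stack([cape, p_mr, cape * p_mr]))  # interaction term carries the regime-dependent effect
    return sm.OLS(r_fwd, X).fit(cov_type="HAC", cov_kwds={"maxlags": hac_lags})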
"Machine learning for credit Scoring: Improving logistic regression with non-linear decision-tree effects," (with E. Dumitrescu, S. Hué, C. Hurlin),
European Journal of Operational Research, 297(3), 1178-1192, 2022.
In the context of credit scoring, ensemble methods based on decision trees, such as the random forest method, provide better classification performance than standard logistic regression models. However, logistic regression remains the benchmark in the credit risk industry mainly because the lack of interpretability of ensemble methods is incompatible with the requirements of financial regulators. In this paper, we propose a high-performance and interpretable credit scoring method called penalised logistic tree regression (PLTR), which uses information from decision trees to improve the performance of logistic regression. Formally, rules extracted from various short-depth decision trees built with original predictive variables are used as predictors in a penalised logistic regression model. PLTR allows us to capture non-linear effects that can arise in credit scoring data while preserving the intrinsic interpretability of the logistic regression model. Monte Carlo simulations and empirical applications using four real credit default datasets show that PLTR predicts credit risk significantly more accurately than logistic regression and compares competitively to the random forest method.
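A minimal sketch of the PLTR idea under simplifying assumptions (rules from depth-limited trees grown on pairs of predictors, a plain L1 penalty, and generic scikit-learn components); it is a sketch of the technique, not the authors' implementation.

import numpy as np
from itertools import combinations
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

def pltr_fit(X, y, max_depth=2, C=1.0):
    """Extract leaf-membership dummies from short-depth trees grown on pairs of
    predictors, then use them (with the original variables) as inputs to a
    penalised logistic regression."""
    extra = []
    for i, j in combinations(range(X.shape[1]), 2):
        tree = DecisionTreeClassifier(max_depth=max_depth).fit(X[:, [i, j]], y)
        leaves = tree.apply(X[:, [i, j]])                    # leaf index of each observation
        extra.append((leaves[:, None] == np.unique(leaves)[None, :]).astype(float))
    Z = np.hstack([X] + extra)                               # original predictors + rule dummies
    # The L1 penalty selects the relevant rules while keeping a linear, readable score
    return LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(Z, y)

# Scoring new borrowers would require storing the fitted trees to rebuild the same
# rule dummies; this sketch omits that step for brevity.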
"Smart Alpha: active management with unstable and latent factors," (with C. Boucher, A. Jasinski, P. Kouontchou),
Quantitative Finance, 21(6), 893-909, 2021.
Factor investing has attracted increasing interest in the investment industry because purely active and passive solutions have underperformed. Its success depends critically on identifying the relevant factors and timing them well, which is difficult given the sheer zoo of candidate factors and the fact that factors and their loadings are time-varying. We thus propose an investment rule that we call 'Smart Alpha', which avoids betting on a priori factors and focuses instead on an active approach that minimises the exposure of the portfolio to systematic sources of risk while maximising its alpha. In other words, we choose to bet on alphas rather than on alternative betas. Using stocks in the European STOXX 600 universe, we show empirically that the Smart Alpha portfolio dominates many popular European factor investing indexes and smart beta strategies.
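A stylised sketch of that trade-off (not the authors' estimator): latent factor loadings are proxied by the leading principal components of past returns, and weights maximise estimated alpha net of a penalty on exposure to those components. All names, the penalty form and the solver below are illustrative.

import numpy as np
from scipy.optimize import minimize

def smart_alpha_weights(returns, alphas, n_factors=5, gamma=10.0):
    """returns: (T, N) past asset returns; alphas: (N,) estimated alphas."""
    eigval, eigvec = np.linalg.eigh(np.cov(returns, rowvar=False))
    B = eigvec[:, -n_factors:]                    # loadings on the leading latent factors
    N = returns.shape[1]

    def objective(w):                             # maximise alpha, penalise systematic exposure
        return -(w @ alphas) + gamma * np.sum((B.T @ w) ** 2)

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(objective, np.full(N, 1.0 / N),
                   bounds=[(0.0, 1.0)] * N, constraints=cons)  # long-only, fully invested
    return res.x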
"Stocks and bonds: Flight-to-safety for ever?," (with C. Boucher),
Journal of International Money and Finance, 95, 27-43, 2019.
This paper gives new insights into flight-to-safety from stocks to bonds, asking whether the strength of this phenomenon remains the same in the current environment of low yields. The motivation lies in the conjecture that when yields are low, the traditional motives of flight-to-safety (wealth protection, liquidity) may not be sufficient, inducing weaker flight-to-safety events. Empirical applications using data for U.S. government bonds and the S&P 500 index show that, when yields are low, the strength of flight-to-safety from stocks to bonds indeed weakens. This result holds even when controlling for the effects of traditional flight-to-safety factors, including the VIX, the TED spread and the overall level of illiquidity in the stock market. Moreover, we develop a bivariate model of flight-to-safety transfers that measures to what extent the strength of flight-to-safety from stocks to bonds is related to the strength of flight-to-safety from stocks to other safe haven assets (gold and currencies). Results show that when the strength of flight-to-safety from stocks to bonds decreases, the strength of flight-to-safety from stocks to these safe haven assets increases. This result holds only in the low-yield environment, suggesting a kind of substitution effect between safe haven assets, similar to reaching-for-yield behavior.
"Measuring network systemic risk contributions: A leave-one-out approach," (with S. Hué and Y. Lucotte),
Journal of Economic Dynamics and Control, 100, 86-114, 2019.
The aim of this paper is to propose a new network measure of systemic risk contributions that combines the pair-wise Granger causality approach with the leave-one-out concept. This measure is based on a conditional Granger causality test and measures how much the proportion of statistically significant connections in the system changes when a given financial institution is excluded. We analyse the performance of our measure of systemic risk by considering a sample of the largest banks worldwide over the 2003–2018 period. We obtain three important results. First, we show that our measure is able to identify a large number of banks classified as global systemically important banks (G-SIBs) by the Financial Stability Board (FSB). Second, we find that our measure is a robust and statistically significant early-warning indicator of downside returns during the last financial crisis. Finally, we investigate the potential determinants of our measure of systemic risk and find similar results to the existing literature. In particular, our empirical results suggest that the size and the business model of banks are significant drivers of systemic risk.
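An illustrative sketch of the leave-one-out logic, using plain pairwise Granger causality tests as a simplification of the conditional test used in the paper; function names and tuning choices are hypothetical.

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def share_of_significant_links(returns, lags=1, alpha=0.05):
    """returns: (T, N) institution return series. Share of ordered pairs (i, j)
    with a statistically significant Granger-causal link from i to j."""
    N = returns.shape[1]
    links = 0
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            res = grangercausalitytests(returns[:, [j, i]], maxlag=lags)
            links += res[lags][0]["ssr_ftest"][1] < alpha   # p-value of the F-test
    return links / (N * (N - 1))

def leave_one_out_contributions(returns, lags=1, alpha=0.05):
    """Drop each institution in turn; a larger fall in network density signals
    a larger contribution to systemic risk."""
    base = share_of_significant_links(returns, lags, alpha)
    return np.array([base - share_of_significant_links(np.delete(returns, k, axis=1), lags, alpha)
                     for k in range(returns.shape[1])])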
"Quand l'Union Fait la Force: Un Indice de Risque Systémique," (avec P. Kouontchou, B. Maillet et A. Modesto),
Revue Economique, 68, 87-106, 2017.
In the wake of the last severe financial crisis, several systemic risk measures have been proposed to quantify the level of stress in the financial system. In this article, we propose an aggregate index of financial systemic risk based on a so-called "sparse" principal component analysis. This methodology yields an aggregate index that is more parsimonious and more stable over time. Applying the methodology to twelve global systemic risk measures, using data on securities from the US financial market, confirms this property. Moreover, extreme positive movements of the resulting systemic risk index can be viewed as anticipating periods of sharp contraction in economic activity.
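An illustrative sketch of how such an aggregate index can be assembled with sparse principal component analysis, using scikit-learn's generic implementation rather than the exact method of the article; the variable names and penalty level are hypothetical.

import numpy as np
from sklearn.decomposition import SparsePCA
from sklearn.preprocessing import StandardScaler

def aggregate_systemic_risk_index(measures, alpha=1.0):
    """measures: (T, K) panel of K individual systemic risk measures.
    Returns the first sparse principal component as the aggregate index,
    together with its (mostly zero) loadings."""
    X = StandardScaler().fit_transform(measures)            # put measures on a common scale
    spca = SparsePCA(n_components=1, alpha=alpha, random_state=0)
    index = spca.fit_transform(X)[:, 0]
    return index, spca.components_[0]                       # sparsity keeps the index parsimonious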
"A Nonparametric Test for Granger Causality in Distribution with Application to Financial Contagion," (with B. Candelon),
Journal of Business & Economic Statistics, 34(2), 240-253, 2016.
This article introduces a kernel-based nonparametric inferential procedure to test for Granger causality in distribution. This test is a multivariate extension of the kernel-based Granger causality test in tail event. The main advantage of this test is its ability to examine a large number of lags, with higher-order lags discounted. In addition, our test is highly flexible because it can be used to identify Granger causality in specific regions on the distribution supports, such as the center or tails. We prove that the test converges asymptotically to a standard Gaussian distribution under the null hypothesis and thus is free of parameter estimation uncertainty. Monte Carlo simulations illustrate the excellent small sample size and power properties of the test. This new test is applied to a set of European stock markets to analyze spillovers during the recent European crisis and to distinguish contagion from interdependence effects.
"Forecasting High-Frequency Risk Measures," (with D. Banulescu, G. Colletaz and C. Hurlin), Journal of Forecasting, 35(3), 224-249, 2016.
This article proposes intraday high-frequency risk (HFR) measures for market risk in the case of irregularly spaced high-frequency data. In this context, we distinguish three concepts of value-at-risk (VaR): the total VaR, the marginal (or per-time-unit) VaR and the instantaneous VaR. Since market risk is obviously related to the duration between two consecutive trades, these measures are complemented with a duration risk measure, i.e. the time-at-risk (TaR). We propose a forecasting procedure for VaR and TaR for each trade or other market microstructure event. Subsequently, we perform a backtesting procedure specifically designed to assess the validity of the VaR and TaR forecasts on irregularly spaced data. The performance of the HFR measures is illustrated in an empirical application for two stocks (Bank of America and Microsoft) and an exchange-traded fund based on the Standard & Poor's 500 index. We show that the intraday HFR forecasts accurately capture the volatility and duration dynamics for these three assets.
"Global Minimum Variance Portfolio under some Model Risk: A robust Regression-based Approach," (with B. Maillet and B. Vaucher),
European Journal of Operational Research, 244, 289-299, 2015.
The global minimum variance portfolio computed using the sample covariance matrix is known to be negatively affected by parameter uncertainty, an important component of model risk. Using a robust approach, we introduce a portfolio rule for investors who wish to invest in the global minimum variance portfolio due to its strong historical track record, but seek a rule that is robust to parameter uncertainty. Our robust portfolio corresponds theoretically to the global minimum variance portfolio in the worst-case scenario, with respect to a set of plausible alternative estimators of the covariance matrix, in the neighbourhood of the sample covariance matrix. Hence, it provides protection against errors in the reference sample covariance matrix. Monte Carlo simulations illustrate the dominance of the robust portfolio over its non-robust counterpart, in terms of portfolio stability, variance and risk-adjusted returns. Empirically, we compare the out-of-sample performance of the robust portfolio to various competing minimum variance portfolio rules in the literature. We observe that the robust portfolio often has lower turnover and variance and higher Sharpe ratios than the competing minimum variance portfolios.
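For reference, the sample-based global minimum variance weights have a simple closed form; the sketch below computes them and, purely as a stand-in for a more robust estimator (not the worst-case construction of the paper), swaps in the Ledoit-Wolf shrinkage covariance.

import numpy as np
from sklearn.covariance import LedoitWolf

def gmvp_weights(sigma):
    """w = Sigma^{-1} 1 / (1' Sigma^{-1} 1): the fully invested minimum variance portfolio."""
    ones = np.ones(sigma.shape[0])
    x = np.linalg.solve(sigma, ones)
    return x / (ones @ x)

def gmvp_sample_vs_shrinkage(returns):
    """returns: (T, N) asset returns; compare weights from the sample covariance with
    those from a shrinkage covariance used here as an illustrative robust estimator."""
    sample_w = gmvp_weights(np.cov(returns, rowvar=False))
    robust_w = gmvp_weights(LedoitWolf().fit(returns).covariance_)
    return sample_w, robust_w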
"Testing for Granger-Causality in Distribution Tails: An Application to Oil Markets Integration," (with B. Candelon and M. Joëts),
Economic Modelling, 31, 276-285, 2013.
This paper proposes an original procedure that allows testing for Granger-causality at multiple risk levels across tail distributions, hence extending the procedure proposed by Hong et al. (2009). Asymptotic and finite sample properties of the test are considered. This new Granger-causality framework is applied to a set of regional oil market series. It helps to tackle two main questions: 1) whether oil markets are more or less integrated during periods of extreme energy price movements, and 2) whether price-setter markets change during such periods. Our findings indicate that the integration level between crude oil markets tends to decrease during extreme periods and that price-setter markets also change. Such results have policy implications and stress the importance of an active energy policy during episodes of extreme movements.
"Sampling Error and Double Shrinkage Estimation of Minimum Variance Portfolios," (with B. Candelon and C. Hurlin), Journal of Empirical Finance, 19, 511-527, 2012.
Shrinkage estimators of the covariance matrix are known to improve the stability over time of the Global Minimum Variance Portfolio (GMVP), as they are less error-prone. However, the improvement over the empirical covariance matrix is not optimal for small values of n, the estimation sample size. For typical asset allocation problems with n small, this paper proposes a new method to further reduce sampling error by shrinking once again traditional shrinkage estimators of the GMVP. First, we show analytically that the weights of any GMVP can be shrunk, within the framework of ridge regression, towards those of the equally weighted portfolio in order to reduce sampling error. Second, Monte Carlo simulations and empirical applications show that applying our methodology to the GMVP based on shrinkage estimators of the covariance matrix leads to more stable portfolio weights, sharp decreases in portfolio turnover, and often statistically lower (resp. higher) out-of-sample variances (resp. Sharpe ratios). These results illustrate that double shrinkage estimation of the GMVP can be beneficial for realistic small estimation sample sizes.
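A minimal sketch of the ridge idea under the simplest reading of the abstract: with an equally weighted target, the penalised GMVP problem reduces to computing GMVP weights on Sigma + lambda*I, which move smoothly toward 1/N as the penalty grows. The choice of the penalty and the paper's exact estimator are not reproduced here.

import numpy as np

def ridge_gmvp_weights(sigma, lam):
    """Solve min_w w' Sigma w + lam * ||w - w_ew||^2  subject to  1'w = 1.
    With an equal-weight target the solution is the GMVP of (Sigma + lam * I)."""
    n = sigma.shape[0]
    ones = np.ones(n)
    x = np.linalg.solve(sigma + lam * np.eye(n), ones)
    return x / (ones @ x)

# lam = 0 recovers the usual sample GMVP; a very large lam returns weights close to 1/N.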
"Backtesting Value-at-Risk: A GMM Duration-Based Test," (with B. Candelon, G. Colletaz and C. Hurlin), Journal of Financial Econometrics, 9(2), 314-343, 2011.
This paper proposes a new duration-based backtesting procedure for value-at-risk (VaR) forecasts. The GMM test framework proposed by Bontemps (2006) to test for distributional assumptions (here, the geometric distribution) is applied to the case of VaR forecast validity. Using a simple J-statistic based on the moments defined by the orthonormal polynomials associated with the geometric distribution, this new approach tackles most of the drawbacks usually associated with duration-based backtesting procedures. An empirical application to Nasdaq returns confirms that using the GMM test has major consequences for the ex-post evaluation of risk by regulatory authorities.
"Une Evaluation des Procédures de Backtesting: Tout va pour le mieux dans le meilleur des mondes," (avec C. Hurlin)
Finance, 29(1), 53-80, 2008.
In this article, we propose an original approach to evaluating the ability of standard backtesting tests to discriminate between different Value-at-Risk (VaR) forecasts that do not provide the same ex-ante assessment of risk. Our results show that, for a given asset, these tests very often fail to reject the validity, in the sense of conditional coverage, of most of the six VaR forecasts under study, even though the latter differ substantially. In other words, any VaR forecast has a good chance of being validated by this type of procedure.
"Backtesting Value-at-Risk accuracy: a simple new test," (with C. Hurlin), The Journal of Risk, 9(2), 19-37, 2007.
This paper proposes a new test of value-at-risk (VaR) validation. Our test exploits the idea that the sequence of VaR violations (the hit function), taking the value 1 − α if there is a violation and −α otherwise, for a nominal coverage rate α, verifies the properties of a martingale difference if the model used to quantify risk is adequate (Berkowitz et al., 2005). More precisely, we use the multivariate portmanteau statistic of Li and McLeod (1981), an extension to the multivariate framework of the test of Box and Pierce (1970), to jointly test the absence of autocorrelation in the vector of hit sequences for various coverage rates considered relevant for the management of extreme risks. We show that this shift to a multivariate dimension appreciably improves the power properties of the VaR validation test for reasonable sample sizes.
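A simplified sketch of the ingredients: centred hit sequences for several coverage rates and a Li-McLeod-type multivariate portmanteau statistic. The exact statistic and degrees-of-freedom adjustments of the paper are not reproduced, and all names are illustrative.

import numpy as np
from scipy import stats

def hit_matrix(returns, var_forecasts, alphas):
    """Centred violation indicators: 1 - alpha on a violation, -alpha otherwise.
    returns: (T,); var_forecasts: (T, K) positive VaR forecasts for coverage rates alphas."""
    violations = (returns[:, None] < -var_forecasts).astype(float)
    return violations - np.asarray(alphas)[None, :]

def multivariate_portmanteau(H, m=5):
    """Li-McLeod-type statistic testing that the (T, K) hit matrix H is multivariate
    white noise up to lag m; asymptotically chi-squared with K^2 * m degrees of freedom."""
    T, K = H.shape
    Hc = H - H.mean(axis=0)
    C0_inv = np.linalg.inv(Hc.T @ Hc / T)
    q = 0.0
    for j in range(1, m + 1):
        Cj = Hc[j:].T @ Hc[:-j] / T              # lag-j cross-covariance of the hits
        q += np.trace(Cj.T @ C0_inv @ Cj @ C0_inv)
    stat = T * q + K**2 * m * (m + 1) / (2 * T)
    return stat, stats.chi2.sf(stat, df=K**2 * m)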
Editorial Introduction
"Comovement and Contagion in Financial Markets," (with C. Kyrtsou and V. Mignon), International Review of Financial Analysis, 33, 2014.
Working papers/Works in progress
"Testing for Extreme Volatility Transmission with Realized Volatility Measures," (with C. Boucher, G. de Truchis and E. Dumitrescu), Working paper,
EconomiX, 2017-20.
"Measuring Network Systemic Risk Contributions: A Leave-one-out Approach," (with S. Hué and Y. Lucotte), SSRN Working paper.
"Credit Scoring: Improving Logistic Regression with Non Linear Decision Tree Effects," (with E. Dumitrescu, S. Hué and C. Hurlin), in progress.
"Smart Alpha: a Post Factor Investing Paragdim," (with C. Boucher, A. Jasinski and P. Kouontchou), in progress.
"Backtesting Expected Shortfall: a Model-Free Approach," (with F. T. Doko), in progress.
"Portfolio Optimization under Full Model Uncertainty: a min-max Opportunity Cost Approach," in progress.
"Testing for the Systemically Important Financial Institutions: a Conditional Approach," EconomiX Working Paper, 2013-27.
"Minimum Variance Portfolio Optimization under Parameter Uncertainty: A Robust Control Approach," (with Bertrand Maillet and Benoit Vaucher)
EconomiX Working Paper, 2013-28.
"Testing for crude oil markets globalization during extreme price movements," (with Bertrand Candelon and Marc Joëts) EconomiX Working Paper, 2012-28.
"Asset Allocation with Aversion to Parameter Uncertainty: A Minimax Regression Approach," EconomiX Working Paper, 2011-01.
"Sampling Error and Double Shrinkage Estimation of Minimum Variance Portfolios," (with Bertrand Candelon and Christophe Hurlin) Working paper,
METEOR, University of Maastricht, RM/11/02, 2010.