Gradient boosting for extreme quantile regression
Wednesday 9th December 2020, 13:00
Sebastian Engelke, University of Geneva
Quantile regression relies on minimizing the conditional quantile loss, which is based on the quantile check function. This has been extended to flexible regression functions such as the quantile regression forest (Meinshausen, 2006) and the gradient forest (Athey et al., 2019). These methods break down if the quantile of interest lies outside the range of the data. Extreme value theory provides the mathematical foundation for the estimation of such extreme quantiles. A common approach is to approximate the distribution of exceedances over a high threshold by the generalized Pareto distribution. For conditional extreme quantiles, one may model the parameters of this distribution as functions of the predictors. Existing methods are either not flexible enough (e.g., linear methods) or do not generalize well in higher dimensions (e.g., kernel-based methods). We develop a new approach based on gradient boosting for extreme quantile regression that estimates the parameters of the generalized Pareto distribution flexibly, even in higher dimensions. We discuss cross-validation for the choice of tuning parameters and show how the importance of the different predictors can be measured. In simulation studies, our estimator outperforms classical quantile regression methods and methods from extreme value theory. We illustrate the approach with an application to forecasting extreme precipitation through statistical post-processing.
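For reference, the following is a minimal sketch of the classical unconditional peaks-over-threshold approach mentioned in the abstract, not the authors' boosting estimator. It assumes only numpy and scipy; the sample, threshold level, and quantile level are illustrative choices. The method discussed in the talk replaces the constant generalized Pareto parameters (sigma, xi) below with predictor-dependent functions fitted by gradient boosting.

    import numpy as np
    from scipy.stats import genpareto, t

    rng = np.random.default_rng(0)
    x = rng.standard_t(df=3, size=5000)   # heavy-tailed sample

    u = np.quantile(x, 0.95)              # high threshold
    exceedances = x[x > u] - u            # peaks over the threshold
    p_u = np.mean(x > u)                  # exceedance probability

    # Approximate the exceedances by a generalized Pareto distribution
    # (location fixed at 0); fit returns (shape xi, loc, scale sigma).
    xi, _, sigma = genpareto.fit(exceedances, floc=0)

    # Extrapolate beyond the range of the data via the standard formula
    # q_alpha = u + (sigma/xi) * (((1 - alpha)/p_u)^(-xi) - 1).
    alpha = 0.9999
    q_alpha = u + (sigma / xi) * (((1 - alpha) / p_u) ** (-xi) - 1)
    print(f"GPD-based {alpha}-quantile estimate: {q_alpha:.2f}")
    print(f"true t(3) quantile for comparison:   {t.ppf(alpha, 3):.2f}")

With alpha = 0.9999, only about 0.5 observations are expected above the target quantile in a sample of 5000, so the empirical quantile is unreliable and the extrapolation from the fitted tail is what makes the estimate possible.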
This is joint work with Jasper Velthoen, Clement Dombry and Juan-Juan Cai.