By Denuit, Michel; Charpentier, Arthur; Trufin, Julien
Reference: Insurance: Mathematics and Economics, 101, pp. 485-497
Publication: Published, 2021-12-15
Peer-reviewed article
Abstract: Boosting techniques and neural networks are particularly effective machine learning methods for insurance pricing. In practice, however, the sum of fitted values can depart substantially from the observed totals. The possible lack of balance when models are trained by minimizing deviance outside the familiar GLM-with-canonical-link setting has been documented in Wüthrich (2019, 2020, 2021). The present paper studies this phenomenon further when learning proceeds by minimizing Tweedie deviance. It is shown that minimizing deviance involves a trade-off between the integral of weighted differences of lower partial moments and the bias measured on a specific scale. Hence, there is no guarantee that the sum of fitted values stays close to the observed totals if the latter bias term is dominated by the former term entering the deviance. Autocalibration is then proposed as a remedy. This new method for correcting bias adds an extra local GLM step to the analysis, with the output of the first step as the only predictor. Theoretically, it is shown that this implements the autocalibration concept in pure premium calculation and ensures that balance also holds on a local scale, not only at portfolio level as with existing bias-correction techniques.
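The correction described in the abstract (a second, local regression step using the first-step prediction as the only predictor) can be illustrated with a minimal numpy sketch. This is not the paper's local GLM: a simple binning-and-averaging step stands in for it, and all names and parameters (`autocalibrate`, `n_bins`, the simulated Poisson portfolio) are illustrative assumptions.

```python
import numpy as np

def autocalibrate(y, first_step_pred, n_bins=10):
    """Crude autocalibration sketch: sort policies by the first-step
    prediction, split them into bins, and replace each prediction by
    the observed mean response in its bin. The bin average is a
    simplified stand-in for the paper's local GLM step."""
    order = np.argsort(first_step_pred)
    calibrated = np.empty_like(first_step_pred, dtype=float)
    for idx in np.array_split(order, n_bins):
        # Local balance: within each bin, fitted values sum to observations.
        calibrated[idx] = y[idx].mean()
    return calibrated

# Toy portfolio: a first-step model that is biased downward by 20%.
rng = np.random.default_rng(0)
mu = rng.uniform(0.05, 0.5, size=10_000)   # true pure premiums (assumed)
y = rng.poisson(mu).astype(float)          # observed claim counts
pred = 0.8 * mu                            # biased first-step fit

cal = autocalibrate(y, pred)
print(abs(pred.sum() - y.sum()))           # large gap before correction
print(abs(cal.sum() - y.sum()))            # gap closed after correction
```

Because each bin's fitted values equal the bin's observed mean, balance holds within every bin (the "local scale" of the abstract), and summing over bins restores balance at portfolio level as well.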