BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//132.216.98.100//NONSGML kigkonsult.se iCalcreator 2.20.4//
BEGIN:VEVENT
UID:20251109T051309EST-6378lSKe9b@132.216.98.100
DTSTAMP:20251109T101309Z
DESCRIPTION:Title: Autocalibration and Tweedie-dominance for Insurance Pricing with Machine Learning.\n\nAbstract: Boosting techniques and neural networks are particularly effective machine learning methods for insurance pricing. In practice\, however\, there are endless debates about the choice of the right loss function to train the machine learning model\, as well as about the appropriate metric to assess the performance of competing models. Moreover\, the sum of fitted values can depart from the observed totals to a large extent\, which often confuses actuarial analysts. The lack of balance inherent in training models by minimizing deviance outside the familiar GLM-with-canonical-link setting has been empirically documented by Wüthrich (2019\, 2020)\, who attributes it to the early stopping rule in gradient descent methods for model fitting. The present paper aims to further study this phenomenon when learning proceeds by minimizing Tweedie deviance. It is shown that minimizing deviance involves a trade-off between the integral of weighted differences of lower partial moments and the bias measured on a specific scale. Autocalibration is then proposed as a remedy. This new method to correct for bias adds an extra local GLM step to the analysis. Theoretically\, it is shown that this step implements the autocalibration concept in pure premium calculation and ensures that balance also holds on a local scale\, not only at portfolio level as with existing bias-correction techniques. The convex order appears to be the natural tool to compare competing models\, shedding new light on the diagnostic graphs and associated metrics proposed by Denuit et al. (2019).\n
DTSTART:20211207T203000Z
DTEND:20211207T213000Z
LOCATION:Room D4-2013\, Campus Principal\, Université de Sherbrooke\, 2500 Boul. de l'Université\, Sherbrooke\, CA
SUMMARY:Arthur Charpentier (UQAM)
URL:/mathstat/channels/event/arthur-charpentier-uqam-335384
END:VEVENT
END:VCALENDAR
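
For readers curious how the autocalibration step described in the abstract might look in practice, here is a minimal illustrative sketch, not taken from the paper: it stands in for the local GLM step with simple within-bin averaging of observed responses grouped by model score, which enforces balance within each score bin and therefore at portfolio level. The synthetic Poisson data, the autocalibrate helper, and the n_bins bandwidth are all assumptions made for illustration.

# Illustrative sketch only -- NOT the authors' exact procedure. It mimics the
# idea of autocalibration from the abstract: after an ML model produces scores
# pi(x), an extra local regression of the observed response y on the score
# replaces each premium by an estimate of E[Y | pi(X)], so that balance holds
# locally (within groups of similar scores), not only at portfolio level.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic portfolio: true pure premium mu(x) and a miscalibrated model score.
n = 10_000
x = rng.uniform(0.0, 1.0, size=n)
mu = np.exp(-1.0 + 2.0 * x)        # true conditional mean
y = rng.poisson(mu)                # observed claim counts (Tweedie, p = 1)
pi = 0.8 * mu ** 1.1               # model score: biased and unbalanced

def autocalibrate(score, response, n_bins=50):
    """Local correction: estimate E[Y | score] by within-bin averaging,
    a crude stand-in for the local GLM step mentioned in the abstract."""
    order = np.argsort(score)
    bins = np.array_split(order, n_bins)          # equal-size score bins
    corrected = np.empty_like(score, dtype=float)
    for idx in bins:
        corrected[idx] = response[idx].mean()     # local balance by design
    return corrected

pi_star = autocalibrate(pi, y)

print(f"observed total : {y.sum():.0f}")
print(f"raw model total: {pi.sum():.0f}")
print(f"autocalibrated : {pi_star.sum():.0f}")

Running the script prints matching observed and autocalibrated totals, while the raw model total drifts away, mirroring the lack of balance the abstract attributes to deviance minimization with early stopping.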