Date/Time: Wednesday, October 25, 12:30 – 14:00
Location: FORTH Central Building - Payatakis Room
Speaker: Matthias Jakobs, Lamarr Institute for ML and AI
Title: TSMS: TreeSHAP Model Selection for Time Series Forecasting
Hosts: Dr. George Tzagkarakis and Prof. Panagiotis Tsakalides, SPL LAB / ICS FORTH
Abstract: Tree-based models have been successfully applied to a wide variety of tasks, including time series forecasting. They are increasingly in demand and widely accepted because of their comparatively high level of interpretability. However, many of them suffer from overfitting, which limits their application in real-world decision-making. This problem becomes even more severe in online forecasting settings, where time series observations are acquired incrementally and the distributions from which they are drawn may keep changing over time. In this context, we propose a novel method for the online selection of tree-based models for time series forecasting using the TreeSHAP explainability method. We start from an arbitrary set of different tree-based models. We then construct a performance-based ranking, designed so that TreeSHAP can specialize the tree-based forecasters across different regions of the input time series. In this framework, model selection is performed online and adapts to drift detected in the time series. In addition, explainability is supported on three levels: online input importance, model selection, and model output explanation.
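For readers unfamiliar with the ingredients, the sketch below is a rough illustration only, not the speaker's TSMS method: it fits two candidate tree-based forecasters on lagged windows of a toy series, ranks them by error on the most recent observations (a stand-in for the performance-based ranking mentioned in the abstract), and uses the shap library's TreeExplainer to attribute the selected model's predictions to individual lags. The toy data, the two candidate models, and the ranking rule are all assumptions made for illustration.

```python
# Minimal sketch (NOT the speaker's TSMS implementation): TreeSHAP attributions
# over lagged inputs for candidate tree-based forecasters, plus a simple
# recent-error ranking as a stand-in for online model selection.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)

# Turn the series into (lagged window -> next value) supervised pairs.
lags = 10
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]
split = 400
X_train, y_train = X[:split], y[:split]
X_recent, y_recent = X[split:], y[split:]

# Arbitrary set of tree-based candidate forecasters (illustrative choices).
candidates = {
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "gbrt": GradientBoostingRegressor(random_state=0),
}
for model in candidates.values():
    model.fit(X_train, y_train)

# Rank candidates by squared error on the most recent observations and pick the best.
errors = {name: float(np.mean((m.predict(X_recent) - y_recent) ** 2))
          for name, m in candidates.items()}
selected = min(errors, key=errors.get)
print("selected forecaster:", selected, errors)

# TreeSHAP attributions of the selected model on the recent windows:
# which lags drive its predictions in the current region of the series.
explainer = shap.TreeExplainer(candidates[selected])
shap_values = explainer.shap_values(X_recent)
print("mean |SHAP| per lag:", np.abs(shap_values).mean(axis=0).round(3))
```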
Short bio: Matthias Jakobs is a PhD student at TU Dortmund University and works as part of the German competence center "Lamarr Institute for Machine Learning and Artificial Intelligence", one of only six research centers in Germany focused on machine learning and AI research. His research revolves around the use of explainable AI methods to explain and improve time series forecasting under concept drift. A special focus of his work is adaptively selecting or ensembling models to achieve more robust predictions. His other research focus is the quantitative evaluation of explanations, specifically Shapley values. He has been a Program Committee member of the "Workshop on Trustworthy Artificial Intelligence" at ECML-PKDD 2022, as well as a co-organiser of the "XAI-TS Workshop" co-located with ECML-PKDD 2023.