‘The overall goal is to combine neural and cognitive learning and reasoning for more human-like AI systems.’
Investigators: Alessandra Mileo, Mark Keane, Ruihai Dong, Barry Smyth, Suzanne Little, Barry O'Sullivan
Insight is advancing research in Explainable AI (XAI) with the goal of equipping AI systems with explanations that are interpretable and trustworthy. We combine fundamental computational work on new XAI algorithms, interdisciplinary approaches drawing on cognitive science, and new methods applied to specific techniques and concrete applications.

Some of our foundational work focuses on post hoc explanation by example, where we are exploring three different explanation strategies (factual, counterfactual and semi-factual) across different data types (Kenny 2021; Keane 2021). This work has been applied to explanation in decision-support systems for farmers, with Teagasc and Accenture, as part of the VistaMilk SFI Research Centre, and to analytics support systems for running (Feely 2020). We have also developed a novel approach to extracting knowledge from a neural network through graph analysis (Horta 2021). The overall goal is to combine neural and cognitive learning and reasoning for more human-like AI systems. This work has attracted a new project with BrainCreators (Amsterdam, NL) and has resulted in invited talks, interdisciplinary seminars and outreach, including a podcast for AI Ireland and an art piece created as part of the Insight Artist in Residency programme.

Other application areas currently being explored include financial forecasting models (Yang 2020) and news recommendation. The links the Insight team has established between explanation in constraint programming and other fields of AI will lead to further cross-disciplinary research (Gupta 2021).
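To illustrate the three explanation strategies, the sketch below retrieves example-based explanations for a classifier's prediction: a factual (the nearest training case with the same outcome), a counterfactual (the nearest case with a different outcome) and a semi-factual (a same-outcome case close to the decision boundary). This is a minimal illustration using a generic scikit-learn classifier and simple distance-based definitions, not the published Insight methods.

```python
# Illustrative example-based post hoc explanation: factual, counterfactual
# and semi-factual retrieval. Definitions here are simplified assumptions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)

def explain_by_example(query):
    """Return (factual, counterfactual, semi-factual) training examples."""
    pred = clf.predict(query.reshape(1, -1))[0]
    same = X[y == pred]    # training cases with the same predicted outcome
    other = X[y != pred]   # training cases with a different outcome

    # Factual: nearest case with the same outcome ("you resemble this case").
    factual = same[np.linalg.norm(same - query, axis=1).argmin()]
    # Counterfactual: nearest case with a different outcome ("had you been
    # like this case, the prediction would have changed").
    counterfactual = other[np.linalg.norm(other - query, axis=1).argmin()]
    # Semi-factual: a same-outcome case close to the counterfactual, i.e.
    # near the boundary ("even if you looked like this, the prediction
    # would not change").
    semi_factual = same[np.linalg.norm(same - counterfactual, axis=1).argmin()]
    return factual, counterfactual, semi_factual

factual, cf, sf = explain_by_example(X[0])
```

In practice the distance metric, the underlying model and the data type (tabular, image, time series) all shape which examples make convincing explanations; the cited work studies these choices in depth.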
Read Dr Alessandra Mileo’s explanation of her work in Silicon Republic.