Pont de l'Europe
Thanks to Thanh Binh Nguyen for the Orléans pictures.


The list of talks is available.

Schedule: 9:00-17:00

Two invited speakers


Program

09:00-09:15 Welcome
09:15-10:15 Ultra-Strong Machine Learning – Comprehensibility of Programs Learned with ILP
Stephen Muggleton, Imperial College London. slides
10:15-10:45 Coffee break
10:45-11:10 Admissible Generalizations of Examples as Rules
Philippe Besnard (1), Thomas Guyet (2), Véronique Masson (3)
(1) CNRS-IRIT, (2) Agrocampus-ouest/IRISA, (3) Univ Rennes, Inria, CNRS, IRISA. slides
11:10-11:35 A brief tour of techniques to interpret machine learning decision rules with applications to health data
M. Chiapino (1, 2), H. Amadou-Boubacar (2), S. Clemencon (1)
(1) Telecom ParisTech, (2) Air Liquide
11:35-12:10 Instance-based Method for Post-hoc Interpretability: a Local Approach
Thibault Laugel, Sorbonne Université. slides
12:10-12:30 Discussion
12:30-14:00 Lunch
14:00-15:00 Interpretability in Machine Learning for Precision Medicine
Jean-Daniel Zucker, Sorbonne Université, IRD, UMMISCO.
15:00-15:30 Coffee break
15:40-16:05 Feature Selection for Unsupervised Domain Adaptation using Optimal Transport
Léo Gautheron (1), Ievgen Redko (1), Carole Lartizien (2)
(1) Univ Lyon, UJM-Saint-Etienne, CNRS, Institut d'Optique Graduate School, Laboratoire Hubert Curien UMR 5516
(2) Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint-Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206
16:05-16:30 Descriptive clustering
Thi Bich Hanh Dao, Christel Vrain, LIFO, Université d'Orléans. slides
16:30-17:00 Discussion

Talks

Ultra-Strong Machine Learning – Comprehensibility of Programs Learned with ILP
Stephen Muggleton, Imperial College London.

During the 1980s Michie defined Machine Learning in terms of two orthogonal axes of performance: predictive accuracy and comprehensibility of generated hypotheses. Since predictive accuracy was readily measurable and comprehensibility was not, later definitions in the 1990s, such as Mitchell’s, tended to use a one-dimensional approach to Machine Learning based solely on predictive accuracy, ultimately favouring statistical over symbolic Machine Learning approaches. In this paper we provide a definition of comprehensibility of hypotheses which can be estimated using human participant trials. We present two sets of experiments testing human comprehensibility of logic programs. In the first experiment we test human comprehensibility with and without predicate invention. Results indicate comprehensibility is affected not only by the complexity of the presented program but also by the existence of anonymous predicate symbols. In the second experiment we directly test whether any state-of-the-art ILP systems are ultra-strong learners in Michie’s sense, and select the Metagol system for use in human trials. Results show participants were not able to learn the relational concept on their own from a set of examples, but they were able to apply the relational definition provided by the ILP system correctly. This implies the existence of a class of relational concepts which are hard for humans to acquire, yet easy to understand given an abstract explanation. We believe improved understanding of this class could have potential relevance to contexts involving human learning, teaching and verbal interaction.
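
Predicate invention, central to the first experiment above, can be hard to picture in the abstract. The following Python sketch is a hypothetical illustration (not from the talk; the data and names, including the helper p_1, are invented): it defines the same relational concept twice, once directly and once through an automatically named helper standing in for an anonymous invented predicate, which is exactly the kind of symbol the abstract says hurts comprehensibility.

# Hypothetical sketch: the same relational concept, defined directly
# and via an "invented" helper with an anonymous, machine-chosen name.

PARENT = {("ann", "bob"), ("bob", "cal"), ("cal", "dee")}  # toy parent/2 facts

def grandparent(x, z):
    # Direct definition: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    return any((x, y) in PARENT and (y, z) in PARENT
               for y in {c for _, c in PARENT})

def p_1(x, y):
    # "Invented" predicate: the learner names it automatically (here it
    # coincides with parent/2); a reader must infer its meaning from usage.
    return (x, y) in PARENT

def grandparent_invented(x, z):
    # Same concept, expressed through the anonymous helper.
    return any(p_1(x, y) and p_1(y, z) for y in {c for _, c in PARENT})

print(grandparent("ann", "cal"))           # True
print(grandparent_invented("ann", "cal"))  # True

Both definitions behave identically; the difference a human reader faces is purely one of naming, which is the comprehensibility effect the experiment measures.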

Interpretability in Machine Learning for Precision Medicine
Jean-Daniel Zucker, Sorbonne Université, IRD, UMMISCO.

Interpretability of machine learning models is becoming an increasing concern, especially when they are used for precision medicine. In the context of new regulations regarding the transparency and fairness of algorithms, the societal and legal pressure for explainable AI models in the medical domain is even stronger. State-of-the-art machine learning algorithms often trade interpretability for accuracy. We will present several precision medicine questions in the field of cardio-metabolic diseases and discuss different interpretable approaches to learning from clinical, Omics and image data.
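
As a concrete, hypothetical illustration of the accuracy-interpretability trade-off mentioned above (not from the talk; the feature names and data below are invented), the Python sketch fits a depth-limited decision tree, one common interpretable-by-design model, on synthetic clinical-style data and prints its rules.

# Hypothetical sketch: a shallow decision tree whose learned rules
# can be read and audited directly, at some cost in accuracy.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Synthetic "clinical" features: columns are [BMI, fasting glucose (mmol/L)].
X = rng.normal(loc=[27.0, 5.5], scale=[4.0, 1.0], size=(200, 2))
# Crude synthetic risk label, used only to generate the toy data.
y = ((X[:, 0] > 30.0) & (X[:, 1] > 6.0)).astype(int)

# Limiting depth caps model complexity, keeping the rule set human-readable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["BMI", "glucose"]))

The printed rules (e.g. threshold tests on BMI and glucose) are the kind of artifact a clinician can inspect, in contrast to a black-box model that would need post-hoc explanation.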


Admissible Generalizations of Examples as Rules

Philippe Besnard (1), Thomas Guyet (2), Véronique Masson (3)
(1) CNRS-IRIT, (2) Agrocampus-ouest/IRISA, (3) Univ Rennes, Inria, CNRS, IRISA

A brief tour of techniques to interpret machine learning decision rules with applications to health data
M. Chiapino (1, 2), H. Amadou-Boubacar (2), S. Clemencon (1)
(1) Telecom ParisTech, (2) Air Liquide

Descriptive clustering
Thi Bich Hanh Dao, Christel Vrain
LIFO, Université d'Orléans

Feature Selection for Unsupervised Domain Adaptation using Optimal Transport
Léo Gautheron(1), Ievgen Redko(1), Carole Lartizien(2)
(1) Univ Lyon, UJM-Saint-Etienne, CNRS, Institut d'Optique Graduate School, Laboratoire Hubert Curien UMR 5516
(2) Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint-Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206

Instance-based Method for Post-hoc Interpretability: a Local Approach
Thibault Laugel
Sorbonne Université