EuADS Summer School – Data Science for Explainable and Trustworthy AI

The European Association for Data Science organises a summer school

Data Science for Explainable and Trustworthy AI

Wednesday June 7th to Friday June 9th 2023 in Kirchberg, Luxembourg.

The summer school is preceded by a public opening event on Tuesday June 6th 2023.

 

Artificial Intelligence (AI) is being applied to an increasing number of domains. The implications of such deployments are heavily discussed – among AI professionals and, increasingly, in the general public. As the discussion in the public media picks up speed, it has become clear that the public needs to establish a well-founded trust in AI systems. Such trust can be established in different ways, e.g. by positive experience with AI systems in everyday life, by certification of AI systems by trusted authorities, or by sound and credible communication by experts. A major obstacle to building trust in AI systems is currently a lack of understanding of the inner workings of some of the models applied in AI, even among AI experts. Efforts to illuminate, or explain, the inner workings of such systems are thus of critical importance.

The need for trust, and the difficulty of obtaining it, also depend heavily on the domain of application. Image classification for sorting waste for recycling is certainly a different story from autonomous driving or deciding on probation applications from criminal offenders.

There are sound arguments for applying very high standards in the latter domains, or for banning such uses entirely. Advocates of such systems, on the other hand, argue that they should be deployed as soon as they generally perform better than humans by some margin, not only once they make the perfectly justified choice in every situation. A definition of “good enough” for deploying AI systems in critical fields is in as high demand as a consensus on the domains in which the use of AI should be banned entirely and for good.

Data science can play a major role in increasing the transparency of, and establishing trust in, AI systems: by analysing the inner workings of complex AI models, by describing the emergent behaviour of such systems, by mapping out corridors for their secure operation, or by providing reliable indicators that can be used for certification. In recent years, governmental, non-governmental, and standards organisations have launched initiatives to establish ethical principles for the development of AI. In the EU, this step was taken with the publication of the High-Level Expert Group (HLEG)’s Ethics Guidelines for Trustworthy AI, which set down seven requirements, although much remains to be done when it comes to operationalising those guidelines.

Public Event

The summer school will be preceded by a public event on Tuesday, June 6th, starting at 13:00.

At the heart of the public event is the Sabine Krolak-Schwerdt Lecture, held in memory of EuADS’ founding president. It will be given by Wolfgang Härdle (HU Berlin, Germany).

Agenda

13h00 – 14h00 Registration and Coffee
14h00 – 15h00 Opening and Welcome
Eyke Hüllermeier
EuADS President
15h00 – 16h30 Sabine Krolak-Schwerdt Public Lecture
Quantinar – a P2P knowledge Platform for data sciences
Wolfgang Härdle
HU Berlin, Germany
16h30 Welcome Reception

The symposium on Tuesday is free, but registration is required, as only 20 seats are available: contact@euads.org. The venue is STATEC, 13 rue Erasme, L-1468 Luxembourg.

There will be a free live online transmission. Details will be announced here in due course.

Quantinar – a P2P knowledge Platform for data sciences

Wolfgang Härdle (HU Berlin, Germany)

Living in the Information Age, the power of data and correct statistical analysis has never been more prevalent. Academics, practitioners and many other professionals nowadays require an accurate application of quantitative methods. Yet many branches are subject to a crisis of integrity, which is shown in improper use of statistical models, p-hacking, HARKing or failure to replicate results. We propose the use of a peer-to-peer ecosystem based on a blockchain network, Quantinar (quantinar.com), to support quantitative analytics knowledge typically embedded with code in the form of Quantlets (quantlet.com) or software snippets. The integration of blockchain technology makes Quantinar a decentralised autonomous organisation (DAO) that ensures fully transparent and reproducible scientific research.

Speaker's Webpage

Topics and Presenters

The schedule for the summer school, including speakers and topics:

Wednesday, June 7th
9:30 a.m. to 1 p.m. – Responsible AI: from principles to action – Virginia Dignum (Umeå University)
2:30 p.m. to 6 p.m. – Explainable & Reproducible Data Science: Introduction to Data Visualisation and dynamic reporting – Osama Mahmoud (U of Essex, UK)

Thursday, June 8th
9:30 a.m. to 1 p.m. – A hands-on tutorial on explainable methods for machine learning with Python: applications to gender bias – Aurora Ramirez (U of Cordoba)
2:30 p.m. to 6 p.m. – Explanation is a process: a hands-on tutorial on the Interactive Explanatory Model Analysis with examples for classification models – Przemysław Biecek (Warsaw U of Technology)

Friday, June 9th
9:30 a.m. to 10:30 a.m. – European AI Act and Forbidding the Bad Part but Making the Good Part of AI not Impossible – Marc Salomon, Ilker Birbil, Tabea Röber (Amsterdam Business School, NL)
10:30 a.m. to 11:30 a.m. – Counterfactual explanations – Ilker Birbil (University of Amsterdam)
11:30 a.m. to 12:30 p.m. – Hands-on Session – Tabea Röber (Amsterdam Business School, NL)

For details see below!

Wednesday, June 7th

9:30 a.m. to 1 p.m.

Responsible AI: from principles to action

Virginia Dignum (Umeå University)

Every day we see news about advances and the societal impact of AI. AI is changing the way we work, live and solve challenges, but concerns about fairness, transparency or privacy are also growing. Ensuring AI ethics is more than designing systems whose results can be trusted. It is about the way we design them, why we design them, and who is involved in designing them. In order to develop and use AI responsibly, we need to work towards technical, societal, institutional and legal methods and tools which provide concrete support to AI practitioners, as well as awareness and training to enable participation of all, to ensure the alignment of AI systems with our societies’ principles and values.

Speaker's Webpage

2:30 p.m. to 6 p.m.

Explainable & Reproducible Data Science: Introduction to Data Visualisation and dynamic reporting

Osama Mahmoud (U of Essex, UK)

Visualisation and dynamic, interactive presentation of data analyses are increasingly becoming vital elements of rigorous and trustworthy AI systems. In this half-day course, we will cover the key aspects of data visualisation, dynamic reporting and interactive presentation of analysis results using R. We will illustrate how to dynamically embed graphical data and the results of predictive models within reproducible reports to enhance the understanding of relationships and patterns in data.

Speaker's Webpage

Thursday, June 8th

9:30 a.m. to 1 p.m.

A hands-on tutorial on explainable methods for machine learning with Python: applications to gender bias

Aurora Ramirez (U of Cordoba)

Artificial intelligence (AI) surrounds us in multiple aspects of our daily lives, making decisions that help us detect diseases, guide autonomous cars, or recommend digital content. Despite the progress made in applications such as these, the machine learning (ML) process is still a “black box”. This implies that end users may be reluctant to trust artificially generated results, as they do not understand how decisions have been made. Furthermore, ML models are not perfect: they make mistakes, and their predictions can be biased due to the underlying data used for training. Explainable artificial intelligence (XAI) comes to “open the black box” and tries to solve some of these problems. XAI methods are currently used to provide additional insights into the performance and behaviour of ML models, such as the relative importance of individual features in the predictions, or how feature values could be changed to invert predictions (what-if scenarios). In this tutorial, we will introduce the basic concepts of XAI and learn how to generate local and global explanations with practical examples in Python. Specific cases will be presented to discuss how XAI methods can help in detecting and understanding misclassifications and biased predictions, especially those related to gender bias.
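
As a flavour of the tooling covered, below is a minimal sketch of a global explanation in Python, using scikit-learn's permutation importance on synthetic data; the dataset and model are illustrative stand-ins rather than material from the tutorial.

    # Minimal sketch: a global explanation via permutation importance.
    # Synthetic data and a random forest stand in for the tutorial's material.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # How much does the test score drop when each feature is shuffled?
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for i in range(X.shape[1]):
        print(f"feature {i}: {result.importances_mean[i]:.3f}"
              f" +/- {result.importances_std[i]:.3f}")

Local methods such as SHAP complement this global view by attributing a single prediction to individual feature values.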

Speaker's Webpage

2:30 p.m. to 6 p.m.

Explanation is a process: a hands-on tutorial on the Interactive Explanatory Model Analysis with examples for classification models

Przemysław Biecek (Warsaw U of Technology)

Trustworthy predictive modelling must be based on continuous and in-depth analysis of model behaviour. Explanation techniques play a key role here, allowing both a better understanding of how a model behaves and the comparison of two or more trained models. Such analysis requires interaction with the model, sequentially interrogating it with a variety of explanatory techniques. In this workshop, we will review some popular explanatory techniques, such as SHAP, Variable Importance, Partial Dependence and Ceteris Paribus, and discuss how these techniques complement each other. We will present the theoretical foundations of Interactive Explanatory Model Analysis and then discuss the application of this process using Covid mortality prediction as an example. The lecture is supplemented by hands-on exercises, in which participants will replicate the model exploration in Python on their own and also experiment with other techniques and other models.
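
These techniques are implemented, among other places, in the dalex Python package from the speaker's group. The following minimal sketch, assuming an illustrative scikit-learn dataset and model rather than the workshop's Covid data, shows how a single explainer object drives the whole interactive analysis.

    # Minimal sketch of an Interactive Explanatory Model Analysis loop with dalex.
    # The dataset and model are illustrative assumptions, not the workshop's.
    import dalex as dx
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    data = load_breast_cancer(as_frame=True)
    X, y = data.data, data.target
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # One Explainer is the entry point for all subsequent questions to the model.
    explainer = dx.Explainer(model, X, y, label="rf")

    explainer.model_parts().plot()               # global: variable importance
    explainer.model_profile().plot()             # global: partial dependence
    obs = X.iloc[[0]]                            # a single observation to explain
    explainer.predict_parts(obs, type="shap").plot()  # local: SHAP attributions
    explainer.predict_profile(obs).plot()        # local: ceteris paribus profile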

Speaker's Webpage

Friday, June 9th

9:30 a.m. to 10:30 a.m.

European AI Act and Forbidding the Bad Part but Making the Good Part of AI not Impossible

Marc Salomon (Amsterdam Business School, NL)

The AI Act may soon become a European law. Its goal is to safeguard Human Rights in automated decision-making processes, including those relying on AI. Here is the challenge: prohibiting certain AI applications can protect one Human Right, but may jeopardize another. For example, in a hospital setting, gathering patient data to train AI algorithms in MRI scanners to enhance cancer detection can promote good healthcare, but at the same time it may compromise privacy – both of which are Human Rights. The question remains: how can we make such trade-offs? One critical aspect of the Act is that any algorithm must be transparent, accountable, and fair. To support this, a risk model has been developed that categorises decision-making processes as low-risk, high-risk, or forbidden. Social scoring, such as that implemented in some Chinese cities, is among the forbidden applications. Interestingly, at the time of writing, it was still being determined whether ChatGPT would be acceptable in Europe, despite the low-risk classification of chatbots in the Act. Italy has even proposed forbidding ChatGPT, raising further questions about how Europe should approach AI applications and their possible restrictions. Additionally, if Europe bans certain AI applications while other parts of the world allow them, it could create a significant technological gap. Marc's talk will delve into these challenging issues and provide examples of best practices as we collectively navigate how to balance various Human Rights using AI.

Webpage

10:30 a.m. to 11:30 a.m.

Counterfactual explanations

Ilker Birbil (University of Amsterdam)

Interpretability in machine learning is a field of ongoing research that has become increasingly popular in recent years. Among the many approaches and tools for interpretability, a counterfactual explanation, also known as algorithmic recourse, is considered especially promising due to its similarity to how we provide explanations in everyday life. To derive an explanation for a factual instance, we search for a counterfactual feature combination that describes the minimum change in the feature space necessary to flip the model prediction. This talk will provide an overview of the research on counterfactual explanations and discuss various models proposed in the literature.
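
To make the search concrete, here is a minimal, self-contained Python sketch that looks for the smallest single-feature change flipping a classifier's prediction; the data and model are illustrative assumptions, and the methods discussed in the talk handle multi-feature changes, plausibility and actionability constraints.

    # Minimal sketch: a brute-force counterfactual search over single features.
    # Synthetic data and logistic regression are illustrative stand-ins.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=300, n_features=4, random_state=1)
    model = LogisticRegression().fit(X, y)

    x = X[0]                                   # the factual instance
    original = model.predict(x.reshape(1, -1))[0]

    best = None                                # (|change|, feature, counterfactual)
    for j in range(X.shape[1]):
        # Scan candidate changes for feature j, smallest magnitude first.
        for delta in sorted(np.linspace(-3, 3, 601), key=abs):
            cf = x.copy()
            cf[j] += delta
            if model.predict(cf.reshape(1, -1))[0] != original:
                if best is None or abs(delta) < best[0]:
                    best = (abs(delta), j, cf)
                break  # the first flip found is minimal for this feature

    if best is not None:
        print(f"changing feature {best[1]} by {best[0]:.2f} flips the prediction")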

Webpage

11:30 a.m. to 12:30 p.m.

Hands-on Session

Tabea Röber (Amsterdam Business School, NL)

This hands-on session follows the previous talk and focuses on optimisation-based methods to generate (robust) counterfactual explanations. Specifically, we will use a case study from credit risk scoring to discuss one of the proposed methods. A Jupyter notebook will be used for the code demonstration and will be shared with the audience.

Webpage
 

Fees and Registration

For EuADS members, the fee for participating in the summer school is 100 €.

For non-members, the fee is 200 €, which includes a free EuADS membership for 2023.

To ensure an interactive experience, the number of participants is limited, so early registration is strongly recommended. Please register by:

1. Sending an email with your personal details to contact@euads.org, quoting the reference “EuADS Summer School 2023 – Data Science for Explainable and Trustworthy AI”.

2. Transferring the amount to the Banque et Caisse d’Epargne de l’Etat, Luxembourg (BIC: BCEELULL; IBAN: LU47 0019 4655 6967 1000).

Once the personal details and registration fee have been received, you will receive an email confirming your participation.

Venue:

Conference and Training Centre at the Chambre de Commerce Luxembourg
7, Rue Alcide de Gasperi
L-2981 Luxembourg Kirchberg

Accommodations:

The following hotels are within walking distance of the venue:
Meliá Luxembourg
Coque Hôtel
Hôtel Novotel Luxembourg Kirchberg
Sofitel Luxembourg Europe Hotel

Or, for smaller budgets:

Luxembourg Youth Hostel

Organisers:

  • Serge Allegrezza (STATEC, Luxembourg; EuADS Treasurer)
  • Mohsen Farid (U of Derby, UK)
  • Peter Flach (U of Bristol, UK; EuADS Vice-President)
  • Tim Friede (U of Göttingen, Germany)
  • Nils Hachmeister (Germany; EuADS Vice-President)
  • Austin Haffenden (U of Luxembourg, Luxembourg)
  • Eyke Hüllermeier (LMU Munich, Germany; EuADS President)
  • Sylvain Kubler (U of Luxembourg)
  • Berthold Lausen (U of Essex, UK)
  • Victoria López López (CUNEF Universidad, Spain)
  • José Raúl Romero (U of Cordoba)
  • Denise Schroeder (STATEC, Luxembourg)
 

Sponsors