The European Association for Data Science (EuADS) organises the summer school
Data Science for Explainable and Trustworthy AI
from Wednesday, June 7th to Friday, June 9th, 2023, in Kirchberg, Luxembourg.
The summer school is preceded by a public opening event on Tuesday, June 6th, 2023.
Artificial Intelligence (AI) is being applied in an increasing number of domains. The implications of such deployments are heavily discussed – among AI professionals and, increasingly, in the general public. As the discussion in the public media picks up speed, it has become clear that the public needs to establish a well-founded trust in AI systems. Such trust can be built in different ways, e.g. through positive experience with AI systems in everyday life, certification of AI systems by trusted authorities, or sound and credible communication by experts. A major obstacle to building trust in AI systems is currently the lack of understanding of the inner workings of some of the models applied in AI, even among AI experts. Efforts to illuminate, or explain, the inner workings of such systems are thus of critical importance.
The need for trust, and the difficulty of obtaining it, also depend heavily on the domain of application. Image classification for sorting waste for recycling is certainly a different story than autonomous driving or deciding on probation applications by criminal offenders.
There are sound arguments for applying very high standards in the latter domains, or for banning such uses entirely. On the other hand, advocates of such systems argue that they should already be deployed when they generally perform better than humans by some margin, not only once they make the perfectly justified choice in every situation. A definition of "good enough" for deploying AI systems in critical fields is in as high demand as a consensus about the domains in which the use of AI should be banned entirely and for good.
Data science can play a major role in increasing the transparency of, and establishing trust in, AI systems: by analysing the inner workings of complex AI models, by describing the emergent behaviour of such AI systems, by mapping out corridors for the secure operation of AI systems, and by providing reliable indicators which can be used for the certification of AI systems. In recent years, governmental, non-governmental, and standards organisations have launched initiatives to establish ethical principles for the development of AI. In the EU, this step was taken with the publication of the High Level Expert Group (HLEG)’s Ethics Guidelines for Trustworthy AI, which set down seven requirements, although much remains to be done when it comes to operationalising those guidelines.
The summer school will be preceded by a public event on Tuesday, June 6th, starting at 1:00 p.m.
At the heart of the public event is the Sabine Krolak-Schwerdt Lecture, in memory of EuADS’ founding president. It will be given by Wolfgang Härdle (HU Berlin, Germany).
|13h00 – 14h00||Registration and Coffee|
|14h00 – 15h00||Opening and Welcome|
|15h30 – 17h00||Sabine Krolak-Schwerdt Public Lecture: Quantinar – a P2P knowledge Platform for data sciences, Wolfgang Härdle (HU Berlin, Germany)|
The symposium on Tuesday is free, but registration is required as only 20 seats are available: firstname.lastname@example.org. The venue is STATEC, 13 rue Erasme, L-1468 Luxembourg.
There will be a free live online transmission. Details will be announced here in due course.
Quantinar - a P2P knowledge Platform for data sciences
Wolfgang Härdle (HU Berlin, Germany)
Living in the Information Age, the power of data and of correct statistical analysis has never been more prevalent. Academics, practitioners and many other professionals nowadays require an accurate application of quantitative methods. Yet many branches are subject to a crisis of integrity, which shows itself in the improper use of statistical models, p-hacking, HARKing, or failure to replicate results. We propose the use of a peer-to-peer ecosystem based on a blockchain network, Quantinar (quantinar.com), to support quantitative analytics knowledge, typically embedded with code in the form of Quantlets (quantlet.com) or software snippets. The integration of blockchain technology makes Quantinar a decentralised autonomous organisation (DAO) that ensures fully transparent and reproducible scientific research.
Speaker's Webpage
Topics and Presenters
The schedule for the summer school, including speakers and topics:
|Wednesday, June 7th, 9:30 a.m. to 1 p.m.||European AI Act and forbidding the bad part but making the good part of AI not impossible||Marc Salomon (Amsterdam Business School, NL)|
|Wednesday, June 7th, 2:30 p.m. to 6 p.m.||Explainable & Reproducible Data Science: Introduction to Data Visualisation and dynamic reporting||Osama Mahmoud (U of Essex, UK)|
|Thursday, June 8th, 9:30 a.m. to 1 p.m.||A hands-on tutorial on explainable methods for machine learning with Python: applications to gender bias||Aurora Ramirez (U of Cordoba)|
|Thursday, June 8th, 2:30 p.m. to 6 p.m.||Explanation is a process: a hands-on tutorial on the Interactive Explanatory Model Analysis with examples for classification models||Przemysław Biecek (Warsaw U of Technology)|
|Friday, June 9th, 9:30 a.m. to 1 p.m.||Responsible AI: from principles to action||Virginia Dignum (Umeå University)|
For details see below!
Wednesday, June 7th
9:30 a.m. to 1 p.m.
2:30 p.m. to 6 p.m.
European AI Act and Forbidding the Bad Part but Making the Good Part of AI not Impossible
Marc Salomon (Amsterdam Business School, NL)
Explainable & Reproducible Data Science: Introduction to Data Visualisation and dynamic reporting
Osama Mahmoud (U of Essex, UK)
Visualisation and dynamic, interactive presentation of data analyses are increasingly becoming vital elements of rigorous and trustworthy AI systems. In this half-day course, we will cover the key aspects of data visualisation, dynamic reporting and interactive presentation of analysis results using R. We will illustrate how to dynamically embed graphical data and the results of predictive models within reproducible reports to enhance the understanding of relationships and patterns in data.
Speaker's Webpage
Thursday, June 8th
9:30 a.m. to 1 p.m.
2:30 p.m. to 6 p.m.
A hands-on tutorial on explainable methods for machine learning with Python: applications to gender bias
Aurora Ramirez (U of Cordoba)
Artificial intelligence (AI) surrounds us in multiple aspects of our daily lives, making decisions that help us detect diseases, guide autonomous cars, or recommend digital content. Despite the progress made in applications such as these, the machine learning (ML) process is still a "black box". This implies that end users may be reluctant to trust artificially generated results, as they do not understand how decisions have been made. Furthermore, ML models are not perfect: they make mistakes, and their predictions can be biased due to the underlying data used for training. Explainable artificial intelligence (XAI) comes to “open the black box” and tries to solve some of these problems. XAI methods are currently used to provide additional insights into the performance and behaviour of ML models, such as the relative importance of features in the predictions or how feature values could be changed to invert predictions (what-if scenarios). In this tutorial, we will introduce the basic concepts of XAI and learn how to generate local and global explanations with practical examples in Python. Specific cases will be presented to discuss how XAI methods can help in detecting and understanding misclassifications and biased predictions, especially those related to gender bias.
Speaker's Webpage
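To give a flavour of the two kinds of explanation the tutorial mentions – global feature importance and local what-if scenarios – here is a minimal sketch in plain Python. The model, dataset and feature names are invented stand-ins for illustration; the tutorial itself works with real trained ML models and XAI libraries:

```python
import random

# Toy stand-in "model": a hypothetical loan-approval rule (invented for
# illustration; it plays the role of a trained black-box classifier).
def model(income, debt, age):
    return 1 if income - 2 * debt > 10 else 0

# Small synthetic dataset of (income, debt, age) rows and the model's labels.
data = [(30, 5, 40), (12, 2, 25), (15, 4, 33), (8, 1, 52), (25, 10, 29)]
labels = [model(*row) for row in data]

def accuracy(rows):
    return sum(model(*r) == y for r, y in zip(rows, labels)) / len(rows)

# Global explanation via permutation importance: shuffle one feature's
# column and measure how much the model's accuracy drops on average.
def permutation_importance(feature_idx, n_repeats=20, seed=0):
    rng = random.Random(seed)
    baseline, drops = accuracy(data), []
    for _ in range(n_repeats):
        column = [r[feature_idx] for r in data]
        rng.shuffle(column)
        shuffled = [r[:feature_idx] + (v,) + r[feature_idx + 1:]
                    for r, v in zip(data, column)]
        drops.append(baseline - accuracy(shuffled))
    return sum(drops) / n_repeats

for i, name in enumerate(["income", "debt", "age"]):
    print(f"{name}: mean accuracy drop = {permutation_importance(i):.2f}")
# "age" shows a drop of 0.00, since the toy model never looks at it.

# Local "what-if" (counterfactual) explanation: the smallest income that
# would flip the decision for a rejected applicant with debt=2, age=25.
flip_income = next(inc for inc in range(12, 100) if model(inc, 2, 25) == 1)
print("income needed to flip the decision:", flip_income)  # -> 15
```

In practice the same ideas are provided by established libraries – e.g. scikit-learn's `permutation_importance`, or SHAP and LIME for local explanations – which the tutorial's Python examples are a natural fit for; the sketch above only strips the logic down to the standard library.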
Explanation is a process: a hands-on tutorial on the Interactive Explanatory Model Analysis with examples for classification models
Przemysław Biecek (Warsaw U of Technology)
Friday, June 9th
9:30 a.m. to 1 p.m.
Responsible AI: from principles to action
Virginia Dignum (Umeå University)
Every day we see news about advances and the societal impact of AI. AI is changing the way we work, live and solve challenges, but concerns about fairness, transparency or privacy are also growing. Ensuring AI ethics is about more than designing systems whose results can be trusted. It is about the way we design them, why we design them, and who is involved in designing them. In order to develop and use AI responsibly, we need to work towards technical, societal, institutional and legal methods and tools which provide concrete support to AI practitioners, as well as awareness and training to enable the participation of all, to ensure the alignment of AI systems with our societies’ principles and values.
Fees and Registration
For EuADS members, the fee for participating in the summer school is 100 €.
For non-members the fee is 200 €, which includes a free EuADS membership for 2023.
To ensure an interactive experience the number of participants is limited, so early registration is strongly recommended. Please register by
1. Sending an email with your personal details to email@example.com, with reference to "EuADS Summer School 2023 – Data Science for Explainable and Trustworthy AI".
2. Transferring the amount to the Banque et Caisse d’Epargne de l’Etat, Luxembourg (BIC: BCEELULL; IBAN: LU47 0019 4655 6967 1000).
Once the personal details and registration fee have been received, you will receive an email confirming your participation.
Conference and Training Centre at the Chambre de Commerce Luxembourg
7, Rue Alcide de Gasperi
L-2981 Luxembourg Kirchberg
The venue is within walking distance of the following hotels:
Hôtel Novotel Luxembourg Kirchberg
Sofitel Luxembourg Europe Hotel
- Serge Allegrezza (STATEC, Luxembourg; EuADS Treasurer)
- Mohsen Farid (U of Derby, UK)
- Peter Flach (U of Bristol, UK; EuADS Vice-President)
- Tim Friede (U of Göttingen, Germany)
- Nils Hachmeister (Germany; EuADS Vice-President)
- Austin Haffenden (U of Luxembourg, Luxembourg)
- Eyke Hüllermeier (LMU Munich, Germany; EuADS President)
- Sylvain Kubler (U of Luxembourg)
- Berthold Lausen (U of Essex, UK)
- Victoria López López (CUNEF Universidad, Spain)
- José Raúl Romero (U of Cordoba)
- Denise Schroeder (STATEC, Luxembourg)