26 Nov 2021 15:00

Algorithmic Fairness: AI, data-based decision making, and social justice

On-line / Presence Ca' Bottacin


Christoph Heitz
Professor at the Institute of Data Analysis and Process Design (IDP), School of Engineering, Zurich University of Applied Sciences (ZHAW). Visiting Scholar at ECLT.

26th November 2021, 3 PM CET

Hybrid event in presence at Ca' Bottacin, Aula A-B / Online
>> Go to this link to join via Zoom Meeting!

>> Please fill in the form to book your place at Ca' Bottacin, Aula A-B.


The 21st century is shaped by the ever-increasing use of digital data for gaining new insights and making better decisions. Our personal data is no exception: in every domain of our lives, data-based algorithms are increasingly used to decide who gets a loan, who is accepted for a job, who is admitted to a renowned study program, who is released from prison, and so on. The list is endless. Our societies are increasingly shaped by such systems – and it turns out that this creates problems: it has been shown in many applications that these algorithms tend to produce unintended discrimination and social injustice, a phenomenon known as "algorithmic bias"; the research field that addresses it is called "algorithmic fairness".

As researchers and professionals in fields such as data science or AI, our work often carries the potential for such negative consequences. They are never intended – but they do happen, and they do impact the social fabric of our society. What does this mean for our profession?

In my talk, I will give an introduction to the rather young research field of Algorithmic Fairness, touching on questions such as:

- What is the reason that data-based recommendation or decision systems naturally tend to be racist and sexist? 

- How do we measure discrimination or fairness in practice? Why are there so many different fairness metrics, and is there a best one?

- Do computer scientists and data scientists who build predictive models have ethical responsibilities, and if so, of what sort?

- What can be done to make sure that such systems are fair in a well-specified way? Which part of the solution has to be provided by the computer scientists, and how can ethical considerations be integrated into such solutions?
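To make the second question more concrete, here is a minimal sketch of two common group-fairness metrics from the literature: demographic parity (equal positive-decision rates across groups) and equal opportunity (equal true-positive rates among the truly qualified). The data, function names, and loan-decision framing are purely illustrative, not drawn from the talk:

```python
def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between two groups (0 and 1)."""
    rates = {}
    for g in (0, 1):
        group_decisions = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(group_decisions) / len(group_decisions)
    return abs(rates[0] - rates[1])

def equal_opportunity_gap(decisions, groups, labels):
    """Absolute difference in true-positive rates, i.e. acceptance rates
    among individuals who are truly qualified (label == 1)."""
    tprs = {}
    for g in (0, 1):
        qualified = [d for d, grp, y in zip(decisions, groups, labels)
                     if grp == g and y == 1]
        tprs[g] = sum(qualified) / len(qualified)
    return abs(tprs[0] - tprs[1])

# Toy example: eight loan decisions, protected attribute splits applicants
# into group 0 and group 1; labels are ground-truth qualification.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
labels    = [1, 1, 0, 1, 1, 0, 1, 0]

print(demographic_parity_gap(decisions, groups))          # 0.5
print(equal_opportunity_gap(decisions, groups, labels))   # 0.5
```

The same system can score well on one metric and poorly on another, and known impossibility results show that several such metrics cannot in general be satisfied simultaneously – one reason there is no single "best" fairness metric.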

Short CV:
Christoph Heitz
holds a PhD in Theoretical Physics (University of Freiburg i.Br., Germany). After some years in industry, he joined the Zurich University of Applied Sciences in 2000. He is a founding member and part of the executive board of the Institute of Data Analysis and Process Design (IDP). He is also co-founder and president of the data innovation alliance, a large Swiss national innovation network promoting innovation in the field of data-based value creation.

For more than 20 years, his research has focused on data analysis and decision support in complex socio-technical systems, especially for business-related applications in both the service and manufacturing sectors. For several years, he has been working intensively on ethical questions of data-based business and digital services, and he is currently leading two large research projects in the field of Algorithmic Fairness.


The event will be held in English.


European Centre for Living Technology (ECLT)
