Feedback on the workshop “Data biases, symptoms, diagnosis and remedies”
This workshop focused on a major issue in the development of ethical and trustworthy artificial intelligence: the insidious presence of biases that often lead to discriminatory decision making.
Nathan Noiry and Yannick Guyonvarch, both post-doctoral researchers at Telecom Paris, covered the following topics:
- Definition of the different types of biases: cognitive (the tendency to detect non-existent correlations, confirmation bias, etc.), algorithmic (poor choice of metrics), and statistical (mismatch between source and target data).
- The relationship between bias and fairness, a recent field of machine learning that aims to design algorithms judged fair with respect to certain criteria (e.g. gender); see the sketch after this list.
- Methods to detect representativeness biases and to correct them (imputation, correlation, weighting…).
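To make the fairness point concrete, here is a minimal sketch of one standard fairness criterion, demographic parity, which compares positive-prediction rates across groups defined by a sensitive attribute such as gender. This particular metric and the function name are illustrative choices, not taken from the workshop material.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred : array of 0/1 predictions
    group  : array of 0/1 group membership (e.g. a binarised gender attribute)
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive rate in group 0
    rate_1 = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_1 - rate_0)

# Toy example: the classifier favours group 1
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))  # 0.25
```

A value of zero means both groups receive positive predictions at the same rate; the larger the gap, the stronger the disparity with respect to that criterion.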
You can consult their notebook here, which presents a re-weighting method on synthetic data. During the discussion that followed the presentation, some participants proposed testing these functions on real data and adapting them into an open-source library.
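As an illustration of the general idea behind such re-weighting (this is not the presenters' notebook, only a minimal sketch under the assumption that the mismatch concerns the marginal distribution of a categorical attribute), each source observation can be weighted by the ratio of target to source frequencies of its attribute value:

```python
import numpy as np

def reweighting_factors(source_attr, target_attr):
    """Importance weights that align the source sample with the target
    distribution of a categorical attribute (e.g. gender or age band).

    Returns one weight per source observation: p_target(a) / p_source(a).
    """
    source_attr = np.asarray(source_attr)
    target_attr = np.asarray(target_attr)
    categories = np.unique(np.concatenate([source_attr, target_attr]))
    p_source = {c: np.mean(source_attr == c) for c in categories}
    p_target = {c: np.mean(target_attr == c) for c in categories}
    return np.array([p_target[a] / p_source[a] for a in source_attr])

# Synthetic illustration: the source sample over-represents category "A"
rng = np.random.default_rng(0)
source = rng.choice(["A", "B"], size=1000, p=[0.8, 0.2])
target = rng.choice(["A", "B"], size=1000, p=[0.5, 0.5])
w = reweighting_factors(source, target)
# The weighted proportion of "A" in the source is now close to the target's 50%
print(np.average(source == "A", weights=w))
```

These weights can then be passed to a learning algorithm (for instance via a `sample_weight` argument) so that the model is trained as if it had seen data distributed like the target population.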
Laure Leter, datacraft resident