Stella Katsarou

Data Scientist @ PELTARION

Across many use cases, it has been observed that modern NLP models can marginalize specific groups through the way they handle downstream tasks, exhibiting behavior that mirrors stereotypical bias. To take responsible action against this problem, we first need to detect and quantify it. In this talk, we will focus on detecting gender bias in a Transformer model’s representations, as well as quantifying and measuring such biases when they manifest in a downstream task.
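
To make the idea of probing a Transformer’s representations for gender bias a little more concrete, here is a minimal sketch. It is not the method presented in the talk: the model name, sentence templates, and word lists are illustrative assumptions. It simply compares how strongly occupation words associate with “she” versus “he” contexts in a pretrained model’s contextual embeddings.

```python
# Minimal, illustrative sketch: probing gender associations in contextual
# embeddings with a Hugging Face model. Model, templates, and word lists
# are assumptions for illustration, not the speaker's method.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL = "bert-base-uncased"  # assumed model for illustration
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
model.eval()

def sentence_embedding(text: str) -> torch.Tensor:
    """Mean-pool the last hidden state over non-padding tokens."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)

def association(occupation: str, pronoun: str) -> float:
    """Cosine similarity between an occupation sentence and a gendered sentence."""
    occ = sentence_embedding(f"The {occupation} finished the work.")
    gen = sentence_embedding(f"{pronoun} finished the work.")
    return torch.cosine_similarity(occ, gen).item()

for occupation in ["nurse", "engineer"]:
    she = association(occupation, "She")
    he = association(occupation, "He")
    # A positive gap suggests the representation leans toward "she" contexts.
    print(f"{occupation}: she={she:.3f} he={he:.3f} gap={she - he:+.3f}")
```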

Stella is a member of the Peltarion AI Research team, working as a data scientist. She is a recent graduate of the Machine Learning Master’s program at KTH. Her thesis project focused on improving multilingual language models for low-resource languages and measuring gender bias in contextualized embeddings. At Peltarion, she mainly works on research projects focusing on Natural Language Processing. She has a strong interest in bias, fairness, and explainability in AI, and is currently researching methods for measuring gender bias in Transformer models.