Social media platforms employ inferential analytics methods to guess user preferences and sensitive attributes such as race, gender, sexual orientation, and opinions. These methods are often opaque; they can predict behavior for marketing purposes, influence behavior for profit, serve the attention economy, and reinforce existing biases such as gender stereotyping. Although two international human rights treaties include express obligations relating to harmful and wrongful stereotyping, these stereotypes persist online and offline, as if platforms failed to understand that gender is not merely a matter of being a 'man' or a 'woman' but a social construct. Our study investigates the impact of algorithmic bias on inadvertent privacy violations and the reinforcement of social prejudices of gender and sexuality from a multidisciplinary perspective that combines legal, computer science, and queer media viewpoints. We conducted an online survey to understand whether Twitter inferred the gender of users and whether that inference was correct. Beyond Twitter's binary understanding of gender and the presentation of gender inference as an inevitable part of Twitter's personalization trade-off, the results show that Twitter misgendered users in nearly 20% of the cases (N=109). Although no clear correlation emerged, only 8% of the straight male respondents were misgendered, compared with 25% of gay men and 16% of straight women. Our contribution shows how the lack of attention to gender in gender classifiers exacerbates existing biases and affects marginalized communities. With our paper, we hope to promote online accountability for privacy, diversity, and inclusion, and to advocate for the freedom of identity that everyone should have online and offline.
|Title of host publication||BIAS 2020: Bias and Fairness in AI Workshop at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD)|
|Number of pages||9|
|Publication status||Published - 2020|
|Event||BIAS 2020: Bias and Fairness in AI Workshop at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD) - Online|
Duration: 14 Sep 2020 → 18 Sep 2020
https://sites.google.com/view/bias-2020/ (BIAS 2020 website)
|Workshop||BIAS 2020: Bias and Fairness in AI workshop|
|Period||14/09/20 → 18/09/20|
|Other||The BIAS 2020 workshop was held as part of the ECML PKDD 2020 virtual conference.|
AI techniques based on big data and algorithmic processing are increasingly used to guide decisions in important societal spheres, including hiring decisions, university admissions, loan granting, and crime prediction. They are applied by search engines, Internet recommendation systems and social media bots, influencing our perceptions of political developments and even of scientific findings. However, there are growing concerns with regard to the epistemic and normative quality of AI evaluations and predictions. In particular, there is strong evidence that algorithms may sometimes amplify rather than eliminate existing bias and discrimination, and thereby have negative effects on social cohesion and on democratic institutions.
Scholarly reflection on these issues has begun, but despite the large volume of recent related research, much work remains to be done. In particular, we still lack a comprehensive understanding of how pertinent concepts of bias or discrimination should be interpreted in the context of AI, and of which technical options to combat bias and discrimination are both realistically possible and normatively justified. The workshop will discuss these issues based on the shared research question: How can standards of unbiased attitudes and non-discriminatory practices be met in (big) data analysis and algorithm-based decision-making?