
CIFellows Spotlight: Towards Fair and Interpretable Language Processing Models and their Applications

April 6th, 2021 / in CIFellows, CIFellows Spotlight, research horizons, Research News / by Maddy Hunter

Sunipa Dev began her CIFellowship in January 2021 after receiving her PhD from the University of Utah in fall 2020. Dev is at the University of California, Los Angeles (UCLA), working with Kai-Wei Chang, Assistant Professor of Computer Science.

Dev recently co-presented a tutorial at AAAI 2021 that uses an interactive visual tool to highlight how language representations carry social biases and the ways in which those biases can be mitigated. Details can be found here. She is also organizing a workshop on Responsible AI at KDD 2021.

Current Project

Language representations are used ubiquitously in language processing and generation tasks, which in turn are key to a variety of AI-driven applications. As a result, any social biases they encode are carried over to this same multitude of tasks. The motivation, then, is to make language representations and language processing tasks more inclusive, less biased, and more interpretable.

My current research aims to further the understanding of the social biases captured in language representations so as to better mitigate their downstream effects in the form of representational and allocation harms. My work identifies, quantifies, and mitigates such stereotypes and harms. It also develops means to make language processing techniques more inclusive with respect to different protected attributes, such as non-binary gender.

Impact

NLP tasks are widely used across many applications and real-world scenarios, including in vital industries such as finance and healthcare. Social biases, when propagated unchecked through these tasks, can have severe implications. AI, when used ethically, has the potential to make the world fairer for all, and my current work aims to help realize that potential.

Other Research 

My core areas of interest are natural language processing and generation, along with ensuring fairness, robustness, and interpretability in those pursuits. I am also engaged in responsible ML applications in healthcare and industry.

Other areas I actively work to understand are data mining and geometric data analysis, including big data, its representations, and its geometry.

 
