Computing + Society + Justice
Various Presenters (Allen School)
Colloquium
Thursday, October 15, 2020, 3:30 pm
Zoom Meeting
Abstract
Society and technology are deeply intertwined, in ways both beneficial and harmful. Technologies sometimes interact with the world and with people as their designers intend; they also interact in ways their designers do not intend, i.e., technologies can have unintended consequences. Further, some technologies are designed with intents that are not universally perceived as positive. As computer scientists and engineers, we must think responsibly about the technologies we create and use. This colloquium presents Allen School research at the intersection of technology, society, and justice.
Presenters: Saadia Gabriel, Suchin Gururangan, Dhruv “DJ” Jain, Neil Ryan.
RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models
Presenter: Suchin Gururangan
Pretrained neural language models (LMs) are prone to generating racist, sexist, or otherwise toxic language, which hinders their safe deployment. We investigate the extent to which pretrained LMs can be prompted to generate toxic language, and the effectiveness of controllable text generation algorithms at preventing such toxic degeneration. We create and release RealToxicityPrompts, a dataset of 100K naturally occurring, sentence-level prompts derived from a large corpus of English web text, paired with toxicity scores from a widely used toxicity classifier. Using RealToxicityPrompts, we find that pretrained LMs can degenerate into toxic text even from seemingly innocuous prompts. We empirically assess several controllable generation methods, and find that while data- or compute-intensive methods (e.g., adaptive pretraining on non-toxic data) are more effective at steering away from toxicity than simpler solutions (e.g., banning "bad" words), no current method is failsafe against neural toxic degeneration. To pinpoint the potential cause of such persistent toxic degeneration, we analyze two web text corpora used to pretrain several LMs (including GPT-2; Radford et al., 2019), and find a significant amount of offensive, factually unreliable, and otherwise toxic content. Our work provides a test bed for evaluating toxic generations by LMs and stresses the need for better data selection processes for pretraining.
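To make the evaluation setup concrete, the sketch below shows the basic prompt-and-continue loop in outline, assuming the public Hugging Face release of the dataset (allenai/real-toxicity-prompts) and the transformers text-generation pipeline; this is an illustrative sketch, not the paper's experimental code, and the toxicity-scoring step is noted in a comment but omitted because it requires a Perspective API key.

    from datasets import load_dataset
    from transformers import pipeline

    # RealToxicityPrompts pairs each sentence-level prompt with toxicity
    # scores from a toxicity classifier (the Perspective API).
    prompts = load_dataset("allenai/real-toxicity-prompts", split="train")

    # A small pretrained LM to probe; GPT-2 is one of the models studied.
    generator = pipeline("text-generation", model="gpt2")

    for record in prompts.select(range(5)):
        prompt_text = record["prompt"]["text"]
        output = generator(prompt_text, max_new_tokens=20, do_sample=True)
        # generated_text includes the prompt, so slice it off.
        continuation = output[0]["generated_text"][len(prompt_text):]
        print(repr(prompt_text), "->", repr(continuation))
        # In the paper's protocol, each continuation would now be scored
        # with a toxicity classifier (Perspective API); that call needs an
        # API key and is omitted here.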
Social Bias Frames: Reasoning about Social and Power Implications of Language
Presenter: Saadia Gabriel
Language has the power to reinforce stereotypes and project social biases onto others. At the core of the challenge is that it is rarely what is stated explicitly, but rather the implied meanings, that frame people's judgments about others. For example, given the statement "we shouldn't lower our standards to hire more women," most listeners will infer the implicature intended by the speaker: that "women (candidates) are less qualified." Most semantic formalisms to date do not capture such pragmatic implications, in which people express social biases and power differentials in language.
We introduce Social Bias Frames, a new conceptual formalism that aims to model the pragmatic frames in which people project social biases and stereotypes onto others. In addition, we introduce the Social Bias Inference Corpus to support large-scale modeling and evaluation, with 150k structured annotations of social media posts covering over 34k implications about a thousand demographic groups. Our study motivates future work that combines structured pragmatic inference with commonsense reasoning on social implications. We will conclude with initial proposals for how the Social Bias Frames formalism can be used both to encourage more nuanced classification of human-written toxic language and to mitigate the effects of machine-generated toxic language exhibited by neural language models.
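As an illustration of what a single frame captures, the sketch below encodes the "lower our standards" example from above as a Python data structure. The field names paraphrase the frame variables described in the talk (offensiveness, intent, lewdness, targeted group, implied statement, in-group status) and are illustrative only, not the exact schema of the Social Bias Inference Corpus.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SocialBiasFrame:
        """One structured annotation of a post; field names are
        paraphrased from the frame variables, not the corpus columns."""
        post: str                         # the social media post annotated
        offensive: bool                   # could the post be seen as offensive?
        intentional: bool                 # was the offense plausibly intended?
        lewd: bool                        # does the post contain sexual content?
        group_implied: Optional[str]      # demographic group referenced, if any
        implied_statement: Optional[str]  # the stereotype the post implies
        in_group: Optional[bool]          # is the speaker in the targeted group?

    # Encoding the example statement discussed above.
    example = SocialBiasFrame(
        post="We shouldn't lower our standards to hire more women.",
        offensive=True,
        intentional=True,
        lewd=False,
        group_implied="women",
        implied_statement="women (candidates) are less qualified",
        in_group=False,
    )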
Navigating Graduate School with a Disability
Presenter: Dhruv “DJ” Jain
Link to the slides: https://www.dropbox.com/s/n753m6y0y98ads3/ASSETS2020-NavGrad.pptx?dl=0
In graduate school, people with disabilities use disability accommodations to learn, network, and do research. However, these accommodations, often scheduled ahead of time, do not work in many situations due to the uncertainty and spontaneity of the graduate experience. In this talk, I will present a longitudinal account of our graduate school experiences as people with disabilities, highlighting the nuances and tensions of situations in which our pre-planned accommodations did not work, and the alternative in-situ coping strategies we used instead. Through a three-person autoethnography, our experiences reveal the impact of our self-image, relationships, technologies, and infrastructure on our disabled experience. Drawing on post-hoc reflection on these experiences, I will close by discussing personal and situated ways in which peers, faculty members, universities, and technology designers could improve the graduate school experiences of people with disabilities.
Vocational Fulfillment of CS Students
Presenter: Neil Ryan
The majority of Allen School students find industry employment upon graduation, and the Big Four (Amazon, Facebook, Microsoft, and Google) hire most of them. While our graduating-senior surveys report that our graduates sign offers with $150k-$200k total compensation, student satisfaction and perceived fulfillment go unreported. In this talk, I offer preliminary results from ongoing work that examines the department's role in shaping the initial career choices of computing students, through the story of one Allen School student's struggle to find post-graduation employment, with an emphasis on their personal fulfillment.