Cokley, Kevin and Awad, Germine H. In Defense of Quantitative Methods: Using the “Master’s Tools” to Promote Social Justice
2013, Journal for Social Action in Counseling and Psychology 5 (2)
Added by: Tomasz Zyglewicz, Shannon Brick, Michael Greer
Abstract: Empiricism in the form of quantitative methods has sometimes been used by researchers to thwart human welfare and social justice. Some of the ugliest moments in the history of psychology were a result of researchers using quantitative methods to legitimize and codify the prejudices of the day. This has resulted in the view that quantitative methods are antithetical to the pursuit of social justice for oppressed and marginalized groups. While the ambivalence toward quantitative methods by some is understandable given their misuse by some researchers, we argue that quantitative methods are not inherently oppressive. Quantitative methods can be liberating if used by multiculturally competent researchers and scholar-activists committed to social justice. Examples of best practices in social justice oriented quantitative research are reviewed.

Comment (from this Blueprint): Cokley and Awad are both psychologists whose work seeks to redress past injustices against marginalized groups, and who both use quantitative methods to do so. In this article, they sketch some of the historical reasons why members of marginalized groups are sometimes rightly suspicious of the use of quantitative techniques. However, they argue that quantitative methods are not necessarily oppressive, and can be put to good use provided their practitioners are committed to social justice. They offer some examples, drawn from their own work, of how this sort of quantitative research can help to further the cause of social justice.

Narayanan, Arvind. The Limits of the Quantitative Approach to Discrimination
2022, James Baldwin Lecture Series
Added by: Tomasz Zyglewicz, Shannon Brick, Michael Greer
Introduction: Let’s set the stage. In 2016, ProPublica released a ground-breaking investigation called Machine Bias. You’ve probably heard of it. They examined a criminal risk prediction tool that’s used across the country. These are tools that claim to predict the likelihood that a defendant will reoffend if released, and they are used to inform bail and parole decisions.

Comment (from this Blueprint): This is a written transcript of the James Baldwin Lecture delivered by the computer scientist Arvind Narayanan at Princeton in 2022. Narayanan's prior research has examined algorithmic bias and standards of fairness with respect to algorithmic decision making. Here, he engages critically with his own discipline, suggesting that there are serious limits to the sorts of quantitative methods that computer scientists use to investigate potential biases in their own tools. Narayanan acknowledges that in voicing this critique he is echoing claims made by feminist researchers in fields beyond computer science. However, because his own arguments center on the details of the quantitative methods he knows best, they pinpoint exactly why these prior criticisms hold up, in a way intended to speak more persuasively to his peers in computer science and other quantitative fields.
