Vredenburgh, Kate. Freedom at Work: Understanding, Alienation, and the AI-Driven Workplace
2022, Canadian Journal of Philosophy 52 (1):78-92.
Added by: Deryn Mair Thomas

This paper explores a neglected normative dimension of algorithmic opacity in the workplace and the labor market. It argues that explanations of algorithms and algorithmic decisions are of noninstrumental value. That is because explanations of the structure and function of parts of the social world form the basis for reflective clarification of our practical orientation toward the institutions that play a central role in our life. Using this account of the noninstrumental value of explanations, the paper diagnoses distinctive normative defects in the workplace and economic institutions which a reliance on AI can encourage, and which lead to alienation.

Comment: This paper offers a novel approach to the exploration of alienation at work (i.e., what makes work bad) from an algorithmic ethics perspective. It relies on the noninstrumental value of explanation to make its central argument, and grounds this value in the role that explanation plays in our ability to form a practical orientation towards our social world. In this sense, it examines an interesting, and somewhat underexplored, connection between algorithmic ethics, justice, the future of work, and social capabilities. As such, it could be useful in a wide range of course contexts. That being said, the central argument is fairly complex and relies on some prior understanding of analytic political philosophy and the philosophy of AI. It also employs technical language from these domains, and would therefore be best utilised in masters-level or other advanced philosophy courses and study.
