Ajunwa, Ifeoma
An Auditing Imperative for Automated Hiring Systems Journal Article
In: Harvard Journal of Law & Technology (Harvard JOLT), vol. 34, no. 2, pp. 621–700, 2020.
@article{ajunwa_auditing_2020,
title = {An Auditing Imperative for Automated Hiring Systems},
author = {Ifeoma Ajunwa},
url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3437631},
year = {2020},
date = {2020-01-01},
urldate = {2024-10-08},
journal = {Harvard Journal of Law & Technology (Harvard JOLT)},
volume = {34},
number = {2},
pages = {621–700},
abstract = {The high bar of proof to demonstrate either a disparate treatment or disparate impact cause of action under Title VII of the Civil Rights Act, coupled with the “black box” nature of many automated hiring systems, renders the detection and redress of bias in such algorithmic systems difficult. This Article, with contributions at the intersection of administrative law, employment & labor law, and law & technology, makes the central claim that the automation of hiring both facilitates and obfuscates employment discrimination. That phenomenon and the deployment of intellectual property law as a shield against the scrutiny of automated systems combine to form an insurmountable obstacle for disparate impact claimants.
To ensure against the identified “bias in, bias out” phenomenon associated with automated decision-making, I argue that the employer’s affirmative duty of care as posited by other legal scholars creates “an auditing imperative” for algorithmic hiring systems. This auditing imperative mandates both internal and external audits of automated hiring systems, as well as record-keeping initiatives for job applications. Such audit requirements have precedent in other areas of law, as they are not dissimilar to the Occupational Safety and Health Administration (OSHA) audits in labor law or the Sarbanes-Oxley Act audit requirements in securities law.
I also propose that employers that have subjected their automated hiring platforms to external audits could receive a certification mark, “the Fair Automated Hiring Mark,” which would serve to positively distinguish them in the labor market. Labor law mechanisms such as collective bargaining could be an effective approach to combating the bias in automated hiring by establishing criteria for the data deployed in automated employment decision-making and creating standards for the protection and portability of said data. The Article concludes by noting that automated hiring, which captures a vast array of applicant data, merits greater legal oversight given the potential for “algorithmic blackballing,” a phenomenon that could continue to thwart many applicants’ future job bids.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
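A concrete way to picture the internal audit that Ajunwa's "auditing imperative" calls for is the EEOC's four-fifths rule (Uniform Guidelines on Employee Selection Procedures), under which a selection rate for any group below 80% of the highest group's rate is generally regarded as evidence of adverse impact. The sketch below is a minimal, hypothetical illustration of such a check over a hiring log; the function names and data are invented for this bibliography and do not come from the article, and a real audit would add significance testing and intersectional breakdowns.

# Minimal sketch (illustrative only) of a disparate-impact check an
# automated-hiring audit might run, using the EEOC "four-fifths" rule.
from collections import Counter

def selection_rates(applicants):
    """applicants: iterable of (group, hired) pairs; returns hire rate per group."""
    totals, hires = Counter(), Counter()
    for group, hired in applicants:
        totals[group] += 1
        if hired:
            hires[group] += 1
    return {g: hires[g] / totals[g] for g in totals}

def four_fifths_flags(applicants, threshold=0.8):
    """Impact ratio of each group vs. the highest-rate group, plus an adverse-impact flag."""
    rates = selection_rates(applicants)
    best = max(rates.values())
    return {g: (round(r / best, 3), r / best < threshold) for g, r in rates.items()}

# Hypothetical hiring log: group A hires 50 of 200 (25%); group B hires 15 of 100 (15%).
log = [("A", i < 50) for i in range(200)] + [("B", i < 15) for i in range(100)]
print(four_fifths_flags(log))  # {'A': (1.0, False), 'B': (0.6, True)} -> B is flagged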
Hirsch, Jeffrey M.
Future Work Journal Article
In: University of Illinois Law Review, vol. 2020, no. 3, pp. 889–958, 2020.
@article{hirsch_future_2020,
title = {Future Work},
author = {Jeffrey M. Hirsch},
url = {https://heinonline.org/HOL/P?h=hein.journals/unilllr2020&i=903},
year = {2020},
date = {2020-01-01},
urldate = {2024-10-21},
journal = {University of Illinois Law Review},
volume = {2020},
number = {3},
pages = {889–958},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Bales, Richard A.; Stone, Katherine V. W.
The Invisible Web at Work: Artificial Intelligence and Electronic Surveillance in the Workplace Journal Article
In: Berkeley Journal of Employment and Labor Law, vol. 41, no. 1, pp. 1–62, 2020.
@article{bales_invisible_2020,
title = {The Invisible Web at Work: Artificial Intelligence and Electronic Surveillance in the Workplace},
author = {Richard A. Bales and Katherine V. W. Stone},
url = {https://heinonline.org/HOL/P?h=hein.journals/berkjemp41&i=7},
year = {2020},
date = {2020-01-01},
urldate = {2024-10-21},
journal = {Berkeley Journal of Employment and Labor Law},
volume = {41},
number = {1},
pages = {1–62},
abstract = {Employers and others who hire or engage workers to perform services use a dizzying array of electronic mechanisms to make personnel decisions about hiring, worker evaluation, compensation, discipline, and retention. These electronic mechanisms include electronic trackers, surveillance cameras, metabolism monitors, wearable biological measuring devices, and implantable technology. These tools enable employers to record their workers’ every movement, listen in on their conversations, measure minute aspects of performance, and detect oppositional organizing activities. The data collected is transformed by means of artificial intelligence (A-I) algorithms into a permanent electronic resume that can identify and predict an individual’s performance as well as their work ethic, personality, union proclivity, employer loyalty, and future health care costs. The electronic resume produced by A-I will accompany workers from job to job as they move around the boundaryless workplace. Thus A-I and electronic monitoring produce an invisible electronic web that threatens to invade worker privacy, deter unionization, enable subtle forms of employer blackballing, exacerbate employment discrimination, render unions ineffective, and obliterate the protections of the labor laws.
This article describes the many ways A-I is being used in the workplace and how its use is transforming the practices of hiring, evaluating, compensating, controlling, and dismissing workers. It then focuses on four areas of law in which A-I threatens to undermine worker protections: anti-discrimination law, privacy law, antitrust law, and labor law. Finally, this article maps out an agenda for future law reform and research.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Ajunwa, Ifeoma
The Paradox of Automation as Anti-Bias Intervention Journal Article
In: Cardozo Law Review, vol. 41, no. 5, pp. 1671–1742, 2019.
@article{ajunwa_paradox_2019,
title = {The Paradox of Automation as Anti-Bias Intervention},
author = {Ifeoma Ajunwa},
url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2746078},
year = {2019},
date = {2019-01-01},
urldate = {2024-10-08},
journal = {Cardozo Law Review},
volume = {41},
number = {5},
pages = {1671–1742},
abstract = {A received wisdom is that automated decision-making serves as an anti-bias intervention. The conceit is that removing humans from the decision-making process will also eliminate human bias. The paradox, however, is that in some instances, automated decision-making has served to replicate and amplify bias. With a case study of the algorithmic capture of hiring as heuristic device, this Article provides a taxonomy of problematic features associated with algorithmic decision-making as anti-bias intervention and argues that those features are at odds with the fundamental principle of equal opportunity in employment. To examine these problematic features within the context of algorithmic hiring and to explore potential legal approaches to rectifying them, the Article brings together two streams of legal scholarship: law and technology studies and employment & labor law.
Counterintuitively, the Article contends that the framing of algorithmic bias as a technical problem is misguided. Rather, the Article’s central claim is that bias is introduced in the hiring process, in large part, due to an American legal tradition of deference to employers, especially allowing for such nebulous hiring criterion as “cultural fit.” The Article observes the lack of legal frameworks that take into account the emerging technological capabilities of hiring tools which make it difficult to detect disparate impact. The Article thus argues for a re-thinking of legal frameworks that take into account both the liability of employers and those of the makers of algorithmic hiring systems who, as brokers, owe a fiduciary duty of care. Particularly related to Title VII, the Article proposes that in legal reasoning corollary to extant tort doctrines, an employer’s failure to audit and correct its automated hiring platforms for disparate impact could serve as prima facie evidence of discriminatory intent, leading to the development of the doctrine of discrimination per se. The article also considers other approaches separate from employment law such as establishing consumer legal protections for job applicants that would mandate their access to the dossier of information consulted by automated hiring systems in making the employment decision.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Bent, Jason R.
Is Algorithmic Affirmative Action Legal? Journal Article
In: Georgetown Law Journal, vol. 108, no. 4, pp. 803–854, 2019.
@article{bent_is_2019,
title = {Is Algorithmic Affirmative Action Legal?},
author = {Jason R. Bent},
url = {https://heinonline.org/HOL/P?h=hein.journals/glj108&i=815},
year = {2019},
date = {2019-01-01},
urldate = {2024-10-21},
journal = {Georgetown Law Journal},
volume = {108},
number = {4},
pages = {803–854},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Johnson, Kristin N.
Automating the Risk of Bias Journal Article
In: George Washington Law Review, vol. 87, no. 5, pp. 1214–1271, 2019.
@article{johnson_automating_2019,
title = {Automating the Risk of Bias},
author = {Kristin N. Johnson},
url = {https://heinonline.org/HOL/P?h=hein.journals/gwlr87&i=1288},
year = {2019},
date = {2019-01-01},
urldate = {2024-10-21},
journal = {George Washington Law Review},
volume = {87},
number = {5},
pages = {1214–1271},
abstract = {Artificial intelligence (“AI”) is a transformative technology that has radically altered decisionmaking processes. Evaluating the case for algorithmic or automated decision-making (“ADM”) platforms requires navigating the tensions between two normative concerns. On one hand, relying on ADM platforms may lead to more efficient, accurate, and objective decisions. On the other hand, early and disturbing evidence suggests that ADM platform results may demonstrate biases, undermining proponents’ claims that this special class of algorithms will democratize markets and increase inclusion.
State law assigns decision-making authority to the boards of directors of corporations. State courts and lawmakers accord significant deference to the board in the execution of its duties. Among its duties, a board must employ effective oversight policies and procedures to manage known risks. The board of directors and senior management of firms integrating ADM platforms must monitor operations to mitigate litigation, reputation, compliance and regulatory risks that arise as a result of the integration of algorithms.
Evidence demonstrates that heterogeneous teams may identify and mitigate risks more successfully than homogeneous teams. These teams overcome cognitive biases such as confirmation, commitment, overconfidence, and relational biases. In the wake of the recent financial crisis, firms adopted structural and procedural governance methods to mitigate various types of risk; these approaches may prove valuable in mitigating the risk of algorithmic bias. More specifically, this Article explores the literature on gender balance in leadership and on boards and proposes that AI firms increase gender diversity among developers, senior managers, and members of the boards of technology firms in an effort to mitigate the risk of bias. Building such teams in firms developing or integrating AI will require creative and thoughtful recruiting and retention strategies. While improving gender balance may not alleviate the cognitive biases that influence AI development and management teams, this Article concludes that integrating diverse teams may mitigate exposure to risks engendered by integrating complex AI models.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
De Stefano, Valerio
Automation, Artificial Intelligence, and Labor Protection: Introduction Journal Article
In: Comparative Labor Law & Policy Journal, vol. 41, no. 1, pp. 3–14, 2019.
@article{de_stefano_automation_2019,
title = {Automation, Artificial Intelligence, and Labor Protection: Introduction},
author = {Valerio De Stefano},
url = {https://heinonline.org/HOL/P?h=hein.journals/cllpj41&i=15},
year = {2019},
date = {2019-01-01},
urldate = {2024-10-21},
journal = {Comparative Labor Law & Policy Journal},
volume = {41},
number = {1},
pages = {3–14},
abstract = {The Comparative Labor Law and Policy Journal is publishing a collection of articles on "Automation, Artificial Intelligence, and Labour Protection" edited by Valerio De Stefano (KU Leuven). This collection gathers contributions from several labour lawyers and social scientists to provide an interdisciplinary overview of how new technologies, including smart robots, artificial intelligence and machine learning, and business practices such as People Analytics, management-by-algorithm, and the use of big data in workplaces, far from merely displacing jobs, profoundly affect the quality of work. The authors argue that these issues depend, and can be affected by, policy choices - since they are not just the "natural" result of technological innovations - and call for adequate regulation of these phenomena. Contributing authors are Antonio Aloisi, Ilaria Armaroli, Fernanda Bárcia de Mattos, Janine Berg, Miriam Cherry, Emanuele Dagnino, Valerio De Stefano, Elena Gramano, Matt Finkin, Marianne Furrer, Frank Hendrickx, Parminder Jeet Singh, David Kucera, Phoebe Moore, Jeremias Prassl, and Uma Rani. This article introduces this collection and gives an overview of the issues discussed by the authors.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Levy, Frank
Computers and populism: artificial intelligence, jobs, and politics in the near term Journal Article
In: Oxford Review of Economic Policy, vol. 34, no. 3, pp. 393–417, 2018, ISSN: 0266-903X.
@article{levy_computers_2018,
title = {Computers and populism: artificial intelligence, jobs, and politics in the near term},
author = {Frank Levy},
url = {https://doi.org/10.1093/oxrep/gry004},
doi = {10.1093/oxrep/gry004},
issn = {0266-903X},
year = {2018},
date = {2018-07-01},
urldate = {2024-10-21},
journal = {Oxford Review of Economic Policy},
volume = {34},
number = {3},
pages = {393–417},
abstract = {I project the near-term future of work to ask whether job losses induced by artificial intelligence will increase the appeal of populist politics. The paper first explains how computers and machine learning automate workplace tasks. Automated tasks help to both create and eliminate jobs and I show why job elimination centres in blue-collar and clerical work—impacts similar to those of manufactured imports and offshored services. I sketch the near-term evolution of three technologies aimed at blue-collar and clerical occupations: autonomous long-distance trucks, automated customer service responses, and industrial robotics. I estimate that in the next 5–7 years, the jobs lost to each of these technologies will be modest but visible. I then outline the structure of populist politics. Populist surges are rare but a populist candidate who pits ‘the people’ (truck drivers, call centre operators, factory operatives) against ‘the elite’ (software developers, etc.) will be mining many of the US regional and education fault lines that were part of the 2016 presidential election.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Ajunwa, Ifeoma
Algorithms at Work: Productivity Monitoring Applications and Wearable Technology as the New Data-Centric Research Agenda for Employment and Labor Law Journal Article
In: Saint Louis University Law Journal, vol. 63, no. 1, pp. 21–54, 2018.
@article{ajunwa_algorithms_2018,
title = {Algorithms at Work: Productivity Monitoring Applications and Wearable Technology as the New Data-Centric Research Agenda for Employment and Labor Law},
author = {Ifeoma Ajunwa},
url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3247286},
year = {2018},
date = {2018-01-01},
urldate = {2024-10-08},
journal = {Saint Louis University Law Journal},
volume = {63},
number = {1},
pages = {21–54},
abstract = {Recent work technology advancements such as productivity monitoring platforms and wearable technology have given rise to new organizational behavior regarding the management of employees and also prompt new legal questions regarding the protection of workers’ privacy rights. In this Essay, I argue that the proliferation of productivity monitoring applications and wearable technologies will lead to new legal controversies for employment and labor law. In Part I, I assert that productivity monitoring applications will prompt a new reckoning of the balance between the employer’s pecuniary interests in monitoring productivity and the employees’ privacy interests. Ironically, such applications may also be both shield and sword in regards to preventing or creating hostile work environments. In Part II of this Essay, I note the legal issues raised by the adoption of wearable technology in the workplace, notably: privacy concerns; the potential for wearable tech to be used for unlawful employment discrimination; and worker safety and workers’ compensation issues. Finally, in Part III, I chart a research agenda for privacy law scholars, particularly in defining “a reasonable expectation of privacy” for employees and in deciding legal questions over employee data collection and use.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Estlund, Cynthia
What Should We Do After Work? Automation and Employment Law Journal Article
In: Yale Law Journal, vol. 128, no. 2, pp. 254–327, 2018.
@article{estlund_what_2018,
title = {What Should We Do After Work? Automation and Employment Law},
author = {Cynthia Estlund},
url = {https://heinonline.org/HOL/P?h=hein.journals/ylr128&i=282},
year = {2018},
date = {2018-01-01},
urldate = {2024-10-21},
journal = {Yale Law Journal},
volume = {128},
number = {2},
pages = {254–327},
abstract = {Will advances in robotics, artificial intelligence, and machine learning put vast swaths of the labor force out of work or into fierce competition for the jobs that remain? Or, as in the past, will new jobs absorb workers displaced by automation? These hotly debated questions have profound implications for the fortress of rights and benefits that the law of work has constructed on the foundation of the employment relationship. This Article charts a path for reforming the law of work in the face of both justified anxiety and uncertainty about the future impact of automation on jobs.
Automation is driven largely by the same forces that drive firms’ decisions about “fissuring,” or replacing employees with outside contractors. Fissuring has already transformed the landscape of work and contributed to weaker labor standards and growing inequality. A sensible response to automation should have in mind the adjacent problem of fissuring, and vice versa. Unfortunately, the dominant legal responses to fissuring—which aim to extend firms’ legal responsibility for the workers whose labor they rely on—do not meet the distinctive challenge of automation, and even modestly exacerbate it. Automation offers the ultimate exit from the costs and risks associated with human labor. As technology becomes an ever-more-capable and cost-effective substitute for human workers, it enables firms to circumvent prevailing legal strategies for protecting workers and shoring up the fortress of employment.
The question is how to protect workers’ rights and entitlements while reducing firms’ incentive both to replace employees with contractors and to replace human workers with machines. The answer, I argue, lies in separating the issue of what workers’ entitlements should be from the issue of where their economic burdens should fall. Some worker rights and entitlements necessarily entail employer duties and burdens. But for those that do not, we should look for ways to shift their costs beyond employer payrolls, or to extend the entitlements themselves beyond employment. The existing fortress of employment-based rights and benefits is under assault from fissuring and automation; it is failing to protect those who remain outside its walls, and erecting barriers to some who seek to enter. We need to dismantle some of its fortifications and construct a broader foundation of economic security for all, including those who cannot or do not make their living through steady employment.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Grimmelmann, James; Westreich, Daniel
Incomprehensible Discrimination Journal Article
In: California Law Review Online, vol. 7, pp. 164, 2017.
@article{grimmelmann_incomprehensible_2017,
title = {Incomprehensible Discrimination},
author = {James Grimmelmann and Daniel Westreich},
url = {https://www.californialawreview.org/online/incomprehensible-discrimination},
year = {2017},
date = {2017-03-01},
journal = {California Law Review Online},
volume = {7},
pages = {164},
abstract = {The following (fictional) opinion of the (fictional) Zootopia Supreme Court of the (fictional) State of Zootopia is designed to highlight one particularly interesting issue raised by Solon Barocas and Andrew Selbst in Big Data’s Disparate Impact. Their article discusses many ways in which data-intensive algorithmic methods can go wrong when they are used to make employment and other sensitive decisions. Our vignette deals with one in particular: the use of algorithmically derived models that are both predictive of a legitimate goal and have a disparate impact on some individuals. Like Barocas and Selbst, we think it raises fundamental questions about how antidiscrimination law works and about what it ought to do. But we are perhaps slightly more optimistic than they are that the law already has the doctrinal tools it needs to deal appropriately with cases of this sort.
After the statement of facts and procedural history, you will be given a chance to pause and reflect on how the case ought to be decided under existing United States law. Zootopia is south of East Dakota and north of West Carolina. It is a generic law-school hypothetical state, where federal statutes and caselaw apply, but without distracting state-specific variations. The citations to articles, statutes, regulations, and cases are real; RDL v. ZPD and Hopps v. Lionheart are not. Otherwise, life in Zootopia is much like life here, with one exception: It is populated entirely by animals.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Bodie, Matthew T.; Cherry, Miriam A.; McCormick, Marcia L.; Tang, Jintong
The Law and Policy of People Analytics Journal Article
In: University of Colorado Law Review, vol. 88, no. 4, pp. 961–1042, 2017.
@article{bodie_law_2017,
title = {The Law and Policy of People Analytics},
author = {Matthew T. Bodie and Miriam A. Cherry and Marcia L. McCormick and Jintong Tang},
url = {https://heinonline.org/HOL/P?h=hein.journals/ucollr88&i=997},
year = {2017},
date = {2017-01-01},
urldate = {2024-10-21},
journal = {University of Colorado Law Review},
volume = {88},
number = {4},
pages = {961–1042},
abstract = {Leading technology companies such as Google and Facebook have been experimenting with people analytics, a new data-driven approach to human resources management. People analytics is just one example of the new phenomenon of “big data,” in which analyses of huge sets of quantitative information are used to guide decisions. Applying big data to the workplace could lead to more effective outcomes, as in the Moneyball example, where the Oakland Athletics baseball franchise used statistics to assemble a winning team on a shoestring budget. Data may help firms determine which candidates to hire, how to help workers improve job performance, and how to predict when an employee might quit or should be fired. Despite being a nascent field, people analytics is already sweeping corporate America.
Although cutting-edge businesses and academics have touted the possibilities of people analytics, the legal and ethical implications of these new technologies and practices have largely gone unexamined. This Article provides a comprehensive overview of people analytics from a law and policy perspective. We begin by exploring the history of prediction and data collection at work, including psychological and skills testing, and then turn to new techniques like data mining. From that background, we examine the new ways that technology is shaping methods of data collection, including innovative computer games as well as ID badges that record worker locations and the duration and intensity of conversations. The Article then discusses the legal implications of people analytics, focusing on workplace privacy and employment discrimination law. Our article ends with a call for additional disclosure and transparency regarding what information is being collected, how it should be handled, and how the information is used. While people analytics holds great promise, that promise can only be fulfilled if employees participate in the process, understand the nature of the metrics, and retain their identity and autonomy in the face of the data’s many narratives.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Dau-Schmidt, Kenneth G.
The Impact of Emerging Information Technologies on the Employment Relationship: New Gigs for Labor and Employment Law Journal Article
In: University of Chicago Legal Forum, vol. 2017, pp. 63–94, 2017.
@article{dau-schmidt_impact_2017,
title = {The Impact of Emerging Information Technologies on the Employment Relationship: New Gigs for Labor and Employment Law},
author = {Kenneth G. Dau-Schmidt},
url = {https://heinonline.org/HOL/P?h=hein.journals/uchclf2017&i=69},
year = {2017},
date = {2017-01-01},
urldate = {2024-10-21},
journal = {University of Chicago Legal Forum},
volume = {2017},
pages = {63–94},
abstract = {The technology of production has always shaped the employment relationship and the issues that are important in labor and employment law. Since at least the late 1970s the American economy has adopted information technology that promises to change the employment relationship in ways at least as profound as those wrought by the other revolutions in general production technology, such as the adoption of steam power, electricity, or methods of mass production. The global network of programmable machines of the information age allows us to communicate and process much more information, much more quickly than ever previously imagined. This increased informational capacity has remade every aspect of the employment relationship including: job search, the organization of production, the methods of production, and the size of the relevant market. With the new information technology, we have progressed from a system of manual production in a single physical location serving regional or national markets, to one of highly automated production drawing on and serving a global economy. We have also progressed to the point where information technology can replicate some higher-order thinking through the rote analysis of data, yielding “artificial intelligence” that can displace human intelligence in the work place.
In this article, I examine how information technology has remade the employment relationship and the legal issues these changes have raised. I begin by chronicling those changes, their economic implications, and the legal issues they raise in job search, the organization of production, the demand for human skills, and participation in the global economy. I examine some now familiar problems including telecommuting, outsourcing, and international trade, but also analyze some more recent topics including using “big data” for “talent matching,” “work on demand apps,” “crowd-sourcing,” “job polarization,” and “artificial intelligence.” Although I hope that my economic analysis outlines and clarifies many of the labor and employment law issues the new technology raises, it is beyond the scope of this essay to attempt to resolve all of these issues for the reader. I leave the debate on at least some of these issues to the other authors in this volume, save that I venture the outline of an argument on what has emerged as the quintessential question: whether the new production relationships developed using information technology constitute employment relationships for the purpose of coverage under the web of protective legislation known as labor and employment law. I argue that we need to abandon outmoded legal definitions of who is an employee and who is an “independent contractor.” In their place we should adopt two unifying principles for coverage: the avoidance of “regulatory arbitrage” so that decisions on the organization of production are made on the basis of real economic advantages rather than just on the basis of avoiding legislative responsibility; and the assignment of responsibility for the provision of benefits under protective legislation to the cheapest cost avoider so as to minimize the burden of fulfilling the promises of protective legislation. These principles argue for broad, perhaps universal, coverage for workers under protective legislation, and that responsibility for garnering the money necessary to pay for these benefits generally be with the large corporations who organize production in the new economic environment.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Kim, Pauline T.
Data-Driven Discrimination at Work Journal Article
In: William & Mary Law Review, vol. 58, no. 3, pp. 857–936, 2016.
@article{kim_data-driven_2016,
title = {Data-Driven Discrimination at Work},
author = {Pauline T. Kim},
url = {https://heinonline.org/HOL/P?h=hein.journals/wmlr58&i=887},
year = {2016},
date = {2016-01-01},
urldate = {2024-10-21},
journal = {William & Mary Law Review},
volume = {58},
number = {3},
pages = {857–936},
abstract = {A data revolution is transforming the workplace. Employers are increasingly relying on algorithms to decide who gets interviewed, hired, or promoted. Although data algorithms can help to avoid biased human decision-making, they also risk introducing new sources of bias. Algorithms built on inaccurate, biased, or unrepresentative data can produce outcomes biased along lines of race, sex, or other protected characteristics. Data mining techniques may cause employment decisions to be based on correlations rather than causal relationships; they may obscure the basis on which employment decisions are made; and they may further exacerbate inequality because error detection is limited and feedback effects compound the bias. Given these risks, I argue for a legal response to classification bias—a term that describes the use of classification schemes, such as data algorithms, to sort or score workers in ways that worsen inequality or disadvantage along the lines of race, sex, or other protected characteristics.
Addressing classification bias requires fundamentally rethinking antidiscrimination doctrine. When decision-making algorithms produce biased outcomes, they may seem to resemble familiar disparate impact cases; however, mechanical application of existing doctrine will fail to address the real sources of bias when discrimination is data-driven. A close reading of the statutory text suggests that Title VII directly prohibits classification bias. Framing the problem in terms of classification bias leads to some quite different conclusions about how to apply the antidiscrimination norm to algorithms, suggesting both the possibilities and limits of Title VII’s liability-focused model.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
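Kim's notion of classification bias rests on a mechanism that is easy to make concrete: a score learned from biased historical outcomes reproduces that bias through a facially neutral proxy feature. The toy sketch below uses invented data and names (it is not code from the article); it scores applicants by the historical hire rate of their zip code, and because zip code correlates with group membership, equally qualified applicants inherit the historical disparity.

# Toy "bias in, bias out" demonstration: a facially neutral feature (zip code)
# acts as a proxy for group membership, so a score learned from biased
# historical hires reproduces the bias. Data and names are hypothetical.
from collections import Counter

# Historical records: (zip_code, group, hired). Group A lives mostly in 10001,
# group B mostly in 10002, and past hiring favored group A.
history = (
    [("10001", "A", True)] * 60 + [("10001", "A", False)] * 40 +
    [("10002", "B", True)] * 20 + [("10002", "B", False)] * 80
)

# "Training": score each zip code by its historical hire rate.
totals, hires = Counter(), Counter()
for zip_code, _, hired in history:
    totals[zip_code] += 1
    hires[zip_code] += hired
score = {z: hires[z] / totals[z] for z in totals}

# "Deployment": equally qualified applicants are ranked by zip-code score,
# so group B applicants inherit the historical disadvantage.
for zip_code, group in [("10001", "A"), ("10002", "B")]:
    print(group, zip_code, score[zip_code])  # A 10001 0.6 / B 10002 0.2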
Chander, Anupam
The Racist Algorithm? Journal Article
In: Michigan Law Review, vol. 115, no. 6, pp. 1023–1046, 2016.
@article{chander_racist_2016,
title = {The Racist Algorithm?},
author = {Anupam Chander},
url = {https://heinonline.org/HOL/P?h=hein.journals/mlr115&i=1081},
year = {2016},
date = {2016-01-01},
urldate = {2024-10-21},
journal = {Michigan Law Review},
volume = {115},
number = {6},
pages = {1023–1046},
abstract = {Are we on the verge of an apartheid by algorithm? Will the age of big data lead to decisions that unfairly favor one race over others, or men over women? At the dawn of the Information Age, legal scholars are sounding warnings about the ubiquity of automated algorithms that increasingly govern our lives. In his new book, The Black Box Society: The Hidden Algorithms Behind Money and Information, Frank Pasquale forcefully argues that human beings are increasingly relying on computerized algorithms that make decisions about what information we receive, how much we can borrow, where we go for dinner, or even whom we date. Pasquale’s central claim is that these algorithms will mask invidious discrimination, undermining democracy and worsening inequality. In this review, I rebut this prominent claim. I argue that any fair assessment of algorithms must be made against their alternative. Algorithms are certainly obscure and mysterious, but often no more so than the committees or individuals they replace. The ultimate black box is the human mind. Relying on contemporary theories of unconscious discrimination, I show that the consciously racist or sexist algorithm is less likely than the consciously or unconsciously racist or sexist human decision-maker it replaces. The principal problem of algorithmic discrimination lies elsewhere, in a process I label viral discrimination: algorithms trained or operated on a world pervaded by discriminatory effects are likely to reproduce that discrimination.
I argue that the solution to this problem lies in a kind of algorithmic affirmative action. This would require training algorithms on data that includes diverse communities and continually assessing the results for disparate impacts. Instead of insisting on race or gender neutrality and blindness, this would require decision-makers to approach algorithmic design and assessment in a race and gender conscious manner.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Cherry, Miriam A.
The Gamification of Work Journal Article
In: Hofstra Law Review, vol. 40, no. 4, pp. 851–858, 2011.
@article{cherry_gamification_2011,
title = {The Gamification of Work},
author = {Miriam A. Cherry},
url = {https://heinonline.org/HOL/P?h=hein.journals/hoflr40&i=873},
year = {2011},
date = {2011-01-01},
urldate = {2024-10-21},
journal = {Hofstra Law Review},
volume = {40},
number = {4},
pages = {851–858},
abstract = {In the language of cyberspace, introducing elements of fun or game-playing into everyday tasks or through simulations is known as the process of “gamification.” The idea that people could be working while they play a video game – in some instances without even knowing that they are working – is becoming part of our reality. Gamification is an important element of what in previous writing I have termed “virtual work,” that is, work that is taking place wholly online, in crowdsourcing arrangements, or in virtual worlds. The gamification of work is an important trend with important implications for employment law. This short “Idea” essay begins to describe and formulate theories for thinking about these new forms of work.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}