The workforce is digitizing. Leading consultancies estimate that algorithmic systems will replace 45 percent of human-held jobs by 2030. One feature that algorithms share with the human employees they are replacing is their capacity to cause harm. Even today, corporate algorithms discriminate against loan applicants, manipulate stock markets, collude over prices, and cause traffic deaths. Ordinarily, corporate employers would be responsible for these injuries, but the rules for assessing corporate liability arose at a time when only humans could act on behalf of corporations. Those rules apply awkwardly, if at all, to silicon. Some corporations have already discovered this legal loophole and are rapidly automating business functions to limit their own liability risk.
This Article seeks a way to hold corporations accountable for the harms of their digital workforce: some algorithms should be treated, for liability purposes, as corporate employees. Drawing on existing functional characterizations of employment, the Article defines the concept of an “employed algorithm” as one over which a corporation exercises substantial control and from which it derives substantial benefits. If a corporation employs an algorithm that causes criminal or civil harm, the corporation should be liable just as if the algorithm were a human employee. Plaintiffs and prosecutors could then leverage existing, employee-focused liability rules to hold corporations accountable when the digital workforce transgresses.
Mihailis E. Diamantis, Employed Algorithms: A Labor Model of Corporate Liability for AI, 72 Duke Law Journal. Available at: https://scholarship.law.duke.edu/dlj/vol72/iss4/2