The Legal Reason AI Layoffs Are Less Likely Than You Think

Humans can be sued — computers can’t.

Matthew Barad
4 min read · May 5, 2023

With the meteoric rise of ChatGPT and other generative artificial neural networks, social media has filled with dire warnings of the future to come. “AI is going to nuke the bottom third of performers in jobs done on computers — even creative ones — in the next 24 months,” one tweeter warns, while others insist that every kind of worker, from artist to model, will be removed from the workforce.

Although nobody knows the future, I tend to doubt these predictions. For one thing, this generation of deep learning neural networks still has obvious flaws, some of which it may never be able to overcome. For another, and one I’ve rarely heard considered, human beings still have one significant, possibly insurmountable advantage: we can be sued.

Self-driving cars, one of the oldest hyped applications of advanced artificial intelligence, have faced a legal question since their inception. If (or when) a self-driving car crashes, who is liable? At one time the obvious answer would have been to blame the coder who wrote the faulty lines of code, but today even the code itself is, in a manner of speaking, machine-written. The human input is now limited to setting up the “brain” to learn and feeding it information. Even if the exact flaw can be located, it would be extremely difficult to attribute it to either the initial neural network or the training data, much less to the humans who provided them.

As a practical matter, the legal question of liability for automated self-driving has not been resolved so much as avoided. Tesla’s “full self driving” mode requires that the driver sign a waiver and agree to remain fully capable of taking over at any moment. Some have suggested that Teslas, upon detecting an imminent crash, shut down their automation precisely so that the driver, not the AI or the corporation, is liable at the moment of the accident. Even in Tesla’s Vegas Loop, where cars have essentially no ability to maneuver or crash, humans are still made to “drive.”

Though “full self-driving” AI vehicles are perhaps the most imminent and easiest-to-grasp example of AI corporations relying on human actors for liability reasons, they are certainly not the only one. I have previously written about the implications of AI algorithmic trading for US and global securities class actions. In legal venues where human actors must attest in court to having read or relied on certain information in order to claim they were damaged by fraud, human traders may be kept on staff for the sole purpose of attesting to “reliance” in these fraud cases, even (or especially) when all the trading is done by non-human actors. An AI can fairly easily analyze the market and execute trades, but it cannot testify in court.

Many similar cases exist.

It may be better for a hospital to have a doctor “review” an AI diagnosis in order to protect the institution from malpractice claims. A law firm may prefer to have lawyers read and approve AI-written filings for the same reason. Quality assurance departments have existed in manufacturing for decades for just that reason: to find mistakes before they reach customers or, perhaps more importantly, to localize blame when those mistakes inevitably breach containment.

This paradigm can be repeated across the entire economy. From making coffee to diagnosing diseases, there are clear legal advantages to having a human actor employed to “review” and “approve” AI decisions. Because our legal system, and indeed the very idea of liability, is built on the assumption that humans are the decision-makers, it may be easier for corporations to maintain a staff of button-pushing “AI reviewers” even as more and more tasks are, in reality, done by AI. To some extent, corporate hierarchies are already designed for this purpose: to locate the exact source of liability and excise it if necessary. Fitting AI into this structure may be both legally safer and practically simpler than attempting to create truly automated corporations.

At this point, I feel obligated to point out what a grim and pointless future that would be. Even amid the empty monotony of most modern labor, human beings are at least involved in creative and social acts. Whether working at a Dunkin’ Donuts or a major bank, the social and psychological benefits of working towards goals with others are very real. The introduction of full AI systems may not destroy the economy altogether, but the likely alternatives are not much better. A workforce filled with truly useless jobs, existing only to shield business owners from liability — a workforce replete with millions of jobs consisting of staring at screens and pressing “approve” — is in many ways a nightmare. Whatever value we once found in our work would be gone.

As we rush into the brave new world of advanced AI systems, I expect we will find much stranger realities than mass layoffs and poverty. This legal question is only one narrow look at the strange problems we will face as human societies are contorted to accommodate AI economies. We still have time to preempt disaster, but we must neither delay nor limit the scope of our risk analysis.

If we fail, economic collapse may seem merciful in comparison.
