The Dangers of Dumb AI

SkyNet isn’t coming for you, but Excel might be

[Image: an airship designed by the AlphaMorph AI]

A few years ago, I was working on an artificial intelligence startup called AlphaMorph. We worked with a kind of AI known as genetic algorithms (think evolution, not brains) and used the wonderful little game Airships: Conquer the Skies as a testing environment. The goal of the game is to design digital airships and command them to victory. Each ship is made of dozens of components (cannons, sails, balloons) that the player can link up.

In one of our trials, we gave the algorithm a simple instruction: defeat as many enemy vessels as possible (in 1v1 rounds) with the cheapest possible airship. After generations of trial and error, and days of computer time, AlphaMorph produced its answer: a tiny ship that would fly above enemies, attach itself via harpoon, and fire missiles point-blank into their hull.

This strategy was devastatingly effective — but it was also suicidal.

In nearly every trial, the AI-produced ship would destroy itself only seconds after the enemy fell. To our human eyes, this looked like a failure. After all, in both the game and real life, a “victory” which requires self-destruction is hardly a victory at all. But to the AI, this solution was ideal. Its instructions were to destroy as many ships as possible as cheaply as possible. It did that, and only that.

Self-survivability was not included in the instructions, and so it was ignored.
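
In genetic-algorithm terms, the mistake lived in our fitness function. Here is a minimal sketch of roughly what we rewarded; the names and the exact formula are illustrative, not our actual code:

```python
def fitness(ships_destroyed: int, build_cost: float) -> float:
    """Hypothetical fitness score: reward kills, punish cost, nothing else.

    Note what is missing: no term rewards the ship for surviving the
    battle, so evolution is free to discover suicidal designs.
    """
    return ships_destroyed / build_cost

# A cheap kamikaze ship that trades itself for one kill...
print(fitness(ships_destroyed=1, build_cost=50.0))   # 0.02
# ...outscores an expensive ship that wins and comes home intact.
print(fitness(ships_destroyed=1, build_cost=400.0))  # 0.0025
```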

In the world of AI, the definition of “intelligence” is fairly controversial. To most people, intelligence means adaptability, some degree of self-awareness, and the ability to carry prior knowledge into new situations. Held to that standard, however, barely any machine intelligences (if any at all) would qualify.

From an enemy in a video game, to an Excel model, to a genetic algorithm, the vast majority of “AIs” in use today are what could be called “dumb.” Nearly every video game AI simply runs down a checklist of “if-then” reactions to the player, most Excel models are little more than equations calculated in series, and genetic algorithms are no more intelligent than Darwinian evolution. In short, most AIs cannot adapt, they cannot change their program, and they cannot truly learn or remember; it’s certainly artificial, but it’s not particularly intelligent after all.
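
As an illustration, a typical video-game “AI” can be sketched as nothing more than a prioritized checklist. Everything here (the states, the thresholds, the actions) is invented for the example:

```python
def enemy_action(enemy_health: int, player_distance: float,
                 has_ammo: bool) -> str:
    """A hypothetical video-game 'AI': a fixed checklist of if-then rules.

    It never learns, adapts, or remembers; it just walks the list top
    to bottom and performs the first action whose condition matches.
    """
    if enemy_health < 20:
        return "flee"
    if player_distance < 2.0:
        return "melee_attack"
    if has_ammo:
        return "shoot"
    return "chase"
```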

But that doesn’t mean “dumb AI” is weak — or safe.

Every day, across the world, millions of decisions are outsourced to algorithms (one kind of “dumb AI” we discussed above). Excel spreadsheets run numbers and tell analysts how an advertisement, service, downturn, or market shift impacted business. And, because nearly every institution we have is dedicated to generating profit, most of these dumb AIs spend their days telling human beings what is or is not profitable.

Moral considerations never enter these models. Algorithms, like all dumb AI, are only able to play by the rules we write and strive for the goals we set. If a corporation uses algorithms to increase profits, they will do just that — no matter the cost.

This is dangerous for two reasons. For one, it means giving immense power to undemocratic, unintelligent, and truly amoral machines. For another, it absolves individual humans of having to face moral choices. A rideshare company’s pricing algorithm can determine, rightly or wrongly, that a surge in requests for rides is an opportunity for profit, and increase the cost as a result. Done without direct human oversight, this allows companies like Uber to extract as much profit as possible from, say, a concert which has recently ended and generated a large pool of demand in one area. No person is asked to justify the increases, and if they were, they could just point to the math and shrug.
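
To make that concrete, here is a minimal sketch of how a naive surge-pricing rule might work. The function, thresholds, and formula are hypothetical (Uber’s actual system is proprietary and far more sophisticated); the point is what the objective never asks:

```python
def surge_multiplier(ride_requests: int, available_drivers: int,
                     cap: float = 5.0) -> float:
    """Hypothetical surge pricing: scale fares with the demand/supply ratio.

    Nothing here asks WHY demand spiked: a concert letting out and a
    crowd fleeing danger look identical to the rule.
    """
    if available_drivers == 0:
        return cap
    demand_ratio = ride_requests / available_drivers
    # More requests per driver means higher fares, up to the cap.
    return min(max(1.0, demand_ratio), cap)

print(surge_multiplier(ride_requests=900, available_drivers=200))  # 4.5
```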

In most situations, outsourcing such a decision to an algorithm (or any form of dumb AI) is mostly just annoying. Sure, it’s no fun to pay more to leave a concert, but it’s hardly life and death. Except, one day, it was.

After a group of knife-wielding terrorists drove into crowds on London Bridge, Uber’s algorithm noticed an opportunity. Hundreds of requests came surging in for rides away from the danger and its aftermath, and the algorithm dutifully increased rates as a result. What had been an annoyance in one situation (likely the situation Uber’s developers had considered when programming this AI) became a robbery in another — your money or your life.

After this incident, there was a massive and understandable backlash against Uber. It’s not clear whether the surge pricing algorithm was ever changed, but at least the harm it caused was noticed, highlighted, and organized against. This, however, is an extreme example of harm done by dumb AIs. The vast majority of similar decisions, each with the capacity to do damage and each with little direct human oversight, are never acknowledged by their victims or perpetrators.

What about a salary sheet that recommends who gets raises? A productivity algorithm that leaves a warehouse understaffed? A supply chain monitor that proposes leveling another acre for palm oil? Who objects to, or even notices, the AIs making those calls?

Millions of little choices, all made by dispassionate, unintelligent, and unbiased machines completing their instructions perfectly every day. Each one is told to maximize shareholder profit and each does so immaculately — until one day we are left with an uninhabitable planet, replete with suffering, and containing whatever remains of the servers housing those imagined profits.

The idea of a human-like intelligence deciding humanity should be destroyed is romantic. It’s roughly akin to a malevolent god deciding to punish humanity for our sins.

The more probable horror, however, is far less cinematic. Dumb algorithms, those which may adapt but ultimately exist to carry out very simple instructions, are far more likely to be our downfall. The instruction “generate profit” or “create paperclips,” if given without sufficient limitations (limitations humans may not be able to predict, much less implement), could result in any number of horrors, all wrought in pursuit of the simplest goals. Human lives, a habitable climate, and concern for ecologies do not matter to algorithms by default. They have to be coded in, either as an instruction or as a rule, and if we aren’t comprehensive enough, they may well be ignored.
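
Returning to the hypothetical fitness sketch from earlier, “coding it in” might look like adding an explicit rule, with the obvious catch that we have to anticipate every value worth protecting:

```python
def fitness_v2(ships_destroyed: int, build_cost: float,
               ship_survived: bool) -> float:
    """The same hypothetical objective, with survivability coded in as a rule.

    This patches the one failure mode we happened to notice; any value
    we failed to anticipate (crew safety, collateral damage, ...) is
    still worth exactly zero to the algorithm.
    """
    if not ship_survived:
        return 0.0  # hard rule: suicidal "victories" score nothing
    return ships_destroyed / build_cost
```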

AlphaMorph was not evil, or even wrong; it was misled. I misled it. By providing incomplete instructions, I left the algorithm with an opportunity to find “solutions” no human would accept.

Similarly, it’s not that Excel, or evolution, or programming, or math is evil; it’s that we do not understand the power we’ve given it, the holes in our instructions, or the limitations of our code. Any human future will need some kind of dumb AI assistance. Whether post-apocalyptic or hyper-futuristic, dumb AIs are excellent at completing repeated tasks and helping humans find optimal solutions. But they are a tool, not a cure-all.

Intelligence has the capacity for fundamental re-evaluation and change. Algorithms do not. An intelligence might be wrong, or even evil, but “dumb algorithms” are something much more dangerous.

They are inevitable.

If given the power, they will simply and unthinkingly execute their instructions; if they execute us in the process, they won’t even notice.

Special thanks to Phasma Landrum for suggesting this topic.
