Remember a few months ago when Microsoft produced a friendly AI chatbot named Tay, designed to interact like a 19-year-old? It was a social and marketing experiment that quickly morphed into a social nightmare. Within a day, bombarded by hateful Twitter trolls, Tay had turned into a white supremacist—tweeting racist and offensive statements—and Microsoft had to shut it down.
Tay's failure was a bug in the system: the consequences weren't major, but it was one example of how AI can act in ways even its creators don't anticipate.
It’s a story cited by Samuel Arbesman in his new book Overcomplicated: Technology at the Limits of Comprehension. In it, he elucidates how the same technological advances that have improved our lives are also making the world harder and harder to comprehend. We see this in the bugs, accidents, and “flukes” that often arise with little notice, such as the July 8th glitch at the New York Stock Exchange that suspended trading for several hours, or the reported failures in Toyota cars that caused vehicles to accelerate when drivers hit the brakes.
Arbesman, who is a “complexity scientist” and the scientist in residence at Lux Capital, argues that complexity is everywhere—in obvious places, like a powerful computer, but also in our household appliances, and in the tens of millions of lines of code in our cars.
Everyday consumers are often shielded from this complexity by intuitive user interfaces and may not become aware of it until something goes wrong. But to Arbesman, what’s scary is not that ordinary people don’t understand the systems and machinery now responsible for society’s function and our individual safety. It’s that even those who are supposed to understand them often don’t.