Navigating Our Imminent Future: A Conversation With Mike Kuniavsky
"I want to see the positive side of it, and I want to see where we really can give people superpowers—but with the knowledge that significant challenges happen with every single innovation."
Before he became interested in UX research or machine learning, Mike Kuniavsky was interested in engines. Or, to be more specific, he was interested in the effects of automation on people like his father, an engineer at Ford Motor Company who tuned engines. Testing for various conditions that might stress those engines was conducted manually until the mid-’80s, when it became computerized. The implications for the humans who once ran these tests were profound. The implications for Kuniavsky were monumental too. He wrote a high school honors thesis on the impacts of computerization on middle management. That thesis was the genesis of a career centered on the nexus of humans and the technologies that would become an indelible part of their lives.
Recognizing our need to be able to easily find information in the vast new landscape known as the World Wide Web, Kuniavsky was part of the team that designed the early search engine HotBot. Identifying the need for more satisfying interactions once we found what we were looking for, he helped pioneer the field of UX research and design, writing the widely used text Observing the User Experience. And seeing that our relationship to nearly every object we use in our everyday lives was shifting dramatically, he wrote another book, Smart Things, and shifted his focus to product research and development. Currently, Kuniavsky is running an R&D lab focused on people’s experiences of emerging technologies. He and his team take a three-to-five-year outlook, firmly situating him in a middle space between heads-down developers and twenty-years-out futurists.
From that middle space, Kuniavsky sees potential where many others see pending doom. This, it seems, is the crux of his three decades of work. From helping fellow students figure out their Mac Pluses at the college computer lab to helping Fortune 500 companies devise sensor-enabled shipping labels, Kuniavsky identifies in new technologies the societal good many of us can’t see through our fear. At the same time, he recognizes the validity of our fears and reflects them back to developers so they might best respond to our needs: practical, emotional, and ethical.
The aim of machine learning, Kuniavsky says, is not to take over people’s lives by doing things for them. Rather, “it’s to help them do those things better with either more information or with better tools that allow people to do things with their hands and their minds and their bodies that they would like to be able to but currently can't.” There will, inevitably, be fallout—the world is going to look a lot different for an anesthesiologist in the next 10 years. And it’s not that Kuniavsky doesn’t see risks as computers do more and more of our thinking for us. It’s just that he understands that human decision-making is risky, too.
“If you go to Wikipedia and look at the list of cognitive biases, human brains are not these perfect things. They're far from it,” he says. “They’re really, really flawed in many ways—interesting ways—but the way that our brains have evolved, there's all kinds of things we're really terrible at.”
Statistics, for instance. “When we see something happen, we want to find the single root cause of that thing. Well, in fact, in the world, most of the time there isn't a single root cause. It's an emergent property of a whole bunch of different things going on at the same time. So looking for that single root cause is sending you down the wrong path,” he says. “Similarly, we're always looking at, ‘This thing happened and therefore it's really important and therefore it happens a lot.’ We almost never think about all the things that didn't happen at that same point. Our brains are really bad at that.”
Computers don’t have these problems, he argues, and our need to have a human in charge often boils down to comfort—and culpability. “This is why we have pilots in airplanes but not drivers in little airport shuttle trains, even though computers have been able to take off and land airplanes without having a pilot on board for years—in many cases better than pilots can,” he says. “The reason we don't do that is we want someone to blame. We want someone there that we feel is like us, who will have our values, who will be, essentially, reflecting our needs.”
"Navigating our imminent future will require profound shifts in perspective."
For all of human existence, we’ve expected that our activities will be governed by human decision-making. There is an entire literary genre devoted to our anxieties about ceding that control. Kuniavsky says navigating our imminent future will require profound shifts in perspective. “It’s essentially a cultural negotiation that we have with technology,” he says.
The unsettling irony, of course, is that as humans are, quite literally, driving things less and less, digital culture is enabling us to project ourselves everywhere. Kuniavsky sees an antecedent in Father Coughlin, the priest from suburban Detroit who, in the 1930s, used the emerging technology of radio to broadcast his antisemitic views across the country. “I think we're literally in an analogous version of that—a situation where technology creates a fundamentally different level of access for every kind of ideology and we're not really built for dealing with that,” he says.
In other words, we’ve been here before. It’s fraught. But it is happening. Rigorous checks and balances are required—AI ethics is a major part of Kuniavsky’s work. But he believes society will be better in the end. “You know, we are all kind of roiled by these waves all the time,” he says. “It's a really challenging position to be in, but it is the human condition, unfortunately, since perhaps enclosure started in the UK in the 17th century. I want to see the positive side of it, and I want to see where we really can give people superpowers—but with the knowledge that significant challenges happen with every single innovation.”