The Secret UX Issues That Will Make (Or Break) Self-Driving Cars
Self-driving cars went viral again recently, when Tesla dropped a $2,500 software update on its customers that promised a new “autopilot” feature. The videos are fascinating to watch, mostly because of what’s not happening. There’s one, titled “Tesla Autopilot tried to kill me!”, in which a guy drives with his hands off the wheel for the first time. He hasn’t replaced driving with, say, watching a movie or relaxing; instead, he’s replaced the stress of driving with something worse. (…)
Somewhere in between where we stand now, annoyed at how much time we waste sitting in traffic, and the future, where we’re driven around by robots, there will be hundreds of new cars. Their success doesn’t simply depend on engineering. It depends on whether we, the people, understand what some new button in our brand-new car can do. Can we guess how to use it, even if we’ve never used it before? Do we trust it? Getting this right isn’t about getting the technology right; the technology exists, as the Tesla example proved so horribly. The greater challenge lies in making these technologies into something we understand, and want to use.
Great piece about one of the most compelling questions around the advent of self-driving cars: how do we build trust in a machine?
via fastcodesign.com