Jul 04 2020

Few things are as annoying as a device that tries to be helpful and is completely, utterly wrong. Not only do you gain no convenience, because you still have to go back and fix it, but it feels worse psychologically: instead of the device making things better by handling them automatically, you’re having to undo its work and do it yourself anyway. If these were settings users could easily change, it would be a minor inconvenience, because we could update a configuration and be done with it. Sadly, that’s not the case, especially with devices that are supposed to be “smart” and thus figure it out without us having to do anything, or with software that doesn’t expose these settings to users at all.

The constant push to be “smarter” is understandable. After all, demos of “smart” technology look amazing in any context, and the potential value it brings to people is aspirational. The brand loyalty that could come from being one of the first companies in an industry to successfully incorporate “smart” into what they do could translate into millions of dollars over a customer’s lifetime. Not to mention that being “smart” is going to be the standard basically everywhere, and that standard is coming sooner rather than later, so you might as well get on the bandwagon now. Even if you can’t make truly “smart” things, there’s an increasing expectation that you make the “dumb” things you’re building now seem smarter. The assumption people seem to be making on that front is that the way to make dumb things seem smarter is to make assumptions about how people use your products and then hard-code those assumptions in as the default behavior (also known as “‘just like Apple products’ syndrome”). The thing to remember is that everything I just typed rests on one key assumption: that the “smarter” settings and actions you’re designing actually work.

Here’s a real-world example of this problem. Most of the time I’m in my car, I connect my phone via Bluetooth. This works exactly the way I’d expect and want – whatever I’m listening to at the time just starts playing through the car stereo. Sometimes, my wife connects her phone to play something through the car’s sound system. Somewhere in the design process, someone decided that plugging a phone in directly shouldn’t just route the sound that would have come out of the phone’s speakers to the car’s speakers instead – it should also make the phone start playing its entire music library, in alphabetical order, from the beginning. It doesn’t matter if she had another song queued up and ready to play, or was actively playing a song already: plugging her phone directly into the car changes the song to the first one in her library (by title) and automatically plays that instead. She could even be trying to play a podcast – something from a completely different app. Nope, stop that, open the music, play the first song by title. We go through this frustration because someone, somewhere, decided to take this opportunity to be “smart,” except I think they’re an idiot who made playing anything from a phone through anything other than Bluetooth unusable.
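
To make the contrast concrete, here’s a minimal sketch of the two behaviors in Python. Every name in it is invented for illustration – no real car or phone API is being quoted, it’s just the shape of the decision:

```python
# A minimal sketch of the two behaviors described above. Every name here is
# invented for illustration; no real car or phone API is being quoted.

class PlaybackState:
    def __init__(self, now_playing=None, playing=False):
        self.now_playing = now_playing   # whatever the user queued, if anything
        self.playing = playing
        self.output = "phone_speaker"

def on_wired_connect_smart(state, music_library):
    """The "smart" behavior: hijack playback and start the library from the top."""
    state.output = "car_stereo"
    if music_library:
        state.now_playing = sorted(music_library)[0]  # first song by title
        state.playing = True                          # regardless of what was queued
    return state

def on_wired_connect_expected(state):
    """The expected behavior: reroute the audio and touch nothing else."""
    state.output = "car_stereo"
    # now_playing and playing stay exactly as the user had them -- a queued
    # song, a podcast in another app, or nothing at all.
    return state
```

The only difference between the two handlers is that the second one stops after the one thing the user actually did: connect the phone.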

Another example of the problem in action is Apple’s Siri. Over the years Apple has worked hard to add more capabilities to Siri (or, as they like to say, making it “smarter”). On the surface, and on the keynote demo stage, that means Siri is getting better. In reality, Siri has become increasingly unusable over the years. In addition to periods where Siri can’t transcribe what I’m saying correctly no matter how I phrase it or how many times I repeat it (something that hasn’t gotten smarter in the nearly nine years since Siri was released), commands that previously worked no longer do. Instead, I get responses or options that aren’t relevant, and that’s being charitable. For instance, my son loves the Ghostbusters theme song from the original 1984 movie. When Siri was “dumber,” just saying “Play Ghostbusters” would play the song – it was the only thing with that name downloaded on the phone, so it was pretty much the only option. Over the years, Siri has “improved” so much that saying “Play Ghostbusters” now asks me which version of Ghostbusters I want – offering to stream the original 1984 movie, the 2016 remake, and apparently some TV shows too. The one thing not on that list is the song with that name already sitting on my phone. Now I have to ask for the song specifically, with the name of the artist – “Play the song ‘Ghostbusters’” doesn’t work; I have to say “Play ‘Ghostbusters’ by Ray Parker, Jr.”
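
If I had to sketch the disambiguation rule I’d want instead, it would look something like this – purely hypothetical, not Siri’s actual logic, with made-up data shapes – prefer what’s already on the device before offering anything that has to stream:

```python
# A hypothetical disambiguation rule -- not Siri's actual logic -- that prefers
# content already on the device before offering anything that streams.

def resolve_play_request(query, local_items, streaming_items):
    """Return matches from the local library first; only then fall back to streaming."""
    q = query.lower()
    local_matches = [item for item in local_items if q in item["title"].lower()]
    if local_matches:
        return local_matches  # e.g. the downloaded "Ghostbusters" song
    return [item for item in streaming_items if q in item["title"].lower()]

# One downloaded song beats a list of things I could stream.
local = [{"title": "Ghostbusters", "artist": "Ray Parker, Jr.", "kind": "song"}]
streaming = [{"title": "Ghostbusters", "year": 1984, "kind": "movie"},
             {"title": "Ghostbusters", "year": 2016, "kind": "movie"}]
print(resolve_play_request("Ghostbusters", local, streaming))
```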

These settings and actions are being coded into software, and the idea that you can just work on something for a really long time and then ship software that works perfectly isn’t really a thing. Oh, you can test the crap out of it and ship something that’s bug-free, assuming your test coverage is good enough and your user base is small enough that the use cases can be exhaustively defined before going to production. But part of this stuff working is that it has to solve real problems for real users, in a way that is, given the nature of making things “smarter,” highly personalized to each of them individually. That requires the ability to gather feedback. That’s why smart devices like home automation tools and smart speakers tend to be popular and work well – they’re always online, which lets the companies that program them gauge user reactions to their software, make adjustments based on actual user behavior and responses to the devices in question, and push those adjustments out to their users regularly.
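
That feedback loop doesn’t have to be elaborate. Here’s a rough, hypothetical sketch of one way to measure it – the 30-second window and 50% threshold are numbers I made up – where the signal is simply whether users immediately undo the automatic action:

```python
# A hypothetical feedback loop for an automatic behavior. The 30-second window
# and 50% threshold are invented for illustration.

class AutoActionFeedback:
    def __init__(self, override_window_s=30, disable_threshold=0.5):
        self.override_window_s = override_window_s
        self.disable_threshold = disable_threshold
        self.fired = 0
        self.overridden = 0

    def record(self, seconds_until_user_corrected=None):
        """Log one automatic action and whether the user undid it quickly."""
        self.fired += 1
        if (seconds_until_user_corrected is not None
                and seconds_until_user_corrected <= self.override_window_s):
            self.overridden += 1

    def should_keep_doing_it(self):
        """If most users immediately undo the action, stop doing it."""
        if self.fired == 0:
            return True
        return (self.overridden / self.fired) < self.disable_threshold
```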

Where these attempts at being smarter fail is when companies either lack the ability to gather feedback indicating how useful their “smartness” was for the user, or refuse to try to critically evaluate how well their product decision-making actually worked (I’m looking at you, Apple). There’s nothing wrong with “dumb” software or devices, but you need to build them like they’re dumb. Building a dumb device like it’s smart, or at least like it already knows best, is basically pinning your entire strategy for product success on the hope that the users of whatever you’re making think exactly like the product managers who wrote the feature requirements. That’s never going to happen. So instead of trying to decide what to do for the user in every situation, consider defaulting to not actually doing anything.

At first glance, that seems like a terrible idea. Doing things that way means we’ve gone from doing something that helps some people to doing nothing and helping nobody. What that argument doesn’t consider is all the people for whom you’re doing the wrong thing – they’re going to have to come along behind you and fix the thing you got wrong. Having to fix something a piece of technology got wrong is worse than just doing it yourself from the start, a problem that’s compounded by how hard it is to adjust settings. I know this idea is wildly counter-intuitive. The user just interacted with the software, so of course it should Do Something (TM) – that’s how software works: users interact and it Does Something (TM). Where people cause trouble for their users is when the software Does Something the user didn’t actually ask it to do. The last thing that makes “default do nothing” sound like a bad idea is that it runs counter to the whole point of the push to make things “smarter.” Again, that only works if you’re always right. Instead of coding in a bunch of decisions you wanted to make on behalf of the user, just do the thing they asked – no more, and no less. All you know for sure is that the thing the user requested is the correct action. Stick with the known right answer, and you’ll never have to worry about being wrong.
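
If the automatic behavior has to exist at all, one way out is to make it an explicit, user-changeable setting whose default is to do nothing. A rough sketch, building on the earlier one and again using invented names:

```python
# A rough sketch of "default to doing nothing." The convenience still exists,
# but only as a setting the user turned on; all names here are invented.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StereoState:
    output: str = "phone_speaker"
    now_playing: Optional[str] = None
    playing: bool = False
    library: List[str] = field(default_factory=list)

def handle_connect(state: StereoState, on_connect: str = "do_nothing") -> StereoState:
    state.output = "car_stereo"                  # the one thing the user asked for
    if on_connect == "resume" and state.now_playing:
        state.playing = True                     # opt-in convenience
    elif on_connect == "play_library" and state.library:
        state.now_playing = sorted(state.library)[0]
        state.playing = True                     # opt-in, not a hard-coded guess
    # "do_nothing": route the audio and stop there
    return state
```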

One of the first things people learn when they start programming is that computers are “stupid.” They do only what you programmed, no more and no less. At some point, we stopped thinking of this as a feature and started thinking of it as a bug we had to code around. Unfortunately, this led to people building experiences that don’t do what users expect, while thinking they’re being clever. The problem is that these “clever” people have no way of learning from user behavior, the users have no way of fixing the “clever” people’s mistakes, and making any changes to the software the “clever” people screwed up is so difficult that corrective updates are essentially non-existent – assuming users even trust these “clever” people to get the things nobody asked them to do in the first place right this time. The truth is, computers have it right: do what’s asked and nothing else. The user knows what they want better than the developer does, so either let them tell you, or at least write a system smart enough to recognize a pattern and follow it.
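
Recognizing a pattern can be as simple as remembering what the user explicitly did last time in the same context and repeating only that – never inventing a behavior nobody asked for. A minimal, hypothetical sketch:

```python
# A minimal, hypothetical sketch of "recognize a pattern and follow it":
# remember the user's last explicit action per context, never a guess.

class LastChoiceMemory:
    def __init__(self):
        self._by_context = {}  # e.g. {"car_aux": "resume_podcast"}

    def record(self, context, explicit_action):
        """Only explicit user actions get remembered -- never guesses."""
        self._by_context[context] = explicit_action

    def suggest(self, context):
        """Return the remembered action, or None, which means do nothing."""
        return self._by_context.get(context)

memory = LastChoiceMemory()
memory.record("car_aux", "resume_podcast")
print(memory.suggest("car_aux"))    # "resume_podcast"
print(memory.suggest("bluetooth"))  # None -> default to doing nothing
```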
