Products: “Smart” or “obedient”?

 

Given current technology, I think it’s futile to try to design commercially available smart products. What we should be trying to do is design better, more obedient products. In a perfect world, my dog wouldn’t fetch me a cold beer every day at 7 PM or whenever the temperature goes above 85°F. No, my dog would fetch me a cold beer with a 100% success rate whenever I asked him to.

In 2014, Palate Home set out to design a smart grill that could make great, sous-vide quality food without the need for a water bath. The company was about to graduate from a product accelerator program with prototype hardware and wanted to initiate a funding round. But the industrial design of the grill was still in flux and the team was debating which controls needed to be present in the hardware and/or a mobile app. 

The team really wanted to make the product “smart.” Maybe the app could make smart suggestions based on past meals or the current contents of the user’s refrigerator. Or the grill itself might do something smart regarding, say, cooking time and finishing. I kept an open mind about smartness, and argued we really needed to understand user desires around meal creation before we started in on smart features.

But before plotting user journeys and doing competitive analysis, I had to design a new logo. There was no way I wanted to be seen in a t-shirt sporting the original logo.

 
 
Old logo

Better logo

 
 

BACKGROUND
The Palate Home team, all pretty hardcore home cooks, had created some preliminary app mockups. They had started to design a digital cooking journal—they wanted to document every cooking experience, noting what was cooked, how it was cooked, and, most importantly, the results.

 
 
Food journal mockup
 
 

I consider myself a damn good cook. I take notes and try to learn from past experiences. But this was clearly an example of a product being designed by zealots for zealots. It reminded me of my college chemistry lab notebooks and was just about as inspiring. 

I (almost) never make the mistake of thinking of myself as the target user for any product I am designing, but I felt strongly about this. I needed the team to think about creating a product that would appeal to a broader base of home cooks.

WHAT DRIVES THE USER? NOT “WHAT DOES THE USER DRIVE?”
I rarely use personas. Not because they aren’t effective but because I find product teams often don’t want to take the time to define personas in enough detail to avoid arguments about whether “Sue, the soccer mom,” drives a minivan or a compact SUV. I use “drivers” instead—what are the driving reasons a “user,” in a general sense, would want to use our product?

We identified two user drivers, not mutually exclusive:

As a hungry user, I want to eat at a specific time.

As a hungry user, I want to cook a specific type of food.

Time and type—these became the foundations for the design of the Palate Home hardware and software user experience. 

Early in any design process I like to explore extremes. In this case I wondered: what if there were no mobile app? An app requires downloading, configuration, account setup, device pairing, and the like. Each step is an opportunity for something to go wrong. Getting rid of the app potentially simplified things.

HARDWARE BASELINE
I started with a hypothetical hardware design consisting of two dials and a simple display. It’s a classic form factor.

 
 
Two-dial hardware concept
 
 

It was very easy to explore how this hardware control set supported different use cases using text-only wireframes. The text between the dashes represents a wireframe of the hypothetical screen, and the “<” and “>” are meant as arrows pointing to either of the two hypothetical knobs.
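As an illustration of the notation (the screen contents here are my own hypothetical example, not one of the original wireframes), a two-knob “set doneness and time” screen might look like this:

```
---------------------------------
<  DONENESS          READY IN  >
   MEDIUM RARE         45 MIN
---------------------------------
```

Here the left knob would step through doneness levels while the right knob adjusted the target time.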

 
 
 
 

BETTER WITH AN APP
It turned out that it was possible, but not optimal, to design a simple, hardware-only smart grill UX. So I added a mobile app back into the larger interaction analysis. For a hungry user who wanted to eat at a specific time, the functional flow looked like this:

 
 
 
 

As shown above, it wasn’t sufficient to know that the user wanted to eat at a specific time—timing alone was never going to be the user’s sole driver unless they were big Soylent fans (which meant they weren’t part of our target market anyway). I also needed to know whether:

  • The user had strong opinions about what they wanted to eat, even if time were the primary driver.

  • The user had their mobile device, and the app was properly installed and connected to the grill.

  • The user already knew how to cook.

Conversely, for a hungry user who wanted to eat a specific type of food…

 
 
 
 

…I still needed to know whether: 

  • The user had strong opinions about when they wanted to eat, even if the type of food were the primary driver.

  • The user had their mobile device, and the app was properly installed and connected to the grill.

  • The user already knew how to cook.

It turned out that the right sides of both interaction flows were identical: regardless of whether you were time-constrained or wanted to cook a specific type of food, once you had selected something to cook the flows were the same. There was one especially compelling sub-flow:

 
 
 
 
  • The user wants to eat a specific type of food.

  • Time is a factor.

  • The app is properly configured. 

  • The user doesn’t know how to cook.

In short, the final user experience should be geared towards helping inexperienced cooks create a specific meal under basic time constraints. We talked informally to potential users and were confident we were on the right track.
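The convergence of the two flows can be sketched as a small decision function. This is a minimal Python illustration; the function, step names, and branch details are my own labels for the flow described above, not actual Palate Home code:

```python
def grill_flow(driver, knows_food, time_matters, app_ready, knows_cooking):
    """Sketch of how both user drivers converge on the same right-hand flow.

    driver: "time" or "food" -- the user's primary reason for cooking now.
    Once food is selected, the remaining questions (time constraints,
    app availability, cooking experience) are identical for both drivers.
    """
    steps = []
    # Left side of the flow differs by driver: settle on what to cook.
    if driver == "time" and not knows_food:
        steps.append("suggest food that fits the time window")
    else:
        steps.append("user picks a specific food")

    # Right side is shared by both drivers.
    if not app_ready:
        steps.append("fall back to hardware-only controls")
    elif not knows_cooking:
        steps.append("guided recipe with doneness/time/finishing presets")
    else:
        steps.append("expert mode: set doneness, time, finishing directly")

    if time_matters:
        steps.append("schedule cooking to finish on time")
    steps.append("start grilling")
    return steps

# The especially compelling sub-flow: specific food, time is a factor,
# app configured, inexperienced cook.
guided = grill_flow("food", knows_food=True, time_matters=True,
                    app_ready=True, knows_cooking=False)
```

Whatever the first step, the tail of the flow is identical for both drivers, which is the convergence the analysis surfaced.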

Did we formally test this assumption? No. No time, no money, the usual excuses.

Did we think it made enough sense to run with? Yes.

Did the product need “smart,” anticipatory features? Not really. Simply helping an inexperienced cook make a great meal would seem smart enough.

DESIGN DETAILS
Now we could get specific about UX details—what states did we have to support? What information and status were needed on the hardware and what were needed in the app?

 
 
 
 

I found that the hardware needed a power indicator but didn’t need a display. I also developed a complete interaction model for the app without regard for specific icons, wording, or graphics. 

The app home screen eventually ended up looking like this. 

 
 
 
 

The home screen showed grill status and food categories. After selecting a category, the user was presented with a default option including a picture of the finished food, doneness controls, time controls, and finishing controls (e.g., searing). After choosing how to cook the food, the user would press Start and grilling would begin.
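The screen-to-screen flow just described can be summarized as a minimal state machine. The state names and transition table below are my own labels for the flow in the text, not actual app code:

```python
# Sketch of the app/grill flow as a state machine: home screen -> category
# configuration -> cooking -> holding warm. Names are illustrative labels.
TRANSITIONS = {
    ("HOME", "select_category"): "CONFIGURE",   # pick a food category
    ("CONFIGURE", "press_start"): "COOKING",    # doneness/time/finishing set
    ("COOKING", "food_done"): "HOLDING_WARM",   # notify user, keep food warm
    ("HOLDING_WARM", "remove_food"): "HOME",
}

def next_state(state, event):
    """Advance the UX state; an event not valid in this state is ignored."""
    return TRANSITIONS.get((state, event), state)
```

Ignoring invalid events (rather than erroring) mirrors the obedient posture of the product: the grill only acts on the instructions that make sense where it currently is.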

 
 
 
 

The user would be notified when the food was done. The grill would then modulate its temperature to keep the food warm without any further cooking. The grill didn’t do anything it wasn’t told to do—the user had to pre-define and then initiate what would typically be considered the smart behavior of a device marketed as “smart.” But once in operation, the grill did things (i.e., cook dinner) with a degree of attention and precision the user couldn’t match on their own.

That is what I call obedient behavior.
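The hold-warm behavior described above can be sketched as a simple control loop. The temperatures and the on/off (bang-bang) scheme here are illustrative assumptions on my part, not Palate Home’s actual firmware:

```python
# Illustrative hold-warm loop: once the food is done, the grill modulates
# its element to hold a warm, non-cooking temperature. All numbers and the
# simple on/off (bang-bang) control are assumptions for illustration.
HOLD_TEMP_F = 140.0   # hypothetical warm-holding target, below cooking temps
HYSTERESIS_F = 3.0    # dead band to avoid rapid element cycling

def element_command(current_temp_f, heating):
    """Return True to energize the heating element, False to idle."""
    if current_temp_f < HOLD_TEMP_F - HYSTERESIS_F:
        return True
    if current_temp_f > HOLD_TEMP_F + HYSTERESIS_F:
        return False
    return heating  # inside the dead band: keep doing what we were doing
```

The hysteresis band is what lets the grill hold steady with “a degree of attention and precision the user couldn’t match”: it reacts to every temperature reading without chattering the element on and off.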

 
 
 

The shipping version of the Palate Home grill.

 