Square pegs, round holes

 

I’m concerned we UX designers are losing our ability to design products that integrate hardware and software in unique and compelling ways.

When we design modern software (mobile applications, Web sites, etc.), we design once. We assume our work will always appear on full-color, high-resolution, high-performance displays. Our designs work within any aspect ratio, at any resolution, and can be controlled by mouse, keyboard, touch, or voice — most likely some combination of all four. We strive to make each design work on iOS, Android, and traditional desktop operating systems with only minimal tweaks.

We design for frames.

 
 
[Image: tablet-1593045_640.png]
 
 

A frame is like an empty glass waiting to be filled with water. And like water, our designs simply fill the space provided.

But not all products are frames. What happens when a product’s physical form factor is an integral part of the overall user experience?

 
 
[Image: Multiple-devices.001.png]
 
 

One would hope that, for products such as these, the development team has a single, overarching product vision, and that hardware and software are designed synergistically in support of that vision.

Because of production lead times, hardware designs are typically fixed well before software requirements, with production cost, not concern for the overall UX, as the deciding factor. For example, a decision might be made to eliminate the physical keyboard and buttons on the next generation of a successful product line. Everyone involved in the decision (i.e., no one on the software team) assumes the current software will only require a few “tweaks” to work on the new form factor.

In an overly-stretched analogy, it’s like expecting two separate development trains to arrive at the station at the same time and seamlessly merge into a bigger, better product supertrain without any collisions or loss of momentum.

Which, of course, never happens. Except when it does.

A SYNERGISTIC DESIGN CASE STUDY

What if a development team first creates a single, overarching product vision and then proceeds to synergistically design hardware and software in support of that vision? If Hell doesn’t freeze over first, some good product design might actually happen. 

I once led a product innovation team that designed a working smart speaker, hardware and software, from scratch, in four months. The innovation team was part of an IoT business unit developing an integrated IoT hardware and software platform.

My team consisted of hardware and software developers, an industrial designer, a visual designer, and a UX designer.

What are we building and for whom?

The entire team had to agree on this. We decided our hypothetical target market was busy consumers interested in a premium wireless speaker, consumers who might find “smart” behavior useful but didn’t consider it their most important requirement. Our hypothetical competition was the wireless speakers (smart and otherwise) from Sonos, Google, Amazon, and Apple.

How do we make design decisions?

We established an integrated design process to make hardware and software decisions concurrently and in context of one another. The opportunities and inevitable tradeoffs of any design decision could be analyzed before investing significant development effort.

For example: 

 

Decision: The team agreed we were designing a premium speaker, not a jack-of-all-trades smart speaker.

Result: Therefore, all development decisions were made in support of the management, selection, and playback of music. Ordering an Uber, ordering a pizza, or controlling a thermostat was a secondary function we didn’t need to prioritize.


Decision: Current voice interaction technology was worse than a command-line interface. Not only did users have to know which commands they could invoke, they had to remember which commands they had issued. 

Result: The speaker would support both voice and touch interactions interchangeably. On the software side, this meant the command vocabulary and the visual UX had to be complementary. For example, a user had to be able to select a song by touch but control playback by voice. Or vice versa. On the hardware side, this meant the speaker required a display, something no other smart speaker had at the time. This, of course, had an impact on battery life.
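One way to picture this interchangeability is a single playback command layer that both the voice and touch front ends call into. The sketch below is purely illustrative; names like `PlaybackController` are hypothetical and not from the actual product’s code:

```python
# Illustrative sketch (hypothetical names, not the actual product's code):
# voice and touch are two front ends over one shared set of playback
# commands, so "Next" spoken aloud and a tap on the next-track control
# do exactly the same thing.

class PlaybackController:
    def __init__(self, queue):
        self.queue = queue            # list of track names
        self.index = 0                # currently playing track

    def current(self):
        return self.queue[self.index]

    def next_track(self):
        # Clamp at the end of the queue rather than wrapping around.
        self.index = min(self.index + 1, len(self.queue) - 1)
        return self.current()

# Voice front end: spoken phrases map to the same controller methods.
VOICE_COMMANDS = {
    "next": PlaybackController.next_track,
    "play next song": PlaybackController.next_track,
}

def handle_voice(controller, phrase):
    return VOICE_COMMANDS[phrase.strip().lower()](controller)

# Touch front end: a button press calls the identical method.
def handle_touch_next(controller):
    return controller.next_track()

player = PlaybackController(["Track A", "Track B", "Track C"])
print(handle_voice(player, "Next"))   # prints "Track B"
print(handle_touch_next(player))      # prints "Track C"
```

Because both paths converge on the same methods, visual feedback for a voice command comes almost for free: whatever the touch UI renders after `next_track` can also be rendered after the spoken “Next.”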


Decision: A key use case was music playback during social occasions. Finding and playing music had to be fast and easy given the distractions of cooking, answering the door, and serving guests.

Result: A large rotating ring was added to the speaker face to allow for fast scrolling without multiple finger swipes. This gave the speaker a unique and premium look but added mechanical complexity.

 
[Image: Tune-Grey-crop.png]
 

Decision: To maintain a consistent visual identity, the scroll ring would frame the display. 

Result: The software UX had to work within a circular, rather than a rectangular frame. 


 

CONCLUSION

The finished prototype worked, and even with plenty of bugs and rough edges, it represented a very compelling product vision. The mechanical wheel was fun to manipulate and made scrolling through song, album, artist, and genre lists very fast. The touch interface supported interactions users already knew from iOS and Android devices. The voice command vocabulary and syntax were extensive, allowing for natural, relative interactions (e.g., “Next” and “Play next song” did the same thing when a song was playing). Voice commands were always accompanied by visual feedback, making hands-free operation much easier whenever the speaker’s display was in view.

And I, for once, didn’t have to design for a frame.

 