Snackbot Sound – Thesis Project

The Snackbot is a snack-delivering robot created at Carnegie Mellon University.

For my thesis project, I designed two groups of sounds to be used by the Snackbot to interact with its customers. I also designed the accompanying interactions, scenarios, head gestures and a speaker enclosure.

My goal was to use non-speech sounds to create a desirable product character that users would enjoy interacting with over the long term. I then planned and executed a user study to evaluate my designs.

I co-authored a paper about this project with Dr. Jodi Forlizzi, which I presented at the 2010 Design & Emotion conference. This poster and process book also explain the project.


    Additional details

  • Skills

    • Presentation
    • Research
    • Sound
    • Usability testing

Project Details

I was searching for a project that would allow me to explore sound as a way to make products more emotionally desirable. My thesis advisor, Dr. Jodi Forlizzi, told me about the Snackbot project and about the unpleasant synthetic speech the team had been using on the robot. We decided a great project would be a user study evaluating the emotional impact and feasibility of using non-speech sounds to communicate. Furthermore, I would be able to explore sound reinforcement techniques and design a custom speaker enclosure to fit inside the Snackbot.

Research

I attended the Snackbot team meetings, during which I asked many questions and got up to speed on the history of the project and where it was going. During a few sessions, I asked the team about the technical constraints and intended character attributes of the robot.

considerations lists

In short, the team imagined Snackbot as akin to a male undergraduate: outgoing, friendly, and full of school spirit.

I also held a few design sessions with the Snackbot designers to help conceive the scenarios in which people would interact with Snackbot. We arrived at two scenarios worth exploring: delivery and stationary vending. Storyboarding these scenarios revealed both the information the robot needed to convey to customers and the limits on what the robot could receive in return. From the flows and storyboards, I compiled a list of interactions that needed to be facilitated by sounds.

My thesis paper is on the role of sound in the emotional experience of products. During the literature review for this paper, I read numerous studies and theoretical papers on the subject, including Klaus Krippendorff’s work on product semantics and Pieter Desmet and Paul Hekkert’s studies on emotional product perception. These sources led me to create a framework to help me break down and understand the product perception process.

Product Perception Process

I also read specifically about sound. Jan Berg and Johnny Wingstedt distilled their findings into this table, which codifies things I knew intuitively from my experience as a music student and professional bassist.

table of musical properties and emotions

This information is only a rough guide, since musical perception is heavily biased by context, but it serves as a framework against which to check sound design decisions.

Design

Armed with interaction design requirements and theories of sound design, I composed a palette of melodies and sound effects. I created two instruments to play the melodies: one we called A1, or “robotic,” and the other A2, or “organic.”
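The actual instruments aren't reproduced here. Purely as an illustration of the kind of contrast involved, here is a minimal Python sketch that renders the same short melody with two contrasting timbres, a hard-edged square wave standing in for “robotic” and a vibrato sine wave standing in for “organic.” The frequencies, envelopes, and file names are all hypothetical.

```python
# Hypothetical sketch: prototyping two contrasting timbres ("robotic" vs. "organic")
# and rendering them as WAV files. These are NOT the actual Snackbot sound sets;
# all parameters are illustrative.
import numpy as np
import wave

RATE = 44100  # samples per second

def robotic_tone(freq, dur):
    """Square wave with an abrupt attack -- buzzy, synthetic character."""
    t = np.linspace(0, dur, int(RATE * dur), endpoint=False)
    tone = np.sign(np.sin(2 * np.pi * freq * t))
    env = np.minimum(1.0, np.linspace(0, 50, t.size))  # very fast attack ramp
    env *= np.linspace(1.0, 0.0, t.size)               # linear decay
    return tone * env

def organic_tone(freq, dur, vibrato_hz=5.0, vibrato_depth=6.0):
    """Sine wave with gentle vibrato and a soft envelope -- warmer character."""
    t = np.linspace(0, dur, int(RATE * dur), endpoint=False)
    freq_mod = freq + vibrato_depth * np.sin(2 * np.pi * vibrato_hz * t)
    tone = np.sin(2 * np.pi * np.cumsum(freq_mod) / RATE)
    env = np.sin(np.pi * t / dur)                       # smooth fade in and out
    return tone * env

def write_wav(path, signal):
    """Write a mono float signal in [-1, 1] as 16-bit PCM."""
    samples = (signal * 32767).astype(np.int16)
    with wave.open(path, "w") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(RATE)
        f.writeframes(samples.tobytes())

# Render the same short greeting melody (C5, E5, G5) with both instruments.
melody = [(523.25, 0.2), (659.25, 0.2), (783.99, 0.4)]
for name, instrument in [("a1_robotic.wav", robotic_tone), ("a2_organic.wav", organic_tone)]:
    write_wav(name, np.concatenate([instrument(f, d) for f, d in melody]))
```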

Listen to the difference between the two sound sets:

This is the delivery scenario. Play the audio file for a walkthrough with the sounds from set A1.

delivery mode diagram

This is the stationary vending scenario. Play the audio file for a walkthrough with the sounds from set A1.

stationary mode diagram

Testing with users

I designed a user study to answer these questions:

  • Do the interactions facilitate the tasks?
  • Do the sounds convey consistent character?
  • Which sound set is more pleasing?

The study protocol was as follows:

The participant enters the room and is greeted by me. We sit at table 1 and go over the Institutional Review Board (IRB) paperwork and what the study entails. I explain the scenario, set up the room, and then, depending on the scenario, bring or call the participant into the room.

Study setup

During the scenarios, I sit at table 3 and operate the robot with a keyboard that triggers sounds and head gestures and a joystick that drives the robot. The participants are led to believe the robot is autonomous; they are not aware that I am controlling it. In the delivery scenario, the robot travels from behind the filing cabinet to reach table 2. In the stationary scenario, the robot sits just in front of the filing cabinet, in line with table 2. After the scenario is complete, we sit at table 2, with the camera facing us, and I ask the participant a series of questions. After this, I take them back to table 1 and have them fill out a one-page written questionnaire.
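The control rig itself isn't documented here; the sketch below is a hypothetical reconstruction of that kind of Wizard-of-Oz setup, using pygame (an assumption, not the actual software) to map single keys to sound cues and head-gesture commands. The key bindings, file names, and gesture labels are all illustrative.

```python
# Hypothetical Wizard-of-Oz control sketch: one key per sound cue or head gesture.
# The bindings, file names, and gesture commands are illustrative; the actual
# rig used in the study is not reproduced here.
import pygame

SOUND_KEYS = {
    pygame.K_1: "sounds/greeting.wav",     # greeting cue
    pygame.K_2: "sounds/snack_ready.wav",  # announce that the snack is ready
    pygame.K_3: "sounds/thank_you.wav",    # close the transaction
}
GESTURE_KEYS = {
    pygame.K_n: "NOD",    # placeholder commands for the head controller
    pygame.K_s: "SHAKE",
}

def send_gesture(command):
    # In a real setup this would go out over the robot's control link;
    # here we just log it.
    print(f"gesture -> {command}")

def main():
    pygame.init()
    pygame.display.set_mode((200, 100))  # a window is needed to receive key events
    sounds = {key: pygame.mixer.Sound(path) for key, path in SOUND_KEYS.items()}
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.KEYDOWN:
                if event.key in sounds:
                    sounds[event.key].play()
                elif event.key in GESTURE_KEYS:
                    send_gesture(GESTURE_KEYS[event.key])
    pygame.quit()

if __name__ == "__main__":
    main()
```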

This process repeats for a total of four scenarios. Each participant performs all four: delivery with sound set A1, delivery with sound set A2, stationary vending with sound set A1, and stationary vending with sound set A2. I used a randomization matrix to counterbalance order effects across the trials, and I ran the study with eight individual participants so that each presentation order in the matrix would be used twice.
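The actual randomization matrix isn't reproduced here. As a sketch of the counterbalancing idea, and under the assumption that a balanced Latin square was used, the snippet below generates four presentation orders for the four conditions and assigns them to eight participants so that each order is used twice.

```python
# Hypothetical counterbalancing sketch: a balanced (Williams) Latin square over
# the four study conditions, with each resulting order assigned to two of the
# eight participants. The actual matrix used in the study is not reproduced here.

CONDITIONS = [
    "delivery / A1 robotic",
    "delivery / A2 organic",
    "stationary / A1 robotic",
    "stationary / A2 organic",
]

def balanced_latin_square(n):
    """Williams design: each condition appears once in every position and,
    for even n, each condition immediately follows every other exactly once."""
    square = []
    for row in range(n):
        sequence, j, k = [], 0, 0
        for i in range(n):
            if i % 2 == 0:
                sequence.append((row + j) % n)
                j += 1
            else:
                k += 1
                sequence.append((row + n - k) % n)
        square.append(sequence)
    return square

if __name__ == "__main__":
    square = balanced_latin_square(len(CONDITIONS))
    for participant in range(8):                  # two participants per order
        order = [CONDITIONS[i] for i in square[participant % 4]]
        print(f"P{participant + 1}: " + "  ->  ".join(order))
```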

Here are some highlights from the study. The video excerpts below use the A2 “organic” sound set.


Study Results

From this study, I learned the following:

  • the “organic” sound set was preferred, 7 to 1
  • the “robotic” sound set better expressed the intended character
  • the eating sounds were undesirable
  • participants needed clearer signaling of when to take the snack
  • participants needed stronger confirmation that the transaction was complete

Actionable takeaways for designers

In order to help other designers begin to use sound in their work, I have distilled what I learned into these recommendations for product sound design:

  • consider the product character
  • maintain appropriate volume
  • use appropriate auditory icons
  • interweave sound with other design features
  • consider the sound reinforcement system
  • refine sounds directly on the product

Next Steps

If I were to continue work on the Snackbot, I would make the following changes to the flows, based on the study findings.

For the delivery scenario, I would remove step 6, as it proved to be unnecessary, and I would use speech to clarify the request in step 7.

future delivery scenario

For the stationary scenario, I would remove steps 4-6, as they proved to be unnecessary, and I would use speech to clarify that the transaction is complete.

future stationary scenario