Our Pepper is learning new tricks. After becoming a TV star, Pepper decided to try a new occupation and ventured out into the field last Friday, March 16th. You may have spotted the friendly robot that morning in the TUT main building lobby.
Pepper’s one and only task that day was to help hungry lunch-goers by showing them the menus of four restaurants on campus. People could interact with the robot via dialogue or, if that didn’t work out, via the tablet on Pepper’s chest. The test was conducted in Finnish, one person at a time or in groups of 2–4, and the active interaction between a human and the robot was quite brief, typically 30–120 seconds.
The study will become a part of Antero Tossavainen’s PhD in Industrial and Information Management. The aim is to bring the voice of the customer to the fuzzy front-end of new product development. A merry group of students in the Challenge-Based Innovation class also lent a hand in recruiting participants and collecting questionnaire forms. And boy, did we need it! The trial was supposed to start at 10:30 am, but passers-by began taking an interest in Pepper around 9:30, just a few minutes after we arrived and Pepper had settled in and woken up. The target number of participants was 60, but we ended up with about double that. So Antero got a lot of data to analyze… we’ll surely write more about that later.
Although operating Pepper required most of my attention, I managed to jot down some observations about the ways people interacted with Pepper, noticeable issues, and comments I heard people make.
- Anthropomorphizing (treating the robot as human-like) varies by individual: for example, one person asked, “Who is he?” instead of “What is it?” when seeing the robot for the first time. Perceptions also differed: one person commented after interacting with Pepper: “It is scary!”, whereas another thought its voice was the cutest.
- Emotions: I saw lots of amusement, occasional frustration, and some apprehension. Some of these emotions may of course have been brought on by the artificial situation and by being observed by researchers, but the first reactions to the robot were still interesting to witness.
- Expectations: Some people started the dialogue as if talking to a real person who could understand requests like “Do you have mashed potatoes?”, “Gimme everything”, or “Well let’s see, how about we check that Soossibaari at Konetalo?” Soon they realized that the robot doesn’t (yet) possess that much intelligence and that its knowledge is limited, and switched to simple, brief statements.
- Speech recognition: Challenging, especially in noise. The name of one of the restaurants, Newton, was hard for Pepper to understand in spoken form. I mean, really hard. Several people tried it three or four times in different ways (“Nyytton… Nyyton… Newton… Nyyttoni”) before giving up and either selecting it from the tablet or, more often, saying “Reaktori” or some other restaurant. Although I had programmed five variations of possible pronunciations of the word, Pepper failed to recognize them most of the time. Otherwise Pepper did all right, apart from occasional rudeness, such as saying “okay, bon appetit” to a person who asked it to read the menus out loud, claiming: “I’m blind, I can’t read”. In all fairness, the person wasn’t really blind, so perhaps Pepper detected the deception attempt 😉 In any case, it seems to be quite difficult to make the robot understand foreign names in Finnish, or vice versa. (There may be some sort of language tag that could be added to the dialogue syntax, though. Something I’ll need to look into.)
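The Newton problem above boils down to mapping a noisy transcript onto a small, closed set of known names. As a rough, hypothetical sketch of that idea (this is not Pepper’s dialogue engine; the restaurant list and the similarity threshold are my own assumptions for illustration), fuzzy string matching with Python’s standard library looks like this:

```python
# Hypothetical sketch: match a noisy ASR transcript against a small closed
# vocabulary of restaurant names using fuzzy string similarity (difflib).
# The restaurant list and the 0.6 threshold are assumptions, not values
# from the actual trial setup.
from difflib import SequenceMatcher

RESTAURANTS = ["Newton", "Reaktori", "Soossibaari"]  # assumed example list

def match_restaurant(heard: str, threshold: float = 0.6):
    """Return the closest restaurant name, or None if nothing is similar enough."""
    best_name, best_score = None, 0.0
    for name in RESTAURANTS:
        # ratio() gives a similarity in [0, 1] based on matching subsequences
        score = SequenceMatcher(None, heard.lower(), name.lower()).ratio()
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None
```

With this, a mispronunciation like “Nyytton” still lands on “Newton”, while an unrelated word falls below the threshold and can trigger a clarifying question instead of a wrong answer.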
All in all, a very good experience in the field! Many thanks to the good people at Industrial and Information Management, and of course to the participants.