Workshop on "Social Norms in Robotics and HRI" at IROS 2015
Invited Speakers
-
Satoru Satake, ATR, Osaka:
How to Build a Social Robot for the Real World? Easy or Difficult?
-
Bill Smart, Oregon State University:
Social and Societal Norms: Good, Bad, or Ugly?
Abstract:
As roboticists and HRI researchers, we spend a lot of time thinking about social and societal norms. How can we get our robots to fit seamlessly into these existing human social structures? How can we use them to our advantage, allowing us to predict (or even influence) the behavior of the humans that interact with our robots? How can we take advantage of them to make our robots more efficient, robust, and integrated into everyday life?
In this talk, I will give some examples of social and societal norms that you might not have thought about before, and talk about how they potentially impact the behavior of social robots. Do they make the robot's task easier or allow it to perform more efficiently? Do they slow it down and make it less efficient? Do they make it hard to even have a robot perform the task, even though that might be the best thing to do from a utilitarian standpoint? After discussing some of these norms, I'll pose some (hopefully) provocative questions about how we, as roboticists, should be thinking about adapting to human social and societal norms.
-
Greg Trafton, Naval Research Laboratory, USA (greg.trafton {at} nrl {dot} navy {dot} mil):
How to be a hipster: Robots and social norms
Abstract:
We have been working on a cognitively plausible model of social norms.
We model an experiment from Salganik (2006, 2009) that showed the importance of social influences and social norms.
Our model uses ACT-R/E (Trafton et al., 2013) and proposes that social norms come out of a combination of social and cognitive influences.
The fundamental aspects of our model can be used by a robot to allow the robot to be influenced by others' social behavior.
-
Luis Merino, Seville University, Spain (lmercab {at} upo {dot} es):
Telepresence robots as tools to derive social norms
Abstract:
Social norms are difficult to model and derive mathematically. Thus, machine learning is commonly used as a way to extract those rules and transfer them to a robot. But not only do these methods require examples and data; it is also complicated to determine which norms are at play in a given social setting. Telepresence robots offer an interesting opportunity to study and extract social norms, as they, by definition, involve a person who is engaged in social settings while controlling the robot, and from whom we can learn. The talk will discuss experiments carried out with a telepresence robot in the frame of the EU project TERESA (TElepresence REinforcement-learning Social Agent), and the learning techniques employed to derive and model navigation behaviors in social settings.
-
Maha Salem, University of Hertfordshire / Google:
Violating Social Norms: Effects of 'Faulty' Robot Behaviors on Perceptions of Anthropomorphism and Human-Robot Trust
Abstract:
How do humanlike communicative behaviors such as gesture and speech impact human perceptions of a social robot? And what happens when a robot exhibits behavioral flaws while interacting with humans? Do mistakes made by the robot in collaborative tasks somehow affect its trustworthiness? While most HRI research aims at enabling robots to increasingly comply with social norms, in this talk we will focus on the opposite, namely on what happens when such norms are violated.