Roger K. Moore

University of Sheffield (United Kingdom)

Roger Moore is one of the world's most established researchers in speech technology and spoken language processing. As Professor of Spoken Language Processing at the University of Sheffield, his research interests include applying speech technology to help those with speaking difficulties. Prof. Moore is currently working on bringing together multiple fields of spoken language processing in the PRESENCE project, a multidisciplinary framework "that is intended to breathe life into a new generation of research into spoken language processing".

Title: More than an 'Interface': the role of speech in human-robot interaction

Recent years have seen considerable progress in the deployment of 'intelligent' communicative agents such as Apple's Siri and Amazon's Alexa. However, effective speech-based human-robot dialogue is less well developed; not only do the fields of robotics and spoken language technology present their own special problems, but their combination raises an additional set of issues. As a consequence, we still seem to be some distance away from creating Autonomous Social Agents that are truly capable of conversing effectively with their human counterparts in real-world situations. This talk will address these issues and will argue that we need to go far beyond our current capabilities and understanding if we are to move from developing robots that simply talk and listen to evolving intelligent communicative machines that are capable of entering into effective cooperative relationships with human beings.



Jauwairia Nasir

Universität Augsburg (Germany)

Jauwairia Nasir is a postdoctoral researcher at the Chair for Human-Centered Artificial Intelligence at the University of Augsburg, Germany. Her research interests include socially assistive robotics, specifically applying machine learning to education and healthcare, as well as multimodal behavioural analytics and data-driven modelling for those purposes. She has recently worked on robots for learning, exploring how engagement can be fostered in students in human-robot learning scenarios.




Koji Inoue

Kyoto University (Japan)

Koji Inoue is an Assistant Professor in the Speech and Audio Processing Laboratory at the Graduate School of Informatics, Kyoto University. His research interests involve using multimodal feedback and dialogue features to create flexible and engaging conversations between robots and humans. His research group is responsible for the well-known android robot ERICA. Recently, his work has involved predicting the future affective reactions of a robot's conversational partners using multimodal features, as well as generating empathetic responses within the same framework.

Title: Closing the Gap: Exploring Human-Level Interaction in Android Robot Dialogue Systems

One of the key objectives of artificial intelligence research is to achieve systems that can interact with humans as naturally as humans interact with each other. The speaker has been studying a spoken dialogue system for an android robot that closely resembles a human. From the perspective of affordance, android robots require interaction abilities comparable to those of humans. Our research group initially designed specific dialogue scenarios for android robots, including attentive listening, job interviews, and lab introductions. We then developed a multimodal interaction system for each scenario, elucidating the gaps between their capabilities and human interaction abilities. This talk introduces this series of interaction studies to discuss what is required, and what is still missing, for human-level interaction. It also introduces essential technologies for realizing human-level interaction, such as implementations of conversational functions like turn-taking and laughter generation. Finally, in light of the emergence of large language models (LLMs), the speaker will offer suggestions on the future direction of interaction research.