Interdisciplinary Perspectives on Technologies for Mingling

The Future of Conversations and Mingling

Date: 2 May 2023

Location: Vakwerkhuis, Delft

Registration and Participation:

Introduction

Motivation

Bring your posters!

Schedule

Invited Talks

MINGLE Project Talks

Poster Presentations

Registration and Participation:

Please register here before April 25. The main idea is to use this as a networking and idea-exchange opportunity, so registration is free!

The main aim of this event is to bring together researchers and stakeholders who may not have been brought together by such a common theme before, so physical presence is strongly encouraged. However, if you would like to join remotely, here is the link:

Join Zoom Meeting

https://tudelft.zoom.us/j/97141920397?pwd=amxlMmR1d244b0piT0c0MnpXRXAxZz09

Meeting ID: 971 4192 0397

Passcode: 661783

Introduction

Attending social networking events has been correlated with career success. Yet little is known about how or why they function well, or how we could make them more useful, despite the substantial time and money we spend attending them. One of the major bottlenecks has been the difficulty of observing such behaviour systematically, which has made it hard to develop theories that fully explain what happens in these crowds. Without the possibility to analyse them, technologies cannot be built to help us make the most of these experiences, which can be anxiety-inducing for some.

During the global pandemic of recent years, these spontaneous moments of conversational interaction were lost, leading people to question whether we should bring serendipity and spontaneity back in other forms. What makes a conversation good? What makes conversations interesting enough to form new bonds or foster existing connections? How does this play out in groups?

This symposium aims to crack open the mysteries of mingling behaviour from multiple perspectives. Moreover, understanding how to build technologies for such settings would help bridge the gap in understanding similar, even more commonplace activities, such as spontaneous discussions by the coffee machine at work or in public spaces.

Motivation

The day aims to present an overarching view of the research results of the NWO-funded Vidi project MINGLE (Modelling Social Group Dynamics and Interaction Quality in Complex Scenes using Multi-Sensor Analysis of Non-Verbal Behaviour). Whilst this is the closing event of the MINGLE project, it is also intended as a new beginning. The research results of MINGLE will feed into NEON (Nonverbal Intention Modelling), a new project funded by an ERC Consolidator Grant, which will focus on the analysis of intention, particularly in mingling settings. So we are looking for new perspectives to enrich the new research journey.

Since this is a first-of-its-kind event, we want to kickstart a new kind of community that explores the important research questions, solutions, and needs that can aid spontaneous social connection-making.

Bring your posters!

Do you work on a related topic? Topics include but are not limited to:

Schedule

Time | Talk | Speaker

9:15 | Walk in: tea, coffee, and refreshments available

Chair: Chirag Raman

9:45 | Opening | Hayley Hung

10:00 | Head and Body Behaviour Estimation with F-formations | Stephanie Tan

10:20 | Studies on Social Interaction using Wearables and Theatre | Jamie A Ward

10:55 | Coffee break / hang posters

Chair: Hayley Hung

11:10 | Data Collection and Annotation of Complex Conversational Scenes | Jose Vargas Quiros and Chirag Raman

11:30 | Towards Gaze Analysis in the Wild | Jean-Marc Odobez

12:05 | Buffet lunch

Chair: Hayley Hung

13:05 | Robots within Groups of People | Xavier Alameda-Pineda

13:40 | F-formation Modelling and Behavioural Cue Forecasting | Chirag Raman

14:00 | Socially Significant Bodily Rhythms | Wim Pouw

14:35 | Coffee break

Chair: Bernd Dudzik

14:50 | Estimating Conversational Enjoyment and Learning Multiple Truths about Laughter | Chirag Raman and Hayley Hung

15:10 | Understanding Expertise Search Strategies at Networking Events: An Exploratory Study Using Sociometric Badges | Balint Dioszegi

15:45 | The ConfFlow Application: Encouraging New Diverse Collaborations by Helping Researchers Find Each Other at a Conference | Hayley Hung

15:50 | Coffee break

Chair: Bernd Dudzik

16:05 | Panel discussion: reflecting on the talks of the day and discussing how to build technologies and carry out research to support spontaneous interactions

17:10 | Drinks reception and poster session

18:15 | Group walk to the dinner location

18:30 | Dinner in town (Huszar), located 5 minutes from Delft Central Station and a 15-minute walk from Vakwerkhuis

PhD Thesis Defense of Stephanie Tan on May 3

The PhD thesis defense of Stephanie Tan will start at 9:30 with the layman's presentation, followed by the defense itself at 10:00, in the Aula on the TU Delft campus.

Invited Talks

Talk Title: Studies on Social Interaction using Wearables and Theatre
Speaker: Dr Jamie A Ward, Senior Lecturer in Computer Science at Goldsmiths, University of London

Abstract: Measuring detailed information on how people move, see, and think during realistic social situations can be a powerful method for studying social behaviour and cognition. However, measurement-driven research can be limited by the available technology, with bulky equipment and rigid constraints often confining such work to the laboratory, thus limiting the ecological validity of any findings. Together with colleagues at Goldsmiths, UCL, and Keio University, I have been working on several projects that use wearable sensing to take this research out of the laboratory and into the real world -- while on the way, stopping off at the theatre. In this talk, I will give a brief overview of some of our work, and try to show how the paradigm of 'theatre as a laboratory' might provide a way forward, both for research in social cognition and in wearable sensing.

Talk Title: Robots within Groups of People
Speaker:  Xavier Alameda-Pineda, INRIA, France

Abstract: One of the prominent applications of understanding social human behavior is the development of systems that can interpret, react to, and synthesize behavioral cues, and can therefore take part in social interactions. In this very general context, social autonomous systems, e.g. social robotics, form a very challenging and complex research area that has received increasing attention in recent years. In this talk, I will discuss the design of machine learning methods for perceiving, generating, and synthesizing certain human behavioral cues. The tasks tackled range from speech enhancement to meta-training for social navigation, and for each of them I will focus on one technical detail that is crucial to the design of the machine learning model and its associated training algorithm.

Talk Title: Understanding Expertise Search Strategies at Networking Events: An Exploratory Study Using Sociometric Badges
Speaker: Balint Dioszegi, University of Greenwich, UK

Abstract: In this study we ask how individuals search for experts at networking events. Building on the intuition that individuals' propensities to engage in certain search actions, as well as their effectiveness in locating experts, depend on the quality and salience of the metaknowledge they have about others, we conducted an expert search game as a field experiment. Participants, researchers in a multinational corporation, were randomly assigned to one of three treatment conditions, reflecting varying degrees of search planning. Based on data from sociometric badges, we derive a taxonomy of the micro-decisions individuals make at events. We find that letting others approach yields more referrals than taking the initiative in starting conversations, and that planning increases the tendency to maintain such initiative even when doing so is ineffective, a possible manifestation of the Einstellung effect.

Talk Title: Towards Gaze Analysis in the Wild
Speaker: Jean-Marc Odobez, Idiap Research Institute and EPFL, Switzerland

Abstract: As a display of attention and interest, gaze is a fundamental cue for understanding people's activities, behaviors, and state of mind, and it plays an important role in many applications and research fields, from the design of intuitive human-computer or robot interfaces to medical diagnosis, such as assessing Autism Spectrum Disorders (ASD) in children. Gaze estimation (estimating the 3D line of sight) and Visual Focus of Attention (VFOA) estimation, however, are challenging, even for humans. They often require analysing not only the person's face and eyes, but also the scene content, including the 3D scene structure and the person's situation, to detect obstructions in the line of sight or to apply the attention priors that humans typically use when observing others. In this presentation, I will present methods that address these challenges: first, a method that leverages standard activity-related priors about gaze to perform online calibration; second, an approach to VFOA inference that casts the scene into the 3D field of view of a person, enabling the use of audio-visual information and handling an arbitrary number of targets; and third, moving towards gaze estimation in the wild, an approach to the gaze-following task that explicitly leverages derived multimodal cues such as depth and pose.

Talk Title: Socially Significant Bodily Rhythms
Speaker: Wim Pouw, Radboud University, Netherlands

Abstract: In this talk I will highlight that bodily constraints can be recruited to do communicative work. I will highlight that the process by which a body is put to work often entails deviations from the endogenous rhythms that emerge in interaction with the (non-social) environment. Throughout the talk I will entertain the idea that affective communication in some sense depends on significant deviations from one's bodily stabilities. I will overview a research program called gesture-speech physics that aligns with this idea. This research concerns how the pulse-like quality of upper-limb gestures produces forces through acceleration, thereby physically and functionally perturbing speech processes, grounding gesture's phylogeny, ontogeny, and cognition in physiology. I will also discuss recent research with siamang apes (Symphalangus syndactylus), which supports the argument that there is cross-species continuity in how bodies are put to work in vocal communication. I conclude, therefore, that expressive bodies are, it turns out, just moving about, but in more significant ways than previously thought.

MINGLE Project Talks

Talk Title: Head and Body Behaviour Estimation with F-formations
Speaker:  Stephanie Tan

Abstract:

In recent years, new domains such as social signal processing and social computing have emerged at the intersection of computer science, human behavioral modeling, and robotics. The aim of these fields is to achieve machine perception of social intelligence, such as understanding behavioral cues of humans (e.g., body language) and complex social relations and attitudes (e.g., dominance, rapport). Challenges towards building such systems include data acquisition with appropriate sensing capabilities for capturing in-the-wild human data, as well as modelling approaches that account for data from multiple modalities (vision, audio, motion) and address context-awareness. In light of these challenges, I will present my work on (1) head and body orientation estimation using sparse weak labels from wearable sensors, (2) joint head orientation estimation in conversation groups, and (3) conversation group detection in social interaction scenes, in addition to an overview on the related data-oriented contributions. I motivate these three tasks from the perspective of individual-level, group-level, and scene-level behavior understanding, and conclude with some open questions related to automated analysis of social behaviors.  

Associated Paper(s):

S. Tan, D. M. J. Tax, H. Hung, Conversation group detection with spatio-temporal context, Proceedings of the 2022 International Conference on Multimodal Interaction (ICMI) (2022), 170–180. Oral presentation.

S. Tan, D. M. J. Tax, H. Hung, Multimodal joint head orientation estimation in interacting groups via proxemics and interaction dynamics, Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) (2021), Vol.5, No.1, 1-22.

S. Tan, D. M. J. Tax, H. Hung, Head and body orientation estimation with sparse weak labels in free standing conversational settings, Understanding Social Behavior in Dyadic and Small Group Interactions, Proceedings of Machine Learning Research (2021), 179-203. Presented at ICCV 2021 Understanding Social Behavior in Dyadic and Small Group Interactions Workshop

Talk Title: Data Collection and Annotation of Complex Conversational Scenes
Speaker:  Jose Vargas Quiros and Chirag Raman

Abstract: TBA

Associated Paper(s):

Quiros, J. V., Tan, S., Raman, C., Cabrera-Quiros, L., & Hung, H. (2022, March). Covfee: an extensible web framework for continuous-time annotation of human behavior. In Understanding Social Behavior in Dyadic and Small Group Interactions (pp. 265-293). PMLR.

Raman, C., Tan, S., & Hung, H. (2020, October). A modular approach for synchronized wireless multimodal multisensor data acquisition in highly dynamic social settings. In Proceedings of the 28th ACM International Conference on Multimedia (pp. 3586-3594).

Raman, C., Vargas Quiros, J., Tan, S., Islam, A., Gedik, E., & Hung, H. (2022). ConfLab: A Data Collection Concept, Dataset, and Benchmark for Machine Analysis of Free-Standing Social Interactions in the Wild. Advances in Neural Information Processing Systems, 35, 23701-23715.

Talk Title: F-formation Modelling and Behavioural Cue Forecasting
Speaker:  Chirag Raman

Abstract: TBA

Associated Paper(s):

Raman, C., Hung, H., & Loog, M. (2023, February). Social processes: Self-supervised meta-learning over conversational groups for forecasting nonverbal social cues. In Computer Vision–ECCV 2022 Workshops: Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part III (pp. 639-659). Cham: Springer Nature Switzerland.

Talk Title: Estimating Conversational Enjoyment and Learning Multiple Truths about Laughter
Speaker:  Chirag Raman and Hayley Hung

Abstract: TBA

Associated Paper(s):

Raman, C., Prabhu, N. R., & Hung, H. (2023). Perceived Conversation Quality in Spontaneous Interactions. IEEE Transactions on Affective Computing.

Vargas-Quiros, J., Cabrera-Quiros, L., Oertel, C., & Hung, H. (2022). Impact of annotation modality on label quality and model performance in the automatic assessment of laughter in-the-wild. arXiv preprint arXiv:2211.00794. To appear, IEEE Transactions on Affective Computing

Quiros, J. D. V., Kapcak, O., Hung, H., & Cabrera-Quiros, L. (2021). Individual and joint body movement assessed by wearable sensing as a predictor of attraction in speed dates. IEEE Transactions on Affective Computing.

Talk Title: The ConfFlow Application: Encouraging New Diverse Collaborations by Helping Researchers Find Each Other at a Conference
Speaker:  Hayley Hung

Abstract: We often find collaborators by chance at a conference, or by looking for them specifically through their papers. However, hidden potential social connections might exist between researchers that cannot be immediately observed, because the keywords we use might not always represent the entire space of similar research interests. ConfFlow is an online application that offers an alternative perspective on finding new research connections. It is designed to help researchers find others at conferences with complementary research interests for collaboration. With ConfFlow we take a data-driven approach, using something similar to the Toronto Paper Matching System (TPMS), which is used to identify suitable reviewers for papers, to construct a similarity embedding space in which researchers can find other researchers.

Associated Paper(s):
H. Hung and E. Gedik, “Encouraging Scientific Collaborations with ConfFlow 2021”, SIGMM Records, https://records.sigmm.org/2022/04/20/encouraging-scientific-collaborations-with-confflow-2021/, 2022

H. Hung and E. Gedik, “Encouraging more Diverse Scientific Collaborations with the ConfFlow application”, SIGMM Records, https://records.sigmm.org/2021/06/10/encouraging-more-diverse-scientific-collaborations-with-the-confflow-application/, 2021

Gedik, E., & Hung, H. (2020, October). ConfFlow: A Tool to Encourage New Diverse Collaborations. In Proceedings of the 28th ACM International Conference on Multimedia (pp. 4562-4564).


Poster Presentations

Presenter name and affiliation:
Poster Title and Abstract: TBA