AI Affective Virtual Human


Research Collaborators
Steve DiPaola, Nilay Ozge Yalcin, Ulysses Bernardet, Maryam Saberi, Michael Nixon, Andrey Goncharov, Mahdi Davoodikakhki

About
Our open-source toolkit and cognitive research in AI 3D virtual humans (embodied IVAs: Intelligent Virtual Agents): a real-time system that converses with a human by sensing their emotional state and speech (via facial emotion recognition, voice stress, and the semantics of the spoken words) and responds affectively through voice, facial animation and gesture. The agent perceives the user in front of it through a host of gestural, motion and bio-sensor systems, processes that input with several in-lab AI modules, and gives coherent, personality-based conversational answers via speech, expression and gesture. The system is built on Unity and the SmartBody (USC) API, whose developers we have collaborated with for years. We use cognitive modeling, empathy modeling, NLP and a variety of AI-based modules in our system (see papers).
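To make the overall flow concrete, below is a minimal sketch (in Python) of the sense-appraise-respond loop described above. The module and method names (sensors, dialogue model, character driver) are illustrative placeholders, not the actual toolkit API.

# Hypothetical sketch of the real-time sense-appraise-respond loop.
# Sensor, dialogue and body objects are placeholder assumptions.
from dataclasses import dataclass

@dataclass
class Affect:
    valence: float   # -1 (negative) .. +1 (positive)
    arousal: float   #  0 (calm)     .. +1 (excited)

class EmbodiedAgent:
    def __init__(self, sensors, dialogue, body):
        self.sensors = sensors      # facial emotion, voice stress, text semantics
        self.dialogue = dialogue    # NLP / empathy model producing a reply
        self.body = body            # Unity/SmartBody character driver

    def step(self):
        # 1. Fuse multimodal estimates of the user's affective state.
        readings = [s.read() for s in self.sensors]
        user_affect = Affect(
            valence=sum(r.valence for r in readings) / len(readings),
            arousal=sum(r.arousal for r in readings) / len(readings),
        )
        # 2. Generate a personality-consistent, affect-aware reply.
        reply = self.dialogue.respond(user_affect)
        # 3. Realize the reply as speech, facial animation and gesture.
        self.body.speak(reply.text, emotion=reply.affect)
        self.body.gesture(reply.gesture)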

The Research
Our affective real-time 3D AI virtual human setup includes facial emotion recognition, movement recognition and data-glove recognition. See the overview video, the specific videos, or the papers below.

The growing success of dialogue systems research makes conversational agents a strong candidate for becoming a standard mode of human-computer interaction. The naturalness of communicative acts gives users comfortable common ground to interact on, and there have been many advances in using multiple communication channels in dialogue systems to simulate humanness in an artificial agent.

However, one challenge is finding the right balance of intensity and frequency in multimodal affective feedback to guide the dialogue flow; the timing and quality of that feedback can affect different users in different ways. We use conversational mirroring mechanisms to generate baselines for the interaction, which can then be used to dynamically guide the user toward the end goal, as sketched below.
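The following is a minimal sketch of this kind of mirroring for one affect channel; it is our illustrative assumption rather than the published model. The agent's expressed affect tracks the user's detected affect with a gain and decays toward a neutral baseline, which keeps feedback intensity in check.

# Illustrative affective-mirroring update for a single channel (e.g. valence).
def mirror(agent_affect, user_affect, gain=0.4, decay=0.1, baseline=0.0):
    """Return the agent's next affect value for one channel."""
    mirrored = agent_affect + gain * (user_affect - agent_affect)
    return mirrored + decay * (baseline - mirrored)

# Example: the user turns strongly negative; the agent shifts part of the way
# toward that state rather than copying it outright, then recovers.
state = 0.0
for user_valence in (-0.8, -0.8, -0.2, 0.5):
    state = mirror(state, user_valence)
    print(round(state, 2))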

Our ongoing research project studies human behavior during interaction with these assistive technologies, using natural interaction methods to create an Embodied Conversational Agent (ECA) that lets users achieve their goals efficiently. We have ongoing studies and updates of our ECA using deep learning AI and natural language processing (NLP) systems; the goal is to build up from this basic system a strong conversation process that can understand and analyze the user, build a user model, and use it to converse effectively.
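As a rough illustration of the running user model the paragraph refers to, the sketch below accumulates per-utterance emotion and topic estimates so a dialogue policy could adapt to them. The field names are assumptions for illustration only.

# Hedged sketch of a simple accumulating user model.
from collections import Counter

class UserModel:
    def __init__(self):
        self.emotion_counts = Counter()
        self.topics = Counter()
        self.turns = 0

    def update(self, utterance_emotion, utterance_topics):
        # Record the emotion and topics detected for one user utterance.
        self.turns += 1
        self.emotion_counts[utterance_emotion] += 1
        self.topics.update(utterance_topics)

    def dominant_emotion(self):
        return self.emotion_counts.most_common(1)[0][0] if self.turns else "neutral"

model = UserModel()
model.update("frustrated", ["scheduling"])
model.update("frustrated", ["scheduling", "deadline"])
print(model.dominant_emotion())   # -> "frustrated"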

Downloads and Links
Papers/Posters
Conference Proceeding Link AI Avatar: BICA 2019 Conference: M-Path: A Conversational System for the Empathic Virtual Agent
Journal Link AI Avatar: Artificial Intelligence Review 2019 Journal: Modeling empathy: building a link between affective and cognitive processes
PDF: Conference Paper AI Avatar: 8th International Conference on Affective Computing & Intelligent Interaction: Evaluating Empathy in Artificial Agents
PDF: Conference Paper AI Avatar: 41st Annual Meeting of the Cognitive Science Society: Evaluating Levels of Emotional Contagion with an Embodied Conversational Agent
PDF: Conference Paper AI Avatar: International Conference on Multimodal Interaction 2018 Extended Abstracts: Modeling Empathy in Embodied Conversational Agents
PDF: BICA Journal AI Avatar: Journal: A Computational Model of Empathy for Interactive Agents (BICA 18). Winner: Research Award Paper
PDF: Stanford Poster AI Avatar: Poster from the Stanford VR and Behavioral Change Conference 2017
PDF: IVA 2016a: Simulink Toolbox for Real-time Virtual Character Control
PDF: IVA 2016b: An Architecture for Biologically Grounded Real-time Reflexive Behavior
PDF: IVA 2015: A Framework for Exogenous and Endogenous Reflexive Behavior in Virtual Characters

Additional Media and Code
GitHub: M-Path ECA agent: Repository of code for the M-Path ECA system.
Media / Code Repository: Repository of code and media for our RealAct system.