Publications

Speech Breathing in Virtual Humans: An Interactive Model and Empirical Study
Conference Proceedings: 2019 IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE), March 24, 2019
pp. 1-9. DOI: 10.1109/VHCIE.2019.8714737
Osaka, Japan
IEEE

An Eye Gaze Model for Controlling the Display of Social Status in Believable Virtual Humans
Conference Proceedings: IEEE Computational Intelligence and Games (CIG), 2018

A Dynamic Speech Breathing System for Virtual Characters
Conference Proceedings: International Conference on Intelligent Virtual Agents, August 2017
pp. 43-52. Part of the Lecture Notes in Computer Science book series (LNCS, volume 10498)

Perceptual Validity in Animation of Human Motion
Journal Article: Computer Animation and Virtual Worlds, March 2016
27(1), 58-71. DOI: 10.1002/cav.1631
Wiley Online Library

Expressive Animated Character Sequences Using Knowledge-based Painterly Rendering
Journal Article: International Journal of Computer Games Technology, 2011
Vol. 2011, Article ID 164949, 7 pages
Research
Rembrandt / Vision Science Work
Using new visual computer-modelling techniques, we show that artists use vision-based devices, such as lost-and-found edges and center-of-focus composition, to guide the viewer's eye path through their paintings in significant ways.
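To make the idea concrete, here is a minimal sketch of how a lost-and-found edge analysis can be expressed computationally: walk along a figure's boundary and classify each segment by its local contrast. The toy image, boundary columns, and threshold are illustrative assumptions, not the models used in this work.

# Toy grayscale image (0-255): a bright figure on a dark ground whose
# right-hand boundary gradually fades ("lost") while the left boundary
# stays crisp ("found"). Values are invented for illustration.
IMG = [
    [30, 30, 200, 200, 200, 60],
    [30, 30, 200, 200, 190, 70],
    [30, 30, 200, 180, 120, 80],
    [30, 30, 200, 150, 100, 90],
]

def contrast(y, x):
    """Absolute horizontal intensity step between pixel x and x + 1."""
    return abs(IMG[y][x + 1] - IMG[y][x])

# Classify each boundary row as "found" (crisp) or "lost" (fading).
for label, col in [("left boundary", 1), ("right boundary", 3)]:
    for y in range(len(IMG)):
        c = contrast(y, col)
        print(label, "row", y, "contrast", c, "->",
              "found" if c > 60 else "lost")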
faceToolKit - A 3D Facial ToolKit
Our long-range research project is a visual development system for exploring face space, both in terms of facial types and animated expressions. The toolkit is based on a hierarchical parametric approach, which gives us an additive language of hierarchical expressions, emotions, and lip-sync sequences.
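As a flavor of the approach, here is a minimal sketch of an additive parametric language; the parameter and expression names are illustrative assumptions rather than the toolkit's actual vocabulary, and the hierarchy is collapsed into commented groups for brevity.

NEUTRAL = {
    "brow_raise": 0.0, "brow_furrow": 0.0,   # upper-face group
    "eye_open": 1.0,
    "jaw_open": 0.0, "lip_corner_up": 0.0,   # lower-face group
    "lip_pucker": 0.0,
}

# An "expression" is just a sparse set of parameter offsets, so
# expressions, emotions, and lip-sync visemes share one language.
SMILE = {"lip_corner_up": 0.8, "eye_open": -0.2}
SURPRISE = {"brow_raise": 1.0, "jaw_open": 0.6, "eye_open": 0.3}
VISEME_AA = {"jaw_open": 0.7, "lip_pucker": -0.1}

def blend(base, *layers):
    """Additively combine weighted expression layers over a base pose.

    Each layer is (offsets, weight). Results are clamped to [0, 1]
    so stacked layers stay within a valid range.
    """
    pose = dict(base)
    for offsets, weight in layers:
        for name, delta in offsets.items():
            pose[name] = min(1.0, max(0.0, pose[name] + weight * delta))
    return pose

# Emotion and lip-sync layers add together: a smile saying "aa",
# then a surprised face saying the same viseme.
print(blend(NEUTRAL, (SMILE, 1.0), (VISEME_AA, 0.8)))
print(blend(NEUTRAL, (SURPRISE, 0.7), (VISEME_AA, 1.0)))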
iFACE - A Comprehensive Environment for Interactive Face Animation
The rapid growth of visual communication systems, from video phones to virtual agents in games and web services, has brought a new generation of multimedia systems that we refer to as face-centric. Such systems are mainly concerned with the multimedia representation of facial activities.
MusicFace - Music-driven Emotionally Expressive Face
Can you extract the emotional aspects of a piece of music to animate a face? Music-driven Emotionally Expressive Face (MusicFace) is an early-stage project that creates “facial choreography” driven by musical input. In addition to its artistic uses, MusicFace can be used for creating visual effects in movies and animations, as well as realistic characters in computer games and virtual worlds.
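As a rough illustration of the pipeline, the sketch below maps coarse musical features to a valence/arousal estimate and then to facial-expression weights. The chosen features, mappings, and constants are assumptions for illustration only, not MusicFace's actual model.

def estimate_emotion(tempo_bpm, energy, major_mode):
    """Map coarse musical features to a (valence, arousal) pair in [-1, 1]."""
    arousal = min(1.0, max(-1.0, (tempo_bpm - 100.0) / 80.0 + energy - 0.5))
    valence = 0.6 if major_mode else -0.6   # toy stand-in for mode analysis
    return valence, arousal

def emotion_to_face(valence, arousal):
    """Convert (valence, arousal) to weights for expression layers."""
    smile = max(0.0, valence) * (0.5 + 0.5 * max(0.0, arousal))
    frown = max(0.0, -valence)
    brow_raise = max(0.0, arousal)
    return {"smile": smile, "frown": frown, "brow_raise": brow_raise}

# A bright, fast passage versus a slow minor one.
for tempo, energy, major in [(140, 0.8, True), (70, 0.3, False)]:
    v, a = estimate_emotion(tempo, energy, major)
    print(tempo, "bpm ->", emotion_to_face(v, a))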
GenFace - Exploring FaceSpace with Genetic Algorithms
Imagine an n-dimensional space describing every conceivable humanoid face, where each dimension represents a different facial characteristic. Within this continuous space, it would be possible to traverse a path from any face to any other face, morphing through locally similar faces along that path. We will describe and demonstrate a development system we have created to explore what it means to ‘surf’ through face space. We will present our investigation of the relationships between facial types and how this understanding can be used to create new communication and expression systems.
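As a minimal sketch of the two core operations, assuming a face is simply a vector of normalized parameters: traversing a straight-line morph path between two faces, and taking one genetic step (uniform crossover plus Gaussian mutation) to breed a new face. The dimensionality and operators are illustrative assumptions.

import random

N_DIMS = 8  # assumed number of facial-characteristic dimensions

def random_face():
    """Sample a random point in the unit face-space cube."""
    return [random.random() for _ in range(N_DIMS)]

def morph_path(a, b, steps):
    """Yield faces along the straight-line path from face a to face b."""
    for i in range(steps + 1):
        t = i / steps
        yield [(1 - t) * x + t * y for x, y in zip(a, b)]

def offspring(a, b, mutation=0.05):
    """One genetic step: uniform crossover of two parents, then mutation."""
    child = [random.choice(pair) for pair in zip(a, b)]
    return [min(1.0, max(0.0, x + random.gauss(0, mutation))) for x in child]

parent_a, parent_b = random_face(), random_face()
for face in morph_path(parent_a, parent_b, steps=4):
    print([round(x, 2) for x in face])
print("offspring:", [round(x, 2) for x in offspring(parent_a, parent_b)])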