Relevant content for the tag: character3d

Publications


Engagement with Artificial Intelligence through Natural Interaction Models
Conference Proceedings, July 2017
pp. 296-303. DOI: https://dx.doi.org/10.14236/ewic/EVA2017.60

S. Salevati, N. Yalcin, S. DiPaola

London, UK

BCS Learning and Development Ltd.


Research

AI Affective Virtual Human

Our affective, real-time 3D AI virtual human project combines face emotion recognition, movement recognition, and AI-driven speech, gesture, and reasoning.


faceToolKit - A 3D Facial ToolKit

Our long-range research project is a visual development system for exploring face space, both in terms of facial types and animated expressions. The toolkit is based on a hierarchical parametric approach, which gives us an additive language of hierarchical expressions, emotions, and lip-sync sequences.
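The additive, hierarchical idea can be sketched in a few lines: low-level parameters are grouped under higher-level poses, and poses combine by weighted addition. The parameter names, values, and `blend` helper below are hypothetical illustrations, not the actual faceToolKit API.

```python
# Minimal sketch of an additive, hierarchical parametric face model.
# All parameter names and values here are illustrative assumptions.

def blend(*param_sets, weights=None):
    """Additively combine parameter dictionaries, optionally weighted."""
    weights = weights or [1.0] * len(param_sets)
    result = {}
    for params, w in zip(param_sets, weights):
        for name, value in params.items():
            result[name] = result.get(name, 0.0) + w * value
    return result

# Low-level parameters grouped under higher-level poses:
smile = {"mouth.corner_raise": 0.8, "cheek.raise": 0.4}
brow_raise = {"brow.inner_raise": 0.6, "brow.outer_raise": 0.5}
viseme_oo = {"mouth.pucker": 0.7, "jaw.open": 0.2}

# Expressions, emotions, and lip-sync poses compose additively:
happy_speaking = blend(smile, brow_raise, viseme_oo, weights=[1.0, 0.5, 1.0])
```

Because combination is purely additive, an emotion layer and a lip-sync layer can be authored independently and mixed at animation time.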


iFACE - A Comprehensive Environment for Interactive Face Animation

The rapid growth of visual communication systems, from video phones to virtual agents in games and web services, has brought a new generation of multimedia systems that we refer to as face-centric. Such systems are mainly concerned with the multimedia representation of facial activities.


Virtual Beluga Project - Vancouver Aquarium

Actual screenshot from our Virtual Beluga interactive prototype, which shows realistically swimming belugas in a wild grouping (pod) created via real-time 3D graphics and artificial intelligence systems.
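Pod grouping of this kind is commonly produced with flocking-style steering rules (cohesion, separation, alignment). The sketch below illustrates that general technique in 2D; it is a hypothetical example and not the project's actual code, and all coefficients are made-up values.

```python
import math

# Illustrative flocking-style ("boids") step that could yield pod
# grouping; coefficients and structure are assumptions for this sketch.

def step_pod(positions, velocities, dt=0.1,
             cohesion=0.01, separation=0.05, alignment=0.05, min_dist=2.0):
    n = len(positions)
    new_vel = []
    for i in range(n):
        px, py = positions[i]
        vx, vy = velocities[i]
        # Cohesion: steer toward the centroid of the other animals.
        cx = sum(p[0] for j, p in enumerate(positions) if j != i) / (n - 1)
        cy = sum(p[1] for j, p in enumerate(positions) if j != i) / (n - 1)
        vx += cohesion * (cx - px)
        vy += cohesion * (cy - py)
        # Separation: move away from neighbours that are too close.
        for j, (qx, qy) in enumerate(positions):
            if j != i and math.hypot(qx - px, qy - py) < min_dist:
                vx += separation * (px - qx)
                vy += separation * (py - qy)
        # Alignment: drift toward the average heading of the pod.
        ax = sum(v[0] for j, v in enumerate(velocities) if j != i) / (n - 1)
        ay = sum(v[1] for j, v in enumerate(velocities) if j != i) / (n - 1)
        vx += alignment * (ax - vx)
        vy += alignment * (ay - vy)
        new_vel.append((vx, vy))
    new_pos = [(p[0] + v[0] * dt, p[1] + v[1] * dt)
               for p, v in zip(positions, new_vel)]
    return new_pos, new_vel
```

Calling `step_pod` once per frame moves each animal toward the group while keeping a minimum spacing, which is what makes the grouping read as a natural pod rather than a rigid formation.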


Social Metaphor-based Virtual Communities (voiceAvatar)

The design goal of this project was to develop avatars and virtual communities where the participants sense a tele-presence – that they are really there in the virtual space with other people. This collective sense of “being-there” does not happen over the phone or with teleconferencing; it is a new and emerging phenomenon, unique to 3D virtual communities.


MusicFace

Can you extract the emotional aspects of a piece of music to animate a face? Music-driven Emotionally Expressive Face (MusicFace) is an early-stage project that creates “facial choreography” driven by musical input. In addition to its artistic uses, MusicFace can be used to create visual effects in movies and animations, as well as realistic characters in computer games and virtual worlds.
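One way to picture music-driven facial choreography is a mapping from coarse musical features to facial parameter targets. The function below is a hypothetical sketch of that mapping; the feature set, ranges, and weightings are assumptions for illustration, not MusicFace's actual design.

```python
# Hypothetical music-feature -> facial-parameter mapping.
# Feature names, ranges, and weights are illustrative assumptions.

def music_to_face(tempo_bpm, loudness, major_mode):
    """Return facial parameter targets in [0, 1] from coarse music features.

    tempo_bpm:  beats per minute (e.g. 60 slow .. 180 fast)
    loudness:   normalized 0..1
    major_mode: True for a major key, False for minor
    """
    # Arousal rises with tempo and loudness; valence with major mode.
    arousal = min(1.0, max(0.0, (tempo_bpm - 60) / 120)) * 0.5 + loudness * 0.5
    valence = 0.8 if major_mode else 0.3
    return {
        "mouth.smile": valence * arousal,         # happy + energetic -> smile
        "brow.lower": (1.0 - valence) * arousal,  # minor + loud -> furrowed brow
        "eye.openness": 0.5 + 0.5 * arousal,      # energy widens the eyes
    }
```

Driving such a mapping frame by frame from analyzed audio, with smoothing between targets, is what turns a musical performance into continuous facial animation.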


Virtual CoLab Project

Virtual CoLab and the real CECM CoLab located at SFU. Every smart-board computer display is matched by an in-world 3D browser, so both can display the same web-based information for distanced collaborators.


SFU/FIT Collaborative Design Project

A collaboratively created cyber-fashion show, where sketches (white background) from FIT fashion designers are turned into 3D avatar models (black background) by SFU students, all using distance-collaboration tools between two coasts and countries.


GenFace - Exploring FaceSpace with Genetic Algorithms

Imagine an n-dimensional space describing every conceivable humanoid face, where each dimension represents a different facial characteristic. Within this continuous space, it would be possible to traverse a path from any face to any other face, morphing through locally similar faces along that path. We will describe and demonstrate a development system we have created to explore what it means to ‘surf’ through face space. We will present our investigation of the relationships between facial types and how this understanding can be used to create new communication and expression systems.
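The two operations described above, traversing a path through face space and breeding new faces with genetic operators, can be sketched briefly. Faces here are plain parameter vectors; the parameter count, names, and operator settings are assumptions for illustration, not GenFace's actual representation.

```python
import random

# Sketch of 'surfing' face space: faces are points in an n-dimensional
# parameter space, paths are interpolations, and a genetic algorithm
# breeds new faces. All specifics are illustrative assumptions.

def interpolate(face_a, face_b, t):
    """A point on the straight-line path from face_a to face_b (0 <= t <= 1)."""
    return [a + t * (b - a) for a, b in zip(face_a, face_b)]

def crossover(face_a, face_b):
    """Per-dimension uniform crossover of two parent faces."""
    return [random.choice(pair) for pair in zip(face_a, face_b)]

def mutate(face, rate=0.1, scale=0.05):
    """Small random perturbations explore the local neighbourhood."""
    return [x + random.gauss(0, scale) if random.random() < rate else x
            for x in face]

# Morph halfway between two 4-parameter faces, then breed a child:
face_a = [0.2, 0.8, 0.5, 0.1]   # e.g. jaw width, brow height, ...
face_b = [0.9, 0.3, 0.4, 0.7]
midpoint = interpolate(face_a, face_b, 0.5)
child = mutate(crossover(face_a, face_b))
```

Sampling `interpolate` at many values of `t` gives the morph sequence between two faces, while repeated crossover and mutation, with a user or fitness function selecting survivors, is the genetic-algorithm side of the exploration.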