CURRENT RESEARCH
Virtual Beluga Project - Vancouver Aquarium
An actual screenshot from our Virtual Beluga Interactive Prototype, showing realistically swimming belugas in a wild grouping (pod), created with real-time 3D graphics and artificial-intelligence systems.
Social Metaphor-based Virtual Communities (voiceAvatar)
The design goal of this project was to develop avatars and virtual communities where the participants sense a tele-presence – that they are really there in the virtual space with other people. This collective sense of “being-there” does not happen over the phone or with teleconferencing; it is a new and emerging phenomenon, unique to 3D virtual communities.
Can you extract the emotional aspects of a piece of music to animate a face? Music-driven Emotionally Expressive Face (MusicFace) is an early-stage project that creates “facial choreography” driven by musical input. In addition to its artistic uses, MusicFace can be used to create visual effects in movies and animations, as well as realistic characters in computer games and virtual worlds.
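As a rough illustration only, the sketch below maps coarse musical features (tempo, loudness) onto a few hypothetical facial-expression parameters; it assumes the librosa audio library and invented parameter names, and is not the actual MusicFace pipeline.

    import librosa
    import numpy as np

    def music_to_face_params(audio_path):
        # Decode the track and extract two coarse features.
        y, sr = librosa.load(audio_path)
        tempo, _ = librosa.beat.beat_track(y=y, sr=sr)   # estimated beats per minute
        tempo = float(np.mean(tempo))                    # beat_track may return an array
        loudness = float(np.mean(librosa.feature.rms(y=y)))

        # Invented mapping: faster / louder music reads as higher "arousal",
        # expressed here as wider eyes, a raised brow and a slight smile.
        arousal = min(1.0, 0.5 * (tempo / 180.0) + 0.5 * min(loudness * 10.0, 1.0))
        return {
            "eye_openness": 0.4 + 0.6 * arousal,   # 0..1 blend-shape weights
            "brow_raise": arousal,
            "mouth_smile": 0.5 * arousal,
        }

    if __name__ == "__main__":
        print(music_to_face_params("example_track.wav"))  # hypothetical file

In a full system, parameters like these would drive a facial animation rig over time rather than being printed once per track.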
Virtual Colab and the real CECM Colab located at SFU.
Every smart-board computer display is matched by an in-world 3D browser that can display the same web-based information for distant collaborators.
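A minimal sketch of the idea, assuming a plain HTTP service with invented endpoints (not the CoLab implementation): the physical smart-board posts its current URL, and each in-world browser polls for it so both show the same page.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    current_url = b"about:blank"  # the page the physical smart-board is showing

    class SyncHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # An in-world 3D browser polls this to mirror the smart-board display.
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(current_url)

        def do_POST(self):
            # The smart-board client posts its current URL whenever it navigates.
            global current_url
            length = int(self.headers.get("Content-Length", 0))
            current_url = self.rfile.read(length)
            self.send_response(204)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8000), SyncHandler).serve_forever()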
Using open source tools (Python / OpenGL / …), our research group is working to create an intuitive, interactive 3D knowledge visualization system that explores a more organic approach to knowledge and data visualization. The hope is that by borrowing from our interests in alternative user interface design, AI and aLife systems, visual and interaction design, and intelligent systems, we can create a more living and intuitive system for exploring data spaces.
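To give a flavour of the tool chain only, here is a minimal Python / OpenGL sketch (assuming PyOpenGL with GLUT, and randomly placed, hypothetical knowledge nodes) that renders a data cloud as slowly orbiting 3D points; the actual visualization system is far richer than this.

    import random
    from OpenGL.GL import *
    from OpenGL.GLU import gluPerspective
    from OpenGL.GLUT import *

    # Hypothetical data: each knowledge node gets a random 3D position.
    NODES = [(random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
             for _ in range(200)]
    angle = 0.0

    def display():
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        glLoadIdentity()
        glTranslatef(0.0, 0.0, -4.0)     # pull the camera back from the cloud
        glRotatef(angle, 0.0, 1.0, 0.0)  # slow orbit around the data
        glPointSize(4.0)
        glBegin(GL_POINTS)
        for x, y, z in NODES:
            glColor3f(0.3 + 0.35 * (x + 1), 0.6, 0.9)  # colour keyed to one axis
            glVertex3f(x, y, z)
        glEnd()
        glutSwapBuffers()

    def idle():
        global angle
        angle = (angle + 0.2) % 360.0
        glutPostRedisplay()

    if __name__ == "__main__":
        glutInit()
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH)
        glutInitWindowSize(640, 480)
        glutCreateWindow(b"knowledge space sketch")
        glEnable(GL_DEPTH_TEST)
        glMatrixMode(GL_PROJECTION)
        gluPerspective(60.0, 640.0 / 480.0, 0.1, 50.0)
        glMatrixMode(GL_MODELVIEW)
        glutDisplayFunc(display)
        glutIdleFunc(idle)
        glutMainLoop()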
SFU/FIT Collaborative Design Project
A collaboratively created cyber-fashion show, where sketches (white background) from FIT fashion designers are turned into 3D avatar models (black background) by SFU students, all using distance-collaboration tools across two coasts and two countries.
GenFace - Exploring FaceSpace with Genetic Algorithms
Imagine an N-dimensional space describing every conceivable humanoid face, where each dimension represents a different facial characteristic. Within this continuous space, it would be possible to traverse a path from any face to any other face, morphing through locally similar faces along that path. We will describe and demonstrate a development system we have created to explore what it means to ‘surf’ through face space. We will present our investigation of the relationships between facial types and how this understanding can be used to create new communication and expression systems.
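As a rough sketch only (with an invented parameter count and ranges, not the GenFace representation), the code below treats a face as a vector in face space, morphs along the straight-line path between two faces, and breeds a new face with genetic-algorithm-style crossover and mutation.

    import numpy as np

    N_DIMS = 16  # hypothetical number of facial parameters (jaw width, nose length, ...)
    rng = np.random.default_rng(0)

    def random_face():
        return rng.uniform(0.0, 1.0, N_DIMS)        # one point in face space

    def morph_path(face_a, face_b, steps=10):
        # Linearly interpolated faces along the path from face_a to face_b.
        return [face_a + t * (face_b - face_a) for t in np.linspace(0.0, 1.0, steps)]

    def crossover(parent_a, parent_b):
        # Uniform crossover: each parameter comes from one parent at random.
        mask = rng.random(N_DIMS) < 0.5
        return np.where(mask, parent_a, parent_b)

    def mutate(face, rate=0.1, scale=0.05):
        # Perturb a few parameters, keeping them in the valid [0, 1] range.
        hits = rng.random(N_DIMS) < rate
        return np.clip(face + hits * rng.normal(0.0, scale, N_DIMS), 0.0, 1.0)

    if __name__ == "__main__":
        a, b = random_face(), random_face()
        path = morph_path(a, b, steps=5)            # "surfing" from face a to face b
        child = mutate(crossover(a, b))             # one GA-style offspring face
        print(len(path), child[:4])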
We will combine our AI work on empathy-based modeling for AI character agents with our Deep Learning-based Creativity system (see papers and work in PDF), which realizes fine-art portraits from sitters.
AI for Enhancing Cultural Tours