Lucid Loop: A Virtual Deep Learning Biofeedback System for Lucid Dreaming Practice
Conference Proceedings: 2019 CHI Conference on Human Factors in Computing Systems, Extended Abstracts, April 2019. DOI: https://doi.org/10.1145/3290607.3312952
Bringing together an interdisciplinary team, we created a new AI technique to anonymize interview subjects and scenes in standard and 360 video, one that conveys emotional and knowledge information much better than current anonymization techniques.
Using cognitive science as a basis for our work, we attempt to model aspects of human creativity in AI. Specifically, we use neural networks (and evolutionary systems) in the form of deep learning, CNNs, RNNs, and other modern techniques to model aspects of human expression and creativity.
What is abstraction? Can AI techniques model the semantics of an idea, object, or entity, so that this understanding allows the meaning itself to be abstracted? We use several AI techniques, including genetic programming, neural networks, and deep learning, to explore abstraction in its many forms, mainly in the visual and narrative arts.
Portrait artists, and painters in general, have over centuries developed a little-understood, intuitive, and open methodology that exploits cognitive mechanisms in human perception and the visual system.
Using new visual computer modelling techniques, we show that artists use vision-based techniques (lost-and-found edges, center-of-focus techniques) to guide the viewer's eye path through their paintings in significant ways.
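As a minimal illustration of the kind of vision-based analysis involved, the sketch below separates "found" (high-contrast) from "lost" (low-contrast) edges in an image using a plain Sobel gradient magnitude. The Sobel operator and the two thresholds are illustrative assumptions for this sketch, not the lab's actual model.

```python
import numpy as np

def edge_strength(img):
    """Sobel gradient magnitude of a 2-D float image (naive convolution)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def classify_edges(img, lost_thresh, found_thresh):
    """Boolean masks for faint ('lost') and strong ('found') edges.

    Thresholds are arbitrary demo values, not calibrated to perception.
    """
    mag = edge_strength(img)
    found = mag >= found_thresh
    lost = (mag > lost_thresh) & (mag < found_thresh)
    return lost, found

# Tiny demo: a strong vertical step (0 -> 1) and a faint horizontal step (0 -> 0.2)
img = np.zeros((6, 8))
img[:, 4:] = 1.0
img[3:, :4] = 0.2
lost, found = classify_edges(img, lost_thresh=0.5, found_thresh=3.0)
```

In a painting-analysis setting, the "found" mask would mark hard contours that attract fixation, while the "lost" mask marks soft transitions the eye glides past; a real model would of course work on luminance channels of actual paintings rather than this toy image.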
Can you extract the emotional aspects of a piece of music to animate a face? Music-driven Emotionally Expressive Face (MusicFace) is an early-stage project that creates "facial choreography" driven by musical input. In addition to its artistic uses, MusicFace can be used to create visual effects in movies and animations, as well as realistic characters in computer games and virtual worlds.