Relevant content for the tag: painterly-rendering
Quantifying artist's use of human vision constructs to influence viewer eye gaze
Conference Proceedings: SPIE Human Vision and Electronic Imaging, Int. Society for Optical Engineering, 2009
Using Cognitive Science as a basis for our work, we attempt to model aspects of human creativity in AI. Specifically, we use Neural Networks (and evolutionary systems) in the form of Deep Learning, CNNs, RNNs, and other modern techniques to model aspects of human expression and creativity.
What is abstraction? Can AI techniques model the semantics of an idea, object, or entity well enough to allow abstraction of its meaning? We use several AI techniques, including genetic programming, Neural Nets, and Deep Learning, to explore abstraction in its many forms, mainly in the visual and narrative arts.
Portrait artists, and painters in general, have over centuries developed a little-understood, intuitive, and open methodology that exploits cognitive mechanisms in the human perceptual and visual system.
Using new visual computer modelling techniques, we show that artists use vision-based techniques (lost-and-found edges, center-of-focus techniques) to guide the viewer's eye path through their paintings in significant ways.
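The "lost and found edges" idea can be illustrated with a simple edge-strength map: regions of high gradient magnitude correspond to "found" (crisp) edges that attract gaze, while low-magnitude regions are "lost" (softened) edges a painter may de-emphasize. The following is a minimal sketch, not the actual model used in this work, assuming a grayscale image as a plain NumPy array and a hand-rolled Sobel filter:

```python
import numpy as np

def edge_strength(gray):
    """Sobel gradient magnitude of a 2-D grayscale array.
    High values mark 'found' (crisp) edges; low values mark
    'lost' (softened) edges."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Correlate the 3x3 kernels with every interior pixel.
    for i in range(3):
        for j in range(3):
            patch = gray[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

# Toy image: dark left half, bright right half -> one vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = edge_strength(img)
```

In practice such a map would be combined with a center-of-focus weighting and compared against recorded viewer gaze paths; here it only demonstrates how edge crispness can be quantified.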
Can you extract the emotional aspects of a piece of music to animate a face? Music-driven Emotionally Expressive Face (MusicFace) is an early-stage project that creates “facial choreography” driven by musical input. In addition to its artistic uses, MusicFace can be used to create visual effects in movies and animations, as well as realistic characters in computer games and virtual worlds.
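A music-to-face pipeline of this kind can be sketched in two stages: map coarse musical features to an emotional estimate (e.g. valence/arousal), then map that estimate to facial controls. This is a minimal illustrative sketch, not the MusicFace implementation; the feature inputs and facial parameter names (`mouth_smile`, `brow_raise`) are hypothetical:

```python
def music_to_face(tempo_bpm, mode_major, loudness):
    """Map coarse musical features to a valence/arousal estimate,
    then to two hypothetical facial animation parameters in [0, 1].

    tempo_bpm  -- estimated tempo in beats per minute
    mode_major -- True for major mode, False for minor
    loudness   -- normalized loudness in [0, 1]
    """
    # Faster, louder music -> higher arousal (clamped to [0, 1]).
    arousal = 0.5 * min(1.0, max(0.0, (tempo_bpm - 60) / 120)) + 0.5 * loudness
    # Crude valence heuristic: major mode reads as more positive.
    valence = 0.7 if mode_major else 0.3
    return {
        "mouth_smile": valence,   # hypothetical smile-curvature control
        "brow_raise": arousal,    # hypothetical brow-raise control
    }

# A fast, loud, major-key passage yields a bright, alert expression.
face = music_to_face(tempo_bpm=180, mode_major=True, loudness=1.0)
```

A real system would extract the features from audio and smooth the facial parameters over time to produce continuous choreography rather than a single pose.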