sketching the futurescapes ///// a research blog by Sebastian Gonzalez Dixon

About the Work
The latest work to use real-time tracking and face projection mapping, ‘INORI-PRAYER-,’ has been released; it relies on a state-of-the-art 1000 fps projector and ultra-high-speed sensing. This project was born when Nobumichi Asai (WOW) approached collaborators TOKYO (http://www.lab.tokyo.jp/), the dancing duo AyaBambi, and the Ishikawa Watanabe Laboratory at the University of Tokyo.

This project began when songs were created around ‘life,’ a theme proposed by Tanigawa (TOKYO), who acted as the project’s director. Creative and technical director Asai (WOW) and CG director Shingo Abe (WOW) completed the visual production and programming based on inspiration drawn from the songs. Aya Sato added the choreography, and TOKYO completed the project by turning it into a video.

‘Radioactive’ is the impression Asai drew from the music. Radioactivity wields destructive power, bringing with it ‘death’, ‘suffering’, and ‘sadness’, and then the ‘opportunity’ to overcome them. Accompanied by the overwhelming performance of AyaBambi, a visual synchronization of black tears, skulls, severed faces, Noh masks of agony, and the Heart Sutra is sublimated into a single piece of work.

The face-mapping system can follow intense performances that were previously impossible to track, thanks to the state-of-the-art 1000 fps projector DynaFlash※1 and ultra-high-speed sensing. The initial dilemma of speeding up the tracking at the expense of performance latitude was resolved by the WOW team, Professor Watanabe, and Tomoaki Teshima (EXVISION), who shaved off milliseconds over roughly three months of trial and error, enabling the completion of this system※2. The projected image looks as if it were part of the skin, and the expressions on a subject’s face, when distorted or transformed, are dramatically enhanced.
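Why shaving off milliseconds matters can be seen with some back-of-the-envelope arithmetic (a hedged sketch, not the team's actual numbers): the visible misalignment between a projected image and a moving face scales directly with end-to-end latency.

```python
# Illustrative sketch: projection misalignment as a function of system
# latency. Conveniently, (latency in ms) x (speed in m/s) comes out
# directly in millimetres, since ms/1000 * m/s * 1000 mm/m cancels.

def misalignment_mm(latency_ms: float, face_speed_m_per_s: float) -> float:
    """Offset between the projected image and the face, in millimetres."""
    return latency_ms * face_speed_m_per_s

# A face moving at 1 m/s under a conventional ~33 ms pipeline drifts
# over 3 cm; at a few milliseconds the error falls to skin-tight range.
for latency in (33.0, 10.0, 3.0):
    print(f"{latency:5.1f} ms latency -> {misalignment_mm(latency, 1.0):5.1f} mm offset")
```

The face speeds and latencies above are assumed round numbers chosen only to show the scaling.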

Interactive Dance Performance using Kinect from WE.DREAM CO on Vimeo.

The interactive digital content and software for the “Tahteravalli” dance show were developed by We Dream.
The motion-tracking algorithm creates visuals based on the projective coordinates of each dancer in XYZ space.
The setup uses a Kinect™ sensor and an 8000-lumen projector. Software development took 10 days, plus 2 days of debugging.
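Turning a dancer's position into projective XYZ coordinates typically means back-projecting Kinect depth pixels through the pinhole camera model. A minimal sketch of that mapping, assuming typical published Kinect v1 depth-camera intrinsics rather than anything from the show's own software:

```python
# Hedged sketch: convert one Kinect depth pixel (u, v, depth) into
# metric XYZ camera-space coordinates via the pinhole model.
# The intrinsics below are commonly cited Kinect v1 values (assumed).

FX, FY = 594.2, 591.0   # focal lengths in pixels
CX, CY = 339.3, 242.7   # principal point in pixels

def depth_pixel_to_xyz(u: int, v: int, depth_m: float) -> tuple:
    """Back-project a depth pixel to camera coordinates in metres."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)
```

A visual layer would then transform these camera-space points into the projector's coordinate frame with a separately calibrated extrinsic matrix.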
“Tahteravalli” was a live dance show, organized by Bilgi University students.

LIVING ROOM by recoil from Tina Tarpgaard on Vimeo.

LIVING ROOM is a room in motion, evoked not just by the dancers but also by an almost organically living video scenography. The floor gives in, the floor disappears – the space begins to breathe….
Creating a meeting between the human body and a motion sensitive scenography,
LIVING ROOM questions who is the puppet and who is the puppeteer – who controls who?
LIVING ROOM was nominated for Best Dance Performance of the Year at the Danish Performing Arts Awards 2012
(Reumert Awards).

Dancers: Nelson Smith, Siri Wolthoorn, Rumiko Otsuka, Jonas Örknér
Choreographer: Tina Tarpgaard
Video scenography: Ole Kristensen and Jonas Jongejan
Composer: Pelle Skovmand
Lighting design: Frederik Heitman
Assistant and much more: Jonas Corneliussen
Costume design: Inbal Lieblich

Premièred March 12th 2012
At Store Carl, Dansehallerne, Pasteursvej 20, Copenhagen V, Denmark

unnamed soundsculpture from Daniel Franke on Vimeo.

Project by Daniel Franke & Cedric Kiefer

produced by:
www.onformative.com
www.chopchop.cc

Documentation:
http://vimeo.com/38505448

Music: Machinefabriek "Kreukeltape"
http://www.machinefabriek.nu/

The basic idea of the project is to create a moving sculpture from the
recorded motion data of a real person. For our work we asked a dancer to
interpret a musical piece (Kreukeltape by Machinefabriek) as closely as
possible through the movements of her body. She was recorded by three depth
cameras (Kinect), whose overlapping images were later combined into a
three-dimensional volume (a 3D point cloud), so we could use the collected
data throughout the rest of the process. The three-dimensional image gave
us completely free handling of the digital camera, without any limitations
of perspective. The camera also reacts to the sound, supporting the
performer’s physical interpretation of the musical piece. It moves through
a noise field in which a simple change of the random seed creates ever-new
versions of the video, each offering a different composition of the
recorded performance. The multi-dimensionality of the sound sculpture is
already contained in every movement of the dancer, since the camera footage
allows any imaginable perspective.
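The seeded-noise idea described above can be sketched in a few lines (an assumed toy version, not the project's actual code): the same seed always reproduces the same displacement of the recorded point cloud, while a new seed yields a new composition.

```python
import random

# Toy sketch: displace a recorded point cloud with a seeded noise
# source, so that changing only the random seed deterministically
# produces a new "version" of the piece.

def displace_cloud(points, seed, amplitude=0.05):
    """Jitter each (x, y, z) point by up to +/- amplitude metres.
    Identical seeds reproduce identical displacements."""
    rng = random.Random(seed)
    return [(x + rng.uniform(-amplitude, amplitude),
             y + rng.uniform(-amplitude, amplitude),
             z + rng.uniform(-amplitude, amplitude))
            for (x, y, z) in points]
```

The point coordinates and amplitude here are illustrative; the real piece applies this idea per frame across the full 22,000-point body.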

The body, constant and indefinite at the same time, “bursts” the space
with its mere physicality, creating a first distinction between the self
and its environment. Only the body’s movements create a reference to the
otherwise invisible space, much as the dots bounce on the ground to give it
a physical dimension. Thus, the sound-dance constellation in the video does
not merely simulate a purely virtual space. The complex dynamics of the body
movements are also strongly self-referential. With the complex, quasi-static,
inconsistent forms the body “paints”, a new reality space emerges whose
simulated aesthetics go far beyond numerical codes.

Much as in painting, a single point appears very abstract, but the
more points are connected to each other, the more complex and concrete
the image becomes. The more perfect and complex the “alternative worlds” we
project (Vilém Flusser), and the closer together their point elements, the more
tangible they become. A digital body consisting of 22,000 points thus seems
so real that it comes to life again.
text: Sandra Moskova

nominated for the MuVi Award:
http://www.kurzfilmtage.de/en/competitions/muvi-award/selection.html

see the video in full quality:
www.daniel-franke.com/unnamed_soundsculpture.mov

HQ Stills
http://www.flickr.com/photos/37752604@N05/sets/72157629203600952/

fidelity [extracts] from visiophone on Vimeo.

FIDELITY is an interactive-video-dance performance.
GAP gallery. Barcelona, May, 2011

choreography: Natalia Brownlie
live visuals: Rodrigo Carvalho
sound design: Miguel Neto [*original sound track : eDit_ants]
live camera: Paulo Pinto

Live visuals with VDMX and Quartz Composer (using Rutt/Etra by v002.info and BadTV by memo.tv).
Point-cloud silhouette (at 3:40) with Kinect and the 1024KinectFun patch by 1024 architecture (1024d.wordpress.com/).
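The Rutt/Etra effect referenced above is a scan-line synthesizer: each image row is redrawn as a line displaced vertically by pixel brightness. A minimal sketch of the idea (an assumed toy version, not the v002 plugin):

```python
# Toy Rutt/Etra-style scan-line displacement: every row of a grayscale
# image becomes a polyline whose vertical position is pushed up in
# proportion to brightness.

def rutt_etra_rows(gray, displacement=10.0):
    """gray: 2D list of brightness values in [0, 1].
    Returns one list of (x, y) vertices per image row."""
    lines = []
    for row_index, row in enumerate(gray):
        line = [(x, row_index - brightness * displacement)
                for x, brightness in enumerate(row)]
        lines.append(line)
    return lines
```

Rendering those polylines over a dark background, frame by frame from a live camera feed, reproduces the classic wireframe-terrain look.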

See the full performance here :: www.vimeo.com/26575684

Dancing with the technology

Versus – First Teaser from 1n0ut on Vimeo.

Stereoscopic Realtime Dance Performance by 1n0ut, with Nanina Kotlowski

http://www.1n0ut.com

Experimental performance with Kinect and Adaptive Learning Algorithms

This is a first teaser of our performance Versus, which premiered in Salzburg at ARGEkultur. We have already started working on the second version, with full artificial-intelligence support and many additional visual effects (particle morphing, iso-surfaces, etc.); it will premiere this summer.

Versus
A real performer meets her virtual counterpart; both learn from each other and adapt,
dancing with or fighting against each other. A performative experiment that explores boundaries and possibilities in the struggle between the individual and the virtual, man and machine. Artistic and scientific positions in the fields of digital performance and artificial life are explored.

Once again (just as in our latest performances CPU, winner of the Salzburg Media Art Award, and 1dentity), we want to work in real and virtual spaces (IMAX-like), explore the boundaries of what is possible in interactive computer performance art, and combine the possibilities of media art with traditional art forms.

Team
1n0ut: Robert Praxmarer & Reinhold Bidner (Idea, Concept, Code, Realisation, Visualisation)
Nanina Kotlowski (dance & choreography)

Docu Cam: Tobias Hammerle

Thanks:
CADET – Center for Advances in Digital Entertainment Technologies
Made with Cinder, thx to the libCinder community
Podium Award of the Region of Salzburg
ARGEkultur Salzburg
Kultur Stadt Salzburg
Erste Salzburger Sparkasse Kulturfonds
MultiMediaTechnology / University of Applied Sciences Salzburg

Read more…

Interactive Dance Performance

DANCING WITH SWARMING PARTICLES is an interactive installation and performance that explores the relationship between a physical user/performer and a virtual performer, the “avatar”, whose body has the physical characteristics of morphing, flocking particles.

The avatar’s body is composed of flocking particles that initially float in the virtual space without any apparent order. It is through the energy of the physical user/performer’s movements that the particles start to morph into the avatar’s body.
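The "energy of the movements" driving such a morph is often computed as the summed frame-to-frame displacement of tracked skeleton joints. A guessed sketch of that measure (the joint names and dict shape are illustrative, not the installation's actual code):

```python
import math

# Hedged sketch: motion energy as the total distance all tracked
# joints moved between two consecutive frames.

def motion_energy(prev_joints, joints):
    """prev_joints/joints: dicts of joint_name -> (x, y, z) in metres.
    Joints with no previous sample contribute zero energy."""
    energy = 0.0
    for name, (x, y, z) in joints.items():
        px, py, pz = prev_joints.get(name, (x, y, z))
        energy += math.dist((x, y, z), (px, py, pz))
    return energy
```

Thresholding or smoothing this scalar per frame then gives a simple knob for blending the particles between free swarming and the avatar's body.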

After the performance, the audience was invited to experience and interact with their own swarming-particle avatars [02:22]

by Rodrigo Carvalho, performer Tamar Regev
coordinator : Anna Mura
Made in Specs [Synthetic Perceptive, Emotive and Cognitive Systems group] – UPF – Barcelona
[25.02.2011 – Barcelona]

//////////////////////////////////////////////////////////

Made in Unity3D, using Kinect and OSCeleton [vimeo.com/17966780] for the skeleton tracking.
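OSCeleton bridges the Kinect skeleton into other tools by broadcasting OSC messages of the form `/joint joint_name user_id x y z`. A minimal sketch of the receiving side (handler and variable names are illustrative; the actual piece consumes these messages in Unity3D):

```python
# Hedged sketch: collect OSCeleton "/joint" messages into a per-user
# skeleton dictionary that a visual layer can read each frame.

skeletons = {}  # user_id -> {joint_name: (x, y, z)}

def on_joint(address, joint_name, user_id, x, y, z):
    """Handler for one OSC message; ignores non-'/joint' addresses."""
    if address != "/joint":
        return
    skeletons.setdefault(user_id, {})[joint_name] = (x, y, z)

# With a library such as python-osc (an assumed choice; OSCeleton only
# defines the wire protocol), the handler would be registered roughly as:
#   dispatcher.map("/joint", on_joint)
```

From there, reading `skeletons[user_id]["head"]` or any other joint each frame is enough to drive particles or geometry.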

Full performance video here :: http://www.vimeo.com/21049955

Selected by Creative Applications
http://www.creativeapplications.net/other/dancing-with-swarming-particles-kinect-unity/
Read more…

Dance + Technology Performance

First experiments for “FIDELITY”, a dance and digital art collaboration between dance artist Natalia Brownlie and digital artist Rodrigo Carvalho.

for updates, check the project blog at:
http://fidelity2011.tumblr.com/

“FIDELITY” will be performed in May 2011 in Barcelona

music: Amon Tobin, “Reanimator”


Visit the project site @ http://fidelity2011.tumblr.com/
Read more…

Kinect Flock – Flocking + Particles

Built using Cinder + OpenNI + NITE + Xbox Kinect

Kinect Flock is a quick app we wrote using the user-tracking and depth-mapping abilities of the Xbox Kinect for Golan Levin’s Interactive Art + Computational Design studio.

We created a particle system that exhibits flocking/swarming behavior when the user is moving, and flocks to and fills out the user’s silhouette when they are standing still. As a result we have a simulation that ebbs and flows between the recognizable and the abstract.
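The moving/still behaviour described above can be sketched as a per-particle blend between random swarm drift and easing onto a point sampled from the user's silhouette (a toy assumption-laden version, not the studio's Cinder code):

```python
import random

# Toy sketch: each particle blends between free swarming and snapping
# to its silhouette target, weighted by how much the user is moving.

def step_particles(particles, silhouette_targets, user_motion, rng=random):
    """user_motion in [0, 1]: 1.0 = user moving (pure swarm drift),
    0.0 = user still (particles ease 20% of the way to their targets)."""
    new_particles = []
    for (x, y), (tx, ty) in zip(particles, silhouette_targets):
        # Swarm component: small random drift.
        sx = x + rng.uniform(-1.0, 1.0)
        sy = y + rng.uniform(-1.0, 1.0)
        # Silhouette component: ease toward the target point.
        fx = x + 0.2 * (tx - x)
        fy = y + 0.2 * (ty - y)
        # Blend the two behaviours by the user's motion energy.
        new_particles.append((user_motion * sx + (1 - user_motion) * fx,
                              user_motion * sy + (1 - user_motion) * fy))
    return new_particles
```

The 20% easing factor and unit drift are arbitrary illustrative constants; a full boids implementation would add separation, alignment, and cohesion forces to the swarm component.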


Read more about it and the studio’s other Kinect projects at http://golancourses.net/2011spring/projects/project-3-interaction/

more from Alex : http://alexwolfe.blogspot.com/

///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////