Atmospheric Memory by Rafael Lozano-Hemmer is a collaboration between Manchester International Festival, the Science and Industry Museum, FutureEverything, ELEKTRA / Arsenal Contemporary Art (Montreal) and Carolina Performing Arts – University of North Carolina at Chapel Hill.

“The most ambitious art project at this year’s festival” – New York Times

A breathtaking immersive art environment, Rafael Lozano-Hemmer’s Atmospheric Memory scours the sky for the voices of our past. Inspired by computing pioneer Charles Babbage’s 180-year-old proposal that the air is a ‘vast library’ holding every word ever spoken, Atmospheric Memory asks: was Babbage right? Can we rewind the movement of the air to recreate long-lost voices? And if so, whose would we want to hear?  

Projection content design, media server programming, NDI networking, chamber architectural design, artwork layout, 3D visualizations and computational design by Kitae Kim.


Atmospheric Memory

The Making of Atmospheric Memory

See Rafael Lozano-Hemmer tell us more about Atmospheric Memory, a breathtaking immersive art environment.

Babbage Lovelace (Manchester, 2019)

A generative animation based on the collected texts of 19th-century polymath Charles Babbage and mathematician Ada Lovelace, known for their work on Babbage's proposed Analytical Engine, an early design for a general-purpose computer. In this piece, letters flow in a turbulent stream and occasionally form sentences.
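
The underlying mechanic can be pictured as a simple particle system: letter "particles" are advected through a turbulent velocity field and, from time to time, a subset is pulled toward target positions that spell a line from the Babbage/Lovelace corpus. The sketch below is purely illustrative, assuming a toy flow field and a made-up Letter class; it is not the artist's production code.

```python
import math
import random
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Letter:
    char: str
    x: float
    y: float
    target: Optional[Tuple[float, float]] = None  # slot to occupy when spelling a sentence

def turbulence(x: float, y: float, t: float) -> Tuple[float, float]:
    """Cheap stand-in for a turbulent velocity field (sums of sines)."""
    u = math.sin(1.7 * y + 0.8 * t) + 0.5 * math.sin(3.1 * x - 1.3 * t)
    v = math.cos(1.3 * x + 0.6 * t) + 0.5 * math.cos(2.3 * y + 1.1 * t)
    return u, v

def assign_sentence(letters, sentence, y=0.0, spacing=0.4):
    """Pin some free-floating letters to slots that spell a sentence."""
    free = [p for p in letters if p.target is None]
    random.shuffle(free)
    for i, ch in enumerate(sentence):
        if not free:
            break
        p = free.pop()
        p.char, p.target = ch, (i * spacing, y)

def step(letters, t, dt=0.016, pull=2.5):
    """Advect every letter through the flow, blending in attraction to its slot."""
    for p in letters:
        u, v = turbulence(p.x, p.y, t)
        if p.target is not None:
            u += pull * (p.target[0] - p.x)
            v += pull * (p.target[1] - p.y)
        p.x += u * dt
        p.y += v * dt

# Usage: a cloud of letters that periodically coalesces into a Babbage phrase.
letters = [Letter(random.choice("abcdefghijklmnopqrstuvwxyz "),
                  random.uniform(-5.0, 5.0), random.uniform(-3.0, 3.0))
           for _ in range(400)]
assign_sentence(letters, "the air itself is one vast library")
for frame in range(600):
    step(letters, t=frame * 0.016)
```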

Atmosphonia

Rafael Lozano-Hemmer's 'Atmospheric Memory' premiered at Manchester International Festival 2019 in a custom-built chamber at the Science and Industry Museum. 'Atmosphonia' is a sound environment featuring 3,000 audio channels on custom-made speakers with LED lights. In this tunnel, the recordings change typology every metre: starting with wind, then water, fire, ice, over 200 types of insects, over 300 types of birds, bells, bombs and so on. By Rafael Lozano-Hemmer; film by Mariana Yanez.
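
As a rough sketch of the structure just described, the mapping from speaker channel to sound typology along the tunnel could look like the following. The channel count per metre and the cycling order are assumptions for illustration only, not the installation's actual software.

```python
from itertools import cycle

# Ordered typologies following the description above; the real sequence is far
# longer (hundreds of insect and bird recordings), summarized here by category.
TYPOLOGIES = ["wind", "water", "fire", "ice", "insects", "birds", "bells", "bombs"]

def assign_typologies(num_channels: int = 3000, channels_per_metre: int = 50):
    """Map each channel index to (metre along the tunnel, sound typology)."""
    typology = cycle(TYPOLOGIES)
    current = next(typology)
    assignment = []
    for ch in range(num_channels):
        metre = ch // channels_per_metre
        if ch > 0 and ch % channels_per_metre == 0:
            current = next(typology)  # typology changes at each metre boundary
        assignment.append((metre, current))
    return assignment

channels = assign_typologies()
print(channels[0], channels[75], channels[2999])  # e.g. (0, 'wind') (1, 'water') ...
```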

Cloud Display

"Cloud Display" is a vertical water fountain consisting of 1,600 ultrasonic atomizers controlled by a machine-learning voice-recognition system. When a participant speaks into an intercom, the piece writes the words or sentences spoken using wisps of pure water vapour. The words appear and disappear slowly, forming an evocative and temporary display of language. When no one is participating, the piece occasionally becomes a waterfall of vapour. The piece premiered at the "Atmospheric Memory" exhibition-performance at Manchester International Festival in 2019 and is part of a series of water-writing installations that began with "Call on Water" in 2016. The project works in most languages and recognizes different accents.

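The chain from speech to vapour can be pictured as a rasterization problem: the recognized text is rendered onto a very low-resolution grid, and each "on" cell triggers one atomizer. The sketch below assumes a 16 x 100 nozzle layout (1,600 atomizers), Pillow for rendering and a hypothetical fire_atomizer() hardware callback; none of this is the artwork's actual control software.

```python
from PIL import Image, ImageDraw, ImageFont

ROWS, COLS = 16, 100  # assumed layout of the 1,600 atomizers

def rasterize(text: str) -> list:
    """Render text to a ROWS x COLS boolean mask."""
    img = Image.new("L", (COLS, ROWS), 0)
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()  # small bitmap font
    draw.text((0, 2), text, fill=255, font=font)
    px = img.load()
    return [[px[c, r] > 128 for c in range(COLS)] for r in range(ROWS)]

def display(text: str, fire_atomizer) -> None:
    """Fire every atomizer whose cell is 'on' for the given text."""
    mask = rasterize(text)
    for r in range(ROWS):
        for c in range(COLS):
            if mask[r][c]:
                fire_atomizer(r, c)  # hypothetical hardware callback

# Usage with a stub that just prints which nozzles would fire:
display("HELLO", lambda r, c: print(f"puff at row {r}, col {c}"))
```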

The Build of the Atmospheric Memory Chamber


Interactive Visualization

Simulating the exhibition as an interactive previsualization gave everyone confidence in the vision for this highly technical show and served as the source of truth that all parties could reference. Each subsequent iteration built on the initial visualization to plan and reconfigure the touring show for different venues.
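
As a sketch of what such a source of truth might contain, each venue can be modelled with its chamber dimensions and each artwork with its footprint, so a reconfiguration can at least be sanity-checked before the previsualization is updated. All names and numbers below are illustrative assumptions, not the production toolchain.

```python
from dataclasses import dataclass

@dataclass
class Venue:
    name: str
    width_m: float
    depth_m: float
    height_m: float

@dataclass
class Artwork:
    name: str
    footprint_m: tuple  # (width, depth) floor space required

def fits(venue: Venue, artworks: list) -> bool:
    """Naive check that the combined footprints fit the chamber floor area."""
    used = sum(w * d for w, d in (a.footprint_m for a in artworks))
    return used <= venue.width_m * venue.depth_m

# Usage with placeholder dimensions (illustrative only):
manchester = Venue("Science and Industry Museum chamber", 30.0, 20.0, 8.0)
pieces = [Artwork("Cloud Display", (6.0, 2.0)), Artwork("Atmosphonia", (20.0, 3.0))]
print(fits(manchester, pieces))
```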
