28 Jul/20

TWI-NY TALK — JANET BIGGS: AUGMENTATION AND AMPLIFICATION

Mary Esther Carter reunites with A.I. Anne, Richard Savery, and Janet Biggs for a site-specific Fridman Gallery commission (photo courtesy Janet Biggs)

Who: Mary Esther Carter, Richard Savery, A.I. Anne, Janet Biggs
What: Final presentation of “SO⅃OS: a space of limit as possibility”
Where: Fridman Gallery online
When: Thursday, July 30, 8:00; $5 for access to all twelve performances
Why: In July 2019, I experienced multimedia artist Janet Biggs’s workshop presentation of her work-in-progress performance How the Light Gets In, an extraordinary collaboration at the New Museum exploring the ever-growing relationship between humans and technology. It featured singer and dancer Mary Esther Carter; machine learning program A.I. Anne; composer and music technologist Richard Savery; drummer Jason Barnes, who lost an arm in an accident and uses a robotic prosthesis; marathon runner Brian Reynolds, a double below-knee amputee who runs on carbon fiber prostheses; and violinists Earl Maneein and Mylez Gittens.

The Pennsylvania-born, Brooklyn-based Biggs has traveled to unusual places all over the world for her video installations, including a sulfur mine in the Ijen volcano in East Java (A Step on the Sun), the Taklamakan desert in the Xinjiang Uyghur Autonomous Region of China (Point of No Return), a coal mine in the Arctic (Brightness All Around), the crystal caverns below the German Merkers salt mine (Can’t Find My Way Home), and the Bonneville Salt Flats in Utah (Vanishing Point). She’s also all set to go to Mars after several simulated adventures.

During the pandemic lockdown, Biggs has been hunkering down at home with her husband and occasional cinematographer, Bob Cmar, and their cat, Hooper, but that hasn’t kept her from creating bold and inventive new work. On July 30, she will debut the site-specific multimedia performance piece Augmentation and Amplification, concluding the Fridman Gallery’s terrific “SO⅃OS” series of cutting-edge performances made during the coronavirus crisis that incorporate the empty gallery space on the Bowery, delving into the feeling of isolation that hovers over us all. (The program also features Daniel Neumann’s Soundcheck, Luke Stewart’s Unity Elements, Abigail Levine’s Fat Chance, Hermes, and Diamanda Galás’s Broken Gargoyles, among others; a five-dollar fee gives you access to all the works.)

In her third conversation with twi-ny, Biggs takes us behind the scenes of her latest innovative, boundary-pushing project.

twi-ny: You’re so used to traveling. What’s it been like being stuck at home?

janet biggs: Working on the performance has been a saving grace for me — to have something to focus on that feels exciting. But it has also had its share of interesting challenges.

twi-ny: How did it come about?

jb: I was asked by experimental sound artist and audio engineer Daniel Neumann if I would be interested in doing a performance for the series he was organizing for Fridman Gallery. The premise was that he would set up the gallery space with audio mics, projectors, and cameras, clean the whole space, and leave. The performer would be given a code to the lock on the gallery so they could safely enter the space by themselves and perform within shelter-in-place guidelines. During the performance, Daniel would mix the sound remotely from his home and livestream it.

I loved his premise, but I don’t perform. I direct. I said I was eager to figure out a way to direct from home and send both a live performer and an Artificial Intelligence entity into the space. Both Daniel and gallery owner and director Iliya Fridman were excited about my proposal and gave complete support to the idea.

twi-ny: And then you turned to Mary Esther Carter and Richard Savery.

jb: Yes, I reached out to Mary and Richard, both of whom I worked with on the performance you saw at the New Museum. Happily, they were up for the challenge.

Richard Savery, Janet Biggs, and Mary Esther Carter rehearse Augmentation and Amplification over Zoom (photo courtesy Janet Biggs)

twi-ny: Which led you back to A.I. Anne.

jb: Richard has been working on expanding A.I. Anne’s abilities and neural diversity. A.I. Anne was trained on Mary’s voice and named for my aunt, who was autistic and had apraxia. Since the performance last year, A.I. Anne has gained more agency through deep machine learning and semantic knowledge. The entity can now express and respond to emotion. We are also using phenotypic data and first-person accounts of people on the autism spectrum for vocal patterning.

We want to explore neural diversity and inclusion in creative collaborations between humans and machines. Our challenge was how to get A.I. Anne into the gallery so she could perform live. A.I. Anne is a disembodied virtual entity, and Richard lives in Atlanta. While A.I. Anne is autonomous, Richard needed to be able to receive a single audio channel of Mary’s voice from the gallery and then send back a single-channel audio response from A.I. Anne. With strong wifi and the right software, our tests from Atlanta to the gallery have been successful, so keep your fingers crossed for Thursday.
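Neither the gallery’s audio rig nor A.I. Anne’s code is public, but the round trip Biggs describes (one mono channel of Mary’s voice going out, one mono channel of synthesized response coming back) could be sketched in Python roughly as follows; the port, chunk size, and ai_anne_respond placeholder are all hypothetical:

```python
import socket

import numpy as np

SAMPLE_RATE = 44100  # assumed sample rate for the mono stream
CHUNK = 4096         # bytes per network read (hypothetical)


def ai_anne_respond(voice_in: np.ndarray) -> np.ndarray:
    """Stand-in for the A.I. Anne model: one channel of Mary's
    voice in, one channel of synthesized response out."""
    return voice_in  # echo placeholder; the real model is not public


# Listen for the single mono channel streamed from the gallery.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 9000))  # hypothetical port
server.listen(1)
conn, _ = server.accept()

while True:
    raw = conn.recv(CHUNK)
    if not raw:
        break
    raw = raw[: len(raw) - (len(raw) % 2)]      # keep whole 16-bit samples
    voice = np.frombuffer(raw, dtype=np.int16)  # 16-bit PCM mono in
    reply = ai_anne_respond(voice)
    conn.sendall(reply.astype(np.int16).tobytes())  # mono reply back out

conn.close()
server.close()
```

A real low-latency setup would more likely use UDP/RTP or a dedicated network-audio tool such as JackTrip rather than raw TCP, but the shape is the same: one channel up to Atlanta, one channel back to the gallery.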

twi-ny: What was it like collaborating long distance?

jb: I’ve been having rehearsals with Mary and Richard for the last couple of weeks via Zoom. We have been able to work out the choreography remotely and have even developed some new camera angles due to the constraints of cellphone cameras and apartment sizes. The percussive soundtrack that Mary will dance to was generated by EEG sonification, the process of turning data into sound. Richard developed a process where he could use his brainwaves to control a drumset, creating a kind of brain-beat.
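Savery’s actual brain-beat pipeline isn’t public, but EEG sonification generally means mapping features of a brainwave signal onto musical events. As a loose illustration only, with an invented sample rate, threshold, and General MIDI drum notes, a single-channel mapping might look like:

```python
import numpy as np

SAMPLE_RATE = 256     # Hz, a typical consumer-EEG rate (assumed)
KICK, SNARE = 36, 38  # General MIDI percussion note numbers


def eeg_to_drum_events(eeg: np.ndarray, threshold: float = 1.5):
    """Turn one channel of EEG into (time, note) drum triggers.

    Normalizes the signal, then fires a kick on strong positive
    peaks and a snare on strong negative ones.
    """
    normalized = (eeg - eeg.mean()) / eeg.std()
    events = []
    for i, value in enumerate(normalized):
        if value > threshold:
            events.append((i / SAMPLE_RATE, KICK))
        elif value < -threshold:
            events.append((i / SAMPLE_RATE, SNARE))
    return events


# One second of fake "brainwaves" just to show the output shape.
rng = np.random.default_rng(0)
print(eeg_to_drum_events(rng.normal(size=SAMPLE_RATE))[:5])
```

The resulting (time, note) pairs could then drive a MIDI drum kit or sampler; in a sketch like this, the musical interest comes from the signal itself rather than from the mapping.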

And lastly, I’ve been editing video images. Some will be projected on walls in the gallery and some will be a video overlay run by the streaming software, so we will essentially have multiple layers of images and live action. If all goes well, I think this will be a pretty exciting performance.

twi-ny: Is that all? You don’t exactly make things easy for yourself.

jb: I’ve been to the gallery myself to see the layout and make some staging and lighting decisions. I will send Daniel a floor plan marked with those decisions, along with a tech script. Daniel will set up the space (projector angles, lighting, camera and microphone placements) during the day on Thursday and then completely clean the space. Thursday evening, Mary will enter the space alone. Richard will run A.I. Anne from his computer in Atlanta. Daniel will mix the sound and images remotely into a livestreamed Vimeo channel that the audience can access from their homes. And I’ll be watching from home, holding my breath that everything works!