By Krupa Kanaiya

The Volume Of Development: A Q&A With Candice Alger

Volumetric capture has been developing for years. From some of your favorite films to new medical advancements, this technology is more familiar than you may realize. After 20 years in the film industry, Candice Alger offers her insight into the realm of volumetric capture. From the evolution of the technology to its effects on diversity, this method of creation is making way for young and diverse filmmakers to share their stories in record time.

What is volumetric capture?

Volumetric filmmaking combines the artistry of cinema with the interactivity of gaming, and uncovers visuals beyond the limits of traditional media ... Volumetric filmmaking allows us to capture real people, real stories and real places, and invites audiences inside the virtual worlds where the story takes place.

It is a relatively new technology, similar to where motion capture was 20 years ago, when the tools for post-processing were still being developed. We have a 4D Views Holosys system from the company based in Grenoble, France. It is a 32-camera capture array that covers a 3-meter capture volume, and it takes roughly 12-15 hours per character-minute to process the data. We are able to capture live performances of actors in full wardrobe/costume, interacting with set pieces or props, so long as they comply with the technical specifications of the system.
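That throughput figure has real scheduling consequences. As a back-of-the-envelope illustration, the estimate below uses only the 12-15 hours-per-character-minute range quoted above; the function name and the 5-minute example are our own illustrative assumptions, not details from the interview:

```python
# Rough processing-time estimate for a volumetric shoot, based on the
# ~12-15 hours of processing per captured character-minute quoted above.
# The function name and example numbers are illustrative assumptions.

def processing_hours(character_minutes: float,
                     low_rate: float = 12.0,
                     high_rate: float = 15.0) -> tuple[float, float]:
    """Return a (low, high) estimate of processing hours for a capture."""
    return (character_minutes * low_rate, character_minutes * high_rate)

# Example: a single 5-minute performance of one character
low, high = processing_hours(5)
print(f"{low:.0f}-{high:.0f} hours of processing")  # 60-75 hours
```

At those rates, even a short performance implies days of compute, which is why the answer stresses that better post-production tooling will expand the use cases.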

What you get at the end of the process is a photoreal, hologram-like, high-quality 3D image of the performance that can be viewed from any viewpoint. These images can then be imported into web, mobile, or virtual-world applications and viewed naturally in 3D. Post-production tools for this technology are currently in development by a handful of companies; once those tools are available, the use cases will expand dramatically.

Where have we already seen volumetric capture used?

We used the technology to capture background characters for a series called The Liberator that was produced by Trioscope Films for Netflix. We worked with the team at Trioscope here in Atlanta to capture a library of "virtual extras" in full wardrobe that, once processed, could be utilized to populate the scenes in post-production. We even painted a treadmill green so that we could capture longer ranges of motion for each performance. People viewing the live-action series might not realize that those characters are virtual. Trioscope Films is an innovator in filmmaking, and we were very fortunate to be involved in this production.

We are also seeing a fair amount of interest in the music industry. Due to COVID, the music industry was forced into a lockdown from concert performances. Because the artists couldn’t perform live, some of them pivoted by deploying their performances virtually. Epic's Fortnite has been leveraging this approach quite successfully. We have captured multiple artists for deployment via custom web/streaming applications. The interest seems to be growing, and we are getting more inquiries every day.

We have also been approached by Rite Media here in Atlanta about integrating volumetric capture with AI and machine learning. The Last Goodbye: Virtual Reality Holocaust Survivor Testimony experience produced by USC Shoah Foundation is an amazing example of what is possible. We are exploring opportunities like this that leverage the technologies in a very powerful way.

We have done a few other interesting projects utilizing volumetric cinematography for immersive experiences. We worked with Santander Bank and the Arnold Worldwide agency to capture the performance of a young woman playing a homeless health care worker living in her car. Once rendered, that performance was incorporated into an immersive experience launched in an effort to bring awareness to the problem. The end result was a very compelling example of what is possible with this emerging tech.

Another example was the project we worked on with Futurus and United Way of Greater Atlanta to help create the Call the Play VR experience for the Super Bowl inside Mercedes-Benz Stadium in 2019. We captured NFL legend Jerry Rice on our volumetric stage speaking about the program. Those files were then integrated into a virtual Mercedes-Benz Stadium as part of the virtual gaming experience designed to help promote the United Way and the NFL’s partnership in Character Playbook. The program educates students on how to cultivate and maintain healthy relationships during their middle school years.

When did you start working with volumetric capture?

When I joined CMII back in 2017 to work with David Cheshier and Elizabeth Stickler to help design and equip the proposed media lab, we began exploring what emerging technologies would be advantageous to our program. We wanted to bring in technology that would be relevant to the creative industry for years to come. We knew that motion capture, rapid animation prototyping, pre-visualization, audio, AR, and VR were all areas we needed to target. Volumetric capture was just coming onto the scene. We did our research and chose the 4D Views system out of France. We determined that it was the most robust system on the market, and the team in France was willing to partner with us to train and support us in building the pipeline. We acquired a system and then brought in the technical expertise to operate and teach the technology.

James Martin is one of our Professors of Practice who joined our team to help teach immersive media production. He and one of our students, Joel Mack, are now recognized as two of the most experienced volumetric cinematographers in the country. James has also developed a practicum around the technology that gives students hands-on experience with it. One of the students who took that class, Nicholas Oxford, was able to leverage the skills and experience he gained there to land an amazing job at Peloton. This is an example of the unique curriculum that CMII is developing. Our faculty are developing curriculum from the ground up based upon real-world production pipelines. We are teaching our students how to leverage these emerging technologies to tell their stories, because we know that if we can do that, they will find jobs in any sector, because the world is becoming more digital every day.

What cutting edge technology are you using now?

CMII is a partner with GSU's School of Public Health on a Facebook Reality Labs Grant that was recently awarded to create a film aimed at advancing racial justice. The project is designed to increase viewers’ empathy and enhance their understanding of racism and structural inequality.

The immersive film we will be producing with Laura Salazar, School of Public Health professor and director of the Ph.D. program, is called “Consider ME”. Production is scheduled to begin in late 2021. We have worked with Laura and her team in the past, and we are extremely excited about working with them again on this compelling project.

Why should we use this tech over traditional animation? How has this technology affected your students at GSU?

Animation is obviously an art form. Traditionally, it's been very labor intensive and very expensive. The animation software has not always been user-friendly; there are some complexities to it, so there's a lot of learning.

You have to learn how to model a character, rig the character, and then animate that character. But there are software tools out there today. Epic Games is really busting everything wide open with their MetaHumans development. We have a strategic partnership with a company called Reallusion out of Taiwan, which has a pretty powerful set of animation tools. Reallusion has donated hundreds of thousands of dollars' worth of software to our labs, so we're teaching those pipelines to our students in the classes.

It's basically getting [students] animating and telling their stories much faster. Two of our professors, Max Thomas and James Martin, are teaching the rapid animation pipeline. They can have students animating in eight hours.

If you want to be an animator in feature films or high-end visual effects or whatever, there are a lot of skills you're going to have to learn and a lot of different steps in the process, but this basically simplifies that so that you can drag and drop assets. You can get a character that's already rigged. You can pull wardrobe from an asset library in these game engines, and you can start telling your story without being an expert 3D animator. We feel like if we can get them at least using these types of tools, then they can figure out where they want to specialize.

Do they want to be a modeler or a rigger? Do they want to do lighting? I think it exposes them to what they're capable of far sooner than if they go down the traditional path of learning Maya or MotionBuilder, which are a little more complex to get into.

What is CMII’s strategy in the area of animation?

Animation is obviously an art form. Traditionally, it has required a very complex learning experience that takes years to master, and that is still the case in many production pipelines. CMII has added a production pipeline that enables our students to begin animating their stories very quickly. We want our students to be able to explore the animation process and express themselves without having to dedicate the years of training it typically takes to become a skilled expert. We were able to partner with an amazing software company out of Taiwan called Reallusion. They like what CMII is doing to help change the face of storytelling and, as a result, have donated hundreds of thousands of dollars' worth of their proprietary software to our labs to help facilitate that mission. We are designing curriculum around their suite of software that enables our students to begin animating early in their education. The software not only provides intuitive asset creation and editing tools, it also offers an extensive library of existing assets that can be easily loaded into the various toolsets/pipelines, empowering our students to move quickly into the creative storytelling process. They can select a pre-rigged character; choose hair, make-up, and wardrobe from a drag-and-drop menu; and then place them in a virtual world that they can generate in a similar fashion.

This approach also aligns with the rapid evolution of the gaming engines that are changing the face of animation as we know it today. We understand that there are skills required to master animation as a pure “art form”, and we continue to teach them. We simply wanted to provide a “rapid prototyping pipeline” to our students so that they could begin to explore storytelling through animation, and then determine what area of expertise they might want to focus on in greater detail. Students are also leveraging some of these tools in their game design classes. It is exciting to see our students actually creating content that also encourages collaboration with other students in the design and execution process from script to screen.

Can you tell us about your upcoming grants?

For the Facebook grant, we're going to use volumetric capture to capture some testimonials from people who have experienced something in their lives that affected them …

We want to try and educate people about how racism and prejudice affect people. We want to tell these testimonials, and then we're going to take that volumetric testimonial, and we're going to deploy that in a headset, but also on the web so that we can reach a much broader audience.

It's really just about telling stories that people need to hear. Dr. Salazar is interviewing the participants right now. She was saying that one example would be a person who was wrongly convicted, served their time, and came out not angry, but able to pause and take a more positive view of it. Then there are other people who have gone through equally devastating scenarios, and it's impacted them in a very negative way. So it's really an educational process: just getting people to understand and to empathize with those experiences.


“We are teaching our students how to leverage these emerging technologies to tell their stories, because we know that if we can do that, they will find jobs in any sector because the world is becoming more digital every day.”


How does this expand inclusivity in the industry?

I believe the biggest driver of change is the democratization of technology. When I began working in animation and virtual production, the cost of entry for an individual or a small company could be prohibitive. You either had to be enrolled in an expensive art program or employed by a major studio or animation company. Hands-on experience was hard to come by.

Today, we continue focusing our energies on providing access to a student body that is very diverse and often financially challenged. Some of our students don’t even own a smartphone or computer.

It has been even more difficult during the COVID lockdown. Our staff was constantly scrambling to find creative ways to get the necessary tools into the hands of our students. Some of our labs were open under reduced-capacity guidelines, and students were able to use them under safe conditions. We remain committed to providing access to tools they typically would not have, because we know that is the path to empowerment. We are constantly working to increase our limited lab capacity through revenue generation in our studios and philanthropic efforts in the community. In just four years, we are already seeing results. Our admissions are off the charts, and the number of women in our programs is holding steady at about fifty percent. We have an amazing team that is tireless when it comes to supporting these efforts.

As the technology expands how do you see this tech impacting the current state of the industry in Georgia?

I'm very optimistic about the state of Georgia and the future of technology. The film industry is wonderful, and we're so blessed to have it. A lot of people have worked really hard to bring it here, and it's the best marketing the state of Georgia could ever have, but not everyone gets an opportunity to work on the Marvel films. You look at what the Georgia Film Academy is doing; you look at what CMII is doing. We're teaching the part of our population that's not working in the film industry how to leverage these tools to find jobs in other industries. You look at [how] retail is going to AR and VR, right? Retail's gonna be huge, automotive, medical, but every industry is moving into it …

When you look at NFTs, that's opening the door for artists and creatives to make money just by creating digital art or registering their music, and it'll keep expanding. Everything that's happening in the digital world right now is really leveling the playing field for everyone, in my opinion. It's really exciting that Georgia is very supportive in all those areas.

There are a lot of companies I speak to, Primal Screen and Right Media and Trick 3D; there are so many companies in Georgia that have been in the creative space for many years. They've managed to survive this COVID shutdown, but they're pivoting and figuring out ways to reinvent themselves, to leverage the technologies that are still evolving. It's exciting to see how much LED technology is coming to Georgia. We're getting an LED screen at CMII, and when you look at virtual production as demonstrated by “The Mandalorian,” that's taking virtual production to its max. So I just think Georgia's done really well at nurturing and supporting innovation and emerging tech.

Where can folks see your latest projects using this technology?

Lisa Ferrell is a key player at CMII, and she's spending the summer putting together more social media examples and stories of the kind of work we're doing.

We worked with a company called XYZ out of Italy to do volumetric capture. They do architectural design, and they want to populate the beautiful architectural worlds they build with humans, so we captured a library of humans for them. We also worked with The Arnold Agency on a piece called “In Someone Else's Shoes,” teaching people about homelessness in California. We captured the story of a young woman who was a nursing assistant in a hospital but was actually living in her car. We told that story using volumetric capture.

We're doing a lot of music work, and we just did some match moving, which gets really complicated. We just did a huge capture for Reallusion. You know when you play a video game and you have the controller, and the character has to go left or right, or pivot or turn around? You have to capture a huge library of what's called match-moving data for that. We did two full days of match-moving data capture of an athlete running different patterns, so that you can inform the navigation system inside of a gaming engine and then control the characters. It's not really exciting capture; it's not a dramatic theatrical thing, but it's the backbone of how you navigate characters in a virtual world.
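To give a rough sense of what that library of captured runs feeds into: a game engine's navigation layer can pick whichever captured clip best matches the speed and turn the controller is asking for. The clip names, numbers, and matching cost below are purely illustrative assumptions, not Reallusion's actual pipeline; real engines also blend clips and match pose continuity, but the core idea is the same.

```python
import math

# Hypothetical library of captured locomotion clips: (name, speed m/s, turn deg).
# These entries are illustrative, not real capture data.
CLIPS = [
    ("idle",       0.0,   0.0),
    ("walk_fwd",   1.4,   0.0),
    ("run_fwd",    4.0,   0.0),
    ("walk_left",  1.4, -45.0),
    ("walk_right", 1.4,  45.0),
    ("pivot_180",  0.5, 180.0),
]

def best_clip(desired_speed: float, desired_turn: float) -> str:
    """Return the clip whose (speed, turn) best matches the controller input."""
    def cost(clip):
        _, speed, turn = clip
        # Scale the turn error so a 30-degree miss weighs like a 1 m/s miss
        # (an arbitrary tuning choice for this sketch).
        return math.hypot(speed - desired_speed, (turn - desired_turn) / 30.0)
    return min(CLIPS, key=cost)[0]

# Controller asks for a moderate walk while turning right:
print(best_clip(1.3, 40.0))  # walk_right
```

In a shipped game the selected clip would then be blended with its neighbors rather than played verbatim, but this is why the capture sessions need so many movement patterns: the richer the library, the closer the nearest clip is to any input.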

In June we did a pretty big music capture for a company out of California called Symbols Zero. They brought in some musicians to do volumetrics. We also did Xsens capture, which is a sensor-based capture, of musicians in the studio; they'll be deploying that via a streaming app, which is all the rage right now. We're seeing a lot of interest, a lot of phone calls coming in now that people are starting to open back up.

During COVID we did COVID-compliant capture; we were very careful about setting up those protocols. We worked with a company called StatusPro, formerly Byte Cube. Lamar Jackson, the NFL’s 2019 MVP, came in and did a bunch of captures. We captured him volumetrically and with motion capture to support their NFL training and the video game they're creating.

So, a lot of interesting stuff, but some of the most interesting to me is the medical advancements. There's a company called Surgical Theater that we've been speaking to out of the DC area. The two guys who started it used to run the flight simulation program for the Israeli Air Force. They developed software that takes the MRI and CT scan you have to have whenever you have any kind of brain injury and creates a 3D image of the brain, so that surgeons can go in and rehearse their surgery in VR and also educate the patient about what's going on in their brain. Brain surgeons traditionally had to take 2D images and visualize them in their heads, but now the image is in 3D and they can see how they're going to perform the surgery. They never had that before, and that's fascinating.
