Sean speaks about the implosion of the Fonte Nova Stadium in Salvador, Brazil, that we'll be filming for the explosive demolition series Blowdown, how 3D event-based documentary filmmaking differs from 3D feature film production, and how we've rigged gear, planned data management and hammered out a workflow taking these differences into consideration.
“The tools exist for making feature films like Avatar, or we see a lot of animated 3D films, but now with this emergence of 3D television there’s a thirst and a demand for content but the tools don’t quite exist to be able to go out there and do filmmaking like we’re used to doing it for television.”
“Together with a team here in Victoria we’ve actually taken other cameras and sourced components from other places and assembled them in a way that certainly in B.C. and Canada and probably in the world these systems don’t exist.”
“Our hope is that as 3D becomes more prolific that a Canadian broadcaster will catch on to this stuff … it’s more than the future of television, this is the future of how we consume content. Whether it’s on the Internet or on TV, this is what future generations are going to be demanding and we’re on the forefront of creating it for television.”
Even though we’re going to shoot our first 3D documentary entirely in the third dimension, our visual effects will still be generated from stills.
Our compositor, Jakub Kuczynski, has been converting 2D VFX from previous episodes of the explosive demolition series Blowdown as he preps to work with photos from the implosion of the Fonte Nova Stadium in Salvador, Brazil.
He’s working in 3D space in Adobe After Effects. To convert a shot, he:
1) Generates a second camera in the 3D space, so there’s a camera for the left eye and the right eye.
2) Positions the second camera, taking into account the same factors as shooting on location: how close objects are to the camera, etc.
3) Goes through the shot, keyframing convergence depending on where the camera is in relation to objects.
4) Creatively decides where the depth cues will work best.
5) Renders both eyes.
6) Hands the shot over to edit, then checks it out on our polarized monitor.
In short: go through the shot, keyframe the convergence, and spit out a left eye and a right eye.
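The steps above boil down to some simple stereo geometry. Here's a minimal sketch of that math, assuming parallel left/right cameras, a pinhole model and convergence applied as a horizontal image shift – all the numbers and the function itself are illustrative, not part of the actual After Effects pipeline:

```python
# Illustrative stereo math behind steps 1-5 above.
# Assumptions (not from the production pipeline): parallel left/right
# cameras, a pinhole model, and convergence applied as a horizontal
# image translation. All parameter values are hypothetical.

def screen_parallax_px(interaxial_mm, focal_mm, sensor_width_mm,
                       image_width_px, object_dist_mm, convergence_dist_mm):
    """Horizontal parallax of an object after converging on a chosen plane.

    Positive values play behind the screen, negative values in front of it.
    """
    px_per_mm = image_width_px / sensor_width_mm
    # Disparity of the object for parallel cameras: b * f / Z
    disparity = interaxial_mm * focal_mm / object_dist_mm
    # Shift the eyes so the convergence plane lands at zero parallax
    shift = interaxial_mm * focal_mm / convergence_dist_mm
    return (shift - disparity) * px_per_mm
```

Keyframing convergence (step 3) amounts to re-solving this as the camera-to-subject distance changes through the shot, so the subject stays where the stereographer wants it relative to the screen plane.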
It’s great to see some of the shots of 2D past converted into 3D – prominent, big and impressive.
As I’ve mentioned, the 3D files that Cineform creates only have two audio tracks.
To capture ambient noise as well as a conversation between two subjects for the explosive demolition show Blowdown, we have to capture at least three channels (a boom mic and two lavs), sometimes four (camera mic).
Our editor, Brian Mann, has been in conversation with Cineform developers to see if we could find a way to edit with more than two channels.
They’ve been very prompt in replying and helpful.
But unfortunately it looks like there’s no way to edit more than two channels of audio using the current version.
There’s no particular reason why the program’s this way – it’s just a design factor that isn’t optimal for our specific post production needs.
As far as we’re concerned it’s the best high-end game in town, and otherwise it’s working great.
Cineform’s lead Mac engineer plans to put multi-channel audio support on the list for their upcoming release, First Light.
In the interim, we’ll have to figure out how to adjust our workflow.
The reason we’re trying this program is that it allows for dual stream, which means each eye plays back at full resolution in real time, so we can do convergence, colour correction and other editing in real time rather than having to render whenever we make an adjustment.
Cineform also works with Final Cut Pro, the editing software we normally use to cut 2D HD; as far as we know, there’s no way to edit 3D in FCP without a third-party program.
Great for picture, but there’s a problem with audio.
The 3D files that Cineform creates will only have two audio tracks.
To capture ambient noise as well as a conversation between two subjects, we have to capture at least three channels (a boom mic and two lavs), sometimes four (camera mic).
And since Blowdown – the explosive demolition series we’ll be filming – is event-based, there’s no opportunity for ADR, and you can’t recreate most of the ambient sound in post.
It’s logistical … the cameras are just too big and cumbersome for this particular beam splitter rig.
We’ve modified the rig so they fit better, but getting them aligned vertically is rough – the mics protrude and we’re still seeing the edge of the box and/or the bottom of the mirror when we use our Sony EX 5.8 mm lens (which has a 56-degree horizontal angle of view).
The alternative is enlarging the image in post to eliminate the part(s) of the shot that contain the rig, but that will degrade the quality, so I want to try and avoid this (especially since we’ll be blowing the footage up to a certain degree already to facilitate convergence).
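For a rough sense of the trade-off, here's some back-of-envelope arithmetic (the numbers are hypothetical, not measurements from our rig): how much an image has to be enlarged so a horizontal convergence shift leaves no blank edge in frame, versus the much larger blow-up needed to crop out a visible rig edge:

```python
# Back-of-envelope sketch; all values are illustrative, not rig measurements.

def blowup_factor(image_width_px, unusable_px):
    """Scale factor that fills the frame after losing unusable_px pixels
    of width, whether to a horizontal convergence shift or to cropping
    out an intruding rig edge."""
    return image_width_px / (image_width_px - unusable_px)

# A 30 px convergence shift on 1920-wide footage costs under 2%...
convergence_blowup = blowup_factor(1920, 30)
# ...while cropping a 200 px rig edge out of frame costs over 11%.
crop_blowup = blowup_factor(1920, 200)
print(convergence_blowup, crop_blowup)
```

The point of the sketch: the convergence blow-up is nearly free, while scaling past a mirror edge visibly softens the image, which is why fixing the framing on the rig itself is worth the trouble.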
So, now we’re working with Canon Canada directly to get loaners of the XF305, which has just been released.
2) Robust but adjustable: the rig’s sturdy, so I think it will stand up well in the field. But luckily the structure isn’t rigid – we’ve had some difficulty lining our Sony EX1 and EX3 up properly (see: large and cumbersome … it’s a problem), so we’ve disassembled the rig, manually repositioned parts, and tightened them to try and accommodate the cameras.
Specifically, the aluminum rails are locked in with screws that can be loosened, adjusted, and locked back in. Without this flexibility we’d have no hope of effectively adjusting the heights of the cameras relative to the base rail (which we’re still working on. Argh).
1) Manual operation: It doesn’t have all the fine, automated controls that the higher-end feature film rigs do. It lacks motorized components, so factors such as interaxial distance and convergence have to be adjusted manually. In feature film production, it’s often one person’s job just to operate the remote to make these adjustments on automated units.
2) Heavy load: This is just something the crew’s going to have to get used to – 3D shooting demands so much more gear. But it’s still a downer. Total tally: the rig, a tripod, two mid-sized cameras, the nano3D recorder, the Transvideo Cineform 3D Monitor, all the sync cables, and battery. We think it may take two extra bodies in the field just to move all of these components around.
And the cameras are a whole other story … more to come on the Sony EX1/EX3 issue.
And how to turn them on without knocking one (or both) out of alignment.
The cameras need to sit at a 74 mm interaxial distance, right next to each other, for us to capture the footage we need.
This means they’ll be positioned too close together for us to easily access the viewfinder on the right camera, where the camera controls are.
Since each camera comes with a remote, we tried to use them to adjust the settings on each one (holding two, trying to point each one at the infrared sensor on its respective camera), but it’s cumbersome and awkward.
It’s a problem: we need four elements to be in sync before we start recording for these shots to work: the two cameras have to have the same zoom, the same white balance, the same exposure, and the same focus.
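That pre-roll check is simple enough to sketch. Here's a minimal illustration in Python – the setting names and values are hypothetical, not a real camera API – of comparing the four settings that must match between the two cameras:

```python
# Illustrative pre-roll sanity check. The field names and values are
# hypothetical; real camera settings would come from the operator's log,
# not an API on the EX1/EX3.

SYNC_KEYS = ("zoom", "white_balance", "exposure", "focus")

def mismatched_settings(left, right):
    """Return the settings that differ between the two camera dicts."""
    return [k for k in SYNC_KEYS if left.get(k) != right.get(k)]

left_cam  = {"zoom": 1.0, "white_balance": 5600, "exposure": -0.5, "focus": 3.2}
right_cam = {"zoom": 1.0, "white_balance": 5600, "exposure": -0.5, "focus": 3.5}
print(mismatched_settings(left_cam, right_cam))  # → ['focus']
```

A mismatch on any one of the four ruins the stereo pair, which is why sending the same infrared command to both cameras at once (below) beats juggling two remotes.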
Losing one or more implosion shots – our big bang footage that climaxes the show – because the crew’s running around like mad, trying to calibrate and turn these 18 cameras on properly while preserving their alignment, is a risk I’m not willing to take.
So our stereographer Sean White discovered a work-around – a home-made infrared transmission system that allows us to control both cameras at the same time.
Using components sourced off the Internet, he built a box that receives any infrared signal and transmits it through a split cable to two infrared sensors.
Stereographer Sean White mans the 3D beam splitter rig
We need these shots to create several of our in-house visual effects, a style we prefer to classic documentary CGI because it allows us to explain extremely technical concepts in a photo-real atmosphere.
This means our transitions in and out of our footage are much more seamless … viewers can stay more immersed in the environment and focused on the story.
As you can imagine, these cameras will take a serious beating – riding the building down, sitting in the centre of the field as the stadium crashes to the earth, etc.
For these POVs, we’re going with six (three pairs) of Canon Vixia HF 10s – the V cam systems.
These little cameras have survived the ultimate Parallax Film Productions 2D challenge – riding the Hoyt S. Vandenberg, now the second-largest artificial reef in the world, some 30 metres from the surface to the ocean floor when the vessel was sunk off the coast of Florida in May 2009.
I’ve thrown in a few screen grabs of the ride – watch the full episode trailer here.
Six Vixia 10s in our custom-built underwater housings went down – six solid-state, high-capacity SDHC cards survived, and we recovered all of the footage.
Because this system is flash-based, its memory is relatively robust.
For our intents and purposes, this means they have a better chance of surviving massive vibrations and debris that come with the massive implosions we cover. No tape heads to fall off, no moving mechanical parts to malfunction.
These three pairs will be mounted on small rails with a 74 mm interaxial distance.
Our M cam systems will also be placed at strategic points throughout the implosion perimeter to capture key demolition engineering story points (and, of course, rocking, gratuitous destruction).