We’ve been gearing up to get CS5 running ever since we moved from Leopard to Snow Leopard to take advantage of the 64-bit architecture and improve our workflow.
But our VFX artist, Jakub Kuczynski, was concerned that the stereo 3D scripts he’d found online – scripts that have given him a much more efficient stereoscopic pipeline in After Effects – wouldn’t transfer over smoothly to CS5.
He contacted the scripts’ developer, Christoph Keller, to ask if they’d be compatible, but even Keller didn’t know.
Now we do. And it’s very good news – work that would take Jakub a day to do manually takes him about an hour, thanks to the scripts.
As for the CS5/Snow Leopard upgrades, we haven’t noticed a marked increase in speed, but even a little more juice over the long run means more efficient post production overall.
The issue isn’t technical – Alister Chapman reports using the same cameras successfully, and we were able to genlock the EX3 to the EX1 by connecting the Y channel of the EX1’s component output to the EX3’s genlock-in connector, just as he did.
It’s logistical … the cameras are just too big and cumbersome for this particular beam splitter rig.
We’ve modified the rig so they fit better, but getting them aligned vertically is rough – the mics protrude and we’re still seeing the edge of the box and/or the bottom of the mirror when we use our Sony EX 5.8 mm lens (which has a 56-degree horizontal angle of view).
Wide shots are a must for Blowdown, the explosive demolition documentary we’ll be filming, so we need a system that can capture this kind of footage effectively – i.e. we need to hit the sweet spot on the mirror, keep the cameras vertically aligned, and not see the rig when we use wide-angle lenses.
The alternative is enlarging the image in post to eliminate the part(s) of the shot that contain the rig, but that will degrade the quality, so I want to avoid this (especially since we’ll already be blowing the footage up to a certain degree to facilitate convergence).
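For a rough sense of that cost, here’s the back-of-envelope math, assuming convergence is set by shifting each eye’s image horizontally and then scaling about the centre to hide the blank edges (illustrative numbers, not measurements from our footage):

\[
s = 1 + \frac{2d}{W}
\]

where d is the per-eye horizontal shift in pixels, W is the frame width, and s is the minimum blow-up. A 20-pixel shift on a 1920-pixel frame already forces s ≈ 1.02 – a 2% enlargement – and any further blow-up to crop out the rig multiplies on top of that.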
So, now we’re working with Canon Canada directly to get loaners of the XF305, which has just been released.
Pros
1) Relatively affordable: it rings in at $3,895 US plus shipping and handling (we also bought an extra mirror in case the first one smashes in the field. We don’t think we’ll find a second lying around at the condemned sports stadium slated for explosive demolition in Salvador, Brazil, where we’re going to be filming, and shipping one in would surely be a nightmare).
2) Robust but adjustable: the rig’s sturdy, so I think it will stand up well in the field. But luckily the structure isn’t rigid – we’ve had some difficulty lining up our Sony EX1 and EX3 properly (see: large and cumbersome … it’s a problem), so we’ve disassembled the rig, manually repositioned parts, and tightened them back down to accommodate the cameras.
Specifically, the aluminum rails are locked in with screws that can be loosened, adjusted, and locked back in. Without this flexibility we’d have no hope of effectively adjusting the heights of the cameras relative to the base rail (which we’re still working on. Argh).
Cons
1) Manual operation: It doesn’t have all the fine, automated controls that the higher-end feature film rigs do. It lacks motorized components, so factors such as interaxial distance and convergence have to be adjusted manually. In feature film production, it’s often one person’s job just to operate the remote to make these adjustments on automated units.
2) Heavy load: This is just something the crew’s going to have to get used to – 3D shooting demands so much more gear. But it’s still a downer. Total tally: the rig, a tripod, two mid-sized cameras, the nano3D recorder, the Transvideo Cineform 3D Monitor, all the sync cables, and batteries. We think it may take two extra bodies in the field just to move all of these components around.
And the cameras are a whole other story … more to come on the Sony EX1/EX3 issue.
And how to turn them on without knocking one (or both) out of alignment.
The cameras need to sit at a 74 mm interaxial distance, right next to each other, for us to capture the footage we need.
This means they’ll be positioned too close together for us to easily access the viewfinder on the right camera, where the camera controls are.
Since each camera comes with a remote, we tried using the remotes to adjust the settings on each one (holding two, trying to point each at the infrared sensor on its respective camera), but it’s cumbersome and awkward.
It’s a problem: we need four elements to be in sync between the two cameras before we start recording for these shots to work: the same zoom, the same white balance, the same exposure, and the same focus.
The risk of losing one or more implosion shots – the big bang footage that climaxes the show – because the crew’s running around like mad, trying to calibrate and power on these 18 cameras while preserving their alignment, is one I’m not willing to take.
So our stereographer, Sean White, came up with a workaround – a home-made infrared transmission system that lets us control both cameras at the same time.
Using components sourced off the Internet, he built a box that receives any infrared signal and retransmits it through a split cable to emitters at both cameras’ infrared sensors.
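To show the logic – this isn’t Sean’s actual build; his box came from sourced hardware, and the pins, parts, and timings below are our assumptions – here’s a minimal Arduino-style sketch of such an IR repeater: a demodulating receiver module picks up the remote, and the box re-emits the bursts on a modulated carrier through two IR LEDs at once:

```cpp
// Hypothetical sketch of the IR splitter box idea, not the actual unit.
// Assumes a demodulating IR receiver module on pin 2 (output pulls LOW
// while a burst is present) and one IR LED taped over each camera's sensor.

const int RECV_PIN = 2;  // IR receiver output
const int LED_A    = 3;  // emitter over the left camera's IR sensor
const int LED_B    = 4;  // emitter over the right camera's IR sensor

void setup() {
  pinMode(RECV_PIN, INPUT);
  pinMode(LED_A, OUTPUT);
  pinMode(LED_B, OUTPUT);
}

void loop() {
  // While the receiver sees a burst, re-emit carrier on both LEDs.
  // Sony remotes use a roughly 40 kHz carrier (~25 us period); the delays
  // are approximate because digitalWrite itself eats a few microseconds.
  while (digitalRead(RECV_PIN) == LOW) {
    digitalWrite(LED_A, HIGH);
    digitalWrite(LED_B, HIGH);
    delayMicroseconds(10);
    digitalWrite(LED_A, LOW);
    digitalWrite(LED_B, LOW);
    delayMicroseconds(10);
  }
}
```

One button press, and both cameras see the identical command at the same instant – which is the whole point.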
The mission: to see if the system can capture green screen footage for our first 3D documentary the way we want it to.
Stereographer Sean White mans the 3D beam splitter rig
We need these shots to create several of our in-house visual effects, a style we prefer to classic documentary CGI because it allows us to explain extremely technical concepts in a photo-real atmosphere.
This means our transitions in and out of our footage are much more seamless … viewers can stay more immersed in the environment and focused on the story.
Controlled Demolition Inc. President Mark Loizeaux outlines his demolition plan
The green screen footage we’ll need to create effects like this comes with an entirely different set of issues than the field shots we’ll have to tackle.
This environment is the most “studio” our event-based filming gets – the interviews aren’t scripted, but the lighting is set, the frame is stationary, and there’s opportunity for multiple takes.
But what we capture has to work in our compositor’s virtual environment or it’s completely useless.
Jakub Kuczynski, Parallax Film’s VFX artist, details these challenges:
We’ve thrown the footage over to post – we’ll see if it flies.
As you can imagine, these cameras will take a serious beating – riding the building down, sitting in the centre of the field as the stadium crashes to the earth, etc.
For these POVs, we’re going with six Canon Vixia HF10s (three pairs) – the V cam systems.
These little cameras have survived the ultimate Parallax Film Productions 2D challenge – riding the Hoyt S. Vandenberg, now the second-largest artificial reef in the world, some 30 metres from the surface to the ocean floor when the vessel was sunk off the coast of Florida in May 2009.
I’ve thrown in a few screen grabs of the ride – watch the full episode trailer here.
Six Vixia HF10s in our custom-built underwater housings went down – six solid-state, high-capacity SDHC cards survived, and we recovered all of the footage.
Because this system is flash-based, its memory is relatively robust.
For our purposes, this means the cameras have a better chance of surviving the violent vibrations and flying debris that come with the implosions we cover. No tape heads to knock out of alignment, no moving mechanical parts to malfunction.
These three pairs will be mounted on small rails with a 74 mm interaxial distance.
Our M cam systems will also be positioned at strategic points throughout the implosion perimeter to capture key demolition engineering story points (and, of course, rocking, gratuitous destruction).
The lenses for our A cams have to:
1) Be designed for a 1/3-inch sensor (specifically, for the Iconix models we’ve purchased – lenses designed for a 2/3-inch sensor leave us with a cropped image);
2) Have HD resolution AND high-quality sharpness (the latter was what the Fujinon 2.8 mm and 4 mm lenses, generally used for security/surveillance systems, ultimately lacked);
3) Be a wide-angle lens that allows us to film 1 ½ to 2 metres away from our subject without having the background diverge – a cornerstone rule of 3D production (see the worked numbers after this list).
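The divergence rule is easy to put numbers on. With interaxial distance b, focal length f, convergence distance C and a distant background, the on-screen parallax is roughly

\[
P_\infty \approx \frac{f\,b}{C}\cdot\frac{W_\text{screen}}{W_\text{sensor}} \le t_e \approx 65\ \text{mm},
\]

where the second factor is the screen magnification and t_e is human eye spacing – push the background parallax past t_e and the audience’s eyes are forced to diverge. As an illustrative case (assumed numbers, not our shooting specs): f = 5.3 mm, b = 40 mm, C = 1.5 m, a 1/3-inch sensor about 4.8 mm wide and a 1.5 m-wide screen give P∞ ≈ (5.3 × 40 / 1500) × (1500 / 4.8) ≈ 44 mm – safe. Widen b to the minimum spacing a bigger camera head forces and the same shot sails past 65 mm, which is exactly the problem described below.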
Amazingly, it appears that there isn’t a lens on the market anywhere in the world that satisfies these criteria.
Well, why not just switch to a 2/3-inch sensor system, then?
Here’s the issue: we chose the 1/3-inch system because the 2/3-inch camera systems have a beefier head, which means the lenses would have to be mounted further apart.
This would push our interaxial distance a little wider than we ideally want for these relatively close-up shots, a must for Blowdown, the explosive demolition series we’re going to film.
I’ve ordered the closest thing we can find – two Schneider Cinegon 5.3 mm lenses – from New York.
They’re designed specifically for a 1/3-inch sensor, and they apparently deliver better image quality than the Fujinons – but they don’t resolve in HD.
We’ll have to test them and see if the footage makes the cut.
And while they’re in transit, our search for the ultimate A cam lenses carries on.
Yesterday I posted video of Sean – today, Rory’s in the spotlight (well, not really … he’s in our production house, working on gear, with no elaborate lighting. But he’s on his game):
3D HD growing pains: what they mean for event-based filmmakers
3D gear and data management: must-knows for event-based filmmakers
I’ve turned to the Sony EX line to shoot B cam for our first 3D documentary, after we discovered that the two Canon 7Ds we planned to use can’t send an HDSDI signal to our Transvideo Cineform 3D Monitor.
Two Sony EX3s seemed the intuitive choice, since this model has genlock-in capability.
Our stereographer, Sean White, hit the blogosphere to see if there was any way to lighten the load. He found a lead on DoP Alister Chapman’s blog.
It looks like we can pair one Sony EX3 with a Sony EX1: the EX1 lacks a genlock-in connector, but according to Chapman only one of the cameras needs it … we can send sync from the EX1 into the EX3 and then send both cameras’ signals to the monitor.