I’d like to address a few things that I think get forgotten, especially as the fog clears after NAB.
Sony announced the F65, an 8k camera with a Super 35mm-sized sensor. For those keeping score, Sony has confused us even more by naming their new camera F65 because it has 65mm film equivalent resolution (but with a 35mm sensor, not 65mm) to go with their F35 (which has a 35mm-sized sensor but outputs 1080p HD) and their F23 (which has a 2/3″ sensor that also outputs 1080p).
Confusing, but anyway.
Sony has generated a lot of buzz with this camera by leapfrogging RED to 8k. RED Digital Cinema has made chasing resolution into an art form. But will you see it on-screen?
ARRI made waves last year with the Alexa. They created a Super 35mm sensor camera that shoots (only!) 2k, and gets 13.5 stops of latitude (damn!). This was huge because 35mm film gets 13-14 stops and the original RED One sensor could get maybe 8-9 stops on a good day. A Canon DSLR is more like 6 stops at its standard settings. The overall resolution wasn’t nearly as important as the ability to gather light, and the demand for the Alexa has proved this. Film has remained a big player because of its unmatched light-gathering ability. Video just can’t compare.
So as I read the literature about the F65, I lost interest, which really amazed me at the time because I’m such a technical dork. Looking at the pipeline required to deal with 8k imagery, and then at how most people would actually view this footage, I just didn’t see the point. The vast majority of digital cinemas are HD or 2k. After that huge investment, it’s going to be quite a while before theater owners upgrade to 4k or beyond. Speaking with a few theater owners recently, I found that many are questioning their initial investment in digital and 3D.
As a reader of this blog, I’m sure you’ve owned or used a 35mm adapter. Examples include models by Letus and P+S Technik at the high end, and Jag35 at the low end. These were all the rage because, for the first time in history, you could capture the shallow depth of field of 35mm film on a “cheap” video camera. No $100k budget needed. The allure wasn’t the shallow depth of field in and of itself. Remember Citizen Kane? It was a huge hit (to film scholars at least) because it was the first movie to have DEEP depth of field. The appeal was that it was something video shooters couldn’t have. And now they could! But the most amazing thing about shallow depth of field was that you didn’t need a super-expensive big-screen TV or a multi-thousand-dollar surround-sound system to view it. You could watch it on the web. Yes, the impact would be greater on a big screen, but the visual aesthetic stayed intact, no matter the medium.
Shallow depth of field was a big deal because you could see it on any screen.
Going back to the Alexa and latitude, this reasoning holds as well. Whether you’re watching a video on the web or on a 60 foot screen, if there’s more detail in the shadows, you’ll see it. If more latitude results in a better color correction, you’ll see that too. More pure resolution doesn’t make the color better or the shadow details visible.
So I’ve devised a simple statement for dealing with the almost ridiculous amount of new technology that gets released:
What’s on screen is the only thing that matters.
This extends to the following questions: Will my audience notice or care? What’s the (cheapest, smallest, lightest, easiest, etc.) way to achieve this effect or goal?
When you start digging into these questions, you can really get creative. Here are a few examples from my own experience:
– Instead of paying for an expensive helicopter (or extremely unstable RC helicopter), try a hang glider. Or a really, really long lens from across the valley.
– A dolly is too damn big to carry around and rig up. What about a shorter track that goes on top of the tripod, like a Glidetrack? Will you get the same dolly effect on-screen?
– We can’t afford a bank of Kinoflos or softboxes. What about some $20 China balls? Once the lights are set, the viewer won’t be able to tell the difference.
– How about using natural light and bounces instead of recreating the evening news set with 5+ lights that are way too bright? Depending on your camera, a light kit based around a backpack full of 250s, 500s and collapsible bounces might make more sense than a rolling case of 1k’s (or ARRI SUNs, for that matter).
– If you’re using a jib, do you really need a motion head to get your shot? With a motion head you need the head, joysticks, monitor, power, etc. Maybe a wider lens and better jib movement planning would suffice? Would the viewer notice?
– A remote-controlled cable cam with a motorized head, wireless HD, batteries, etc., or a dude hanging from a zip line?
– Do you need a remote Technocrane when a scissor lift or cherry picker with an actual DP holding the camera will result in a better shot anyway? A remote head can be much more jerky than a person holding the camera.
– Does the ability to watch footage, color grade, and transcode on set result in a better shot? Or is it more gear, power and personnel that just gets in the way?
I could go on and on, but I think you get the point. All the technology in the world doesn’t matter if it doesn’t result in a noticeable difference on-screen. Go with the smallest, cheapest, lightest, easiest way to get the shot you see in your head. Sony’s F65 might be 8k RGB 4:4:4, but if your lighting sucks and your actor misses her cues, you might as well be shooting on your iPhone.