
“Later, in the restaurant…” – some notes on the making of a hi-speed 3D short.

Karel Bata - 'Later, in the restaurant...'

I shot Later, in the restaurant… using the Olympus iSpeed camera system while I was doing my MA in Stereo 3D at Ravensbourne College. I had met the Olympus guys at a Z Axis event I organised, and they offered to demo their rig and afterwards give us some hands-on time. I would have to live with a one-hour time slot…

The Olympus iSpeed 1000fps camera

The concept

This offered an unusual challenge – could I make a narrative sequence that in real time spanned only 3 seconds? I came up with two ideas:

Later concept 1 / Later concept 2

The dog would have been fun, but it may have been difficult to get a second take! The other setup offered some interesting narrative possibilities. In fact, as is often the case, things emerged in the editing – in this case an erotic undertow which, with the overt dominant/submissive element, implied a certain dynamic to the relationship that some folks may uncomfortably recognize…

Lighting

Lighting was an issue. I knew we would be shooting at 500 – 1000 fps, and with regular lights running on 50Hz mains we would see flicker. The filament of a tungsten light, as it heats and cools, flickers 100 times a second (twice for each mains cycle). Your eye won’t see this, but a camera running at 1000fps will. However, the bigger the lamp, the longer its filament takes to heat up and cool down, so the flicker is less pronounced. Generally a lamp of 10KW or more is regarded as ‘flicker free’ for high speed. There are other lighting solutions, like running constant voltage DC, but these were either expensive or impractical for us, and some don’t always behave as they should.
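To put numbers on that, here's a quick back-of-the-envelope sketch in Python (my own arithmetic, not something from the original shoot notes):

```python
# How many captured frames span one flicker cycle of a tungsten lamp
# on 50 Hz mains? The light ripples at 100 Hz (twice per mains cycle).
FLICKER_HZ = 100
PLAYBACK_FPS = 25          # assumed playback rate

for shoot_fps in (500, 750, 1000):
    frames_per_cycle = shoot_fps / FLICKER_HZ
    perceived_hz = FLICKER_HZ * PLAYBACK_FPS / shoot_fps
    print(f"{shoot_fps} fps: {frames_per_cycle:.1f} frames per flicker cycle, "
          f"pulsing at ~{perceived_hz:.1f} Hz on playback")

# At 1000 fps one bright-to-dim-to-bright cycle stretches across 10 frames,
# so in slow motion it shows up as a slow, very visible ~2.5 Hz pulse.
```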

The Ravensbourne TV studio was equipped with 1K and 2K lamps – which were of no use to us. But it did have a large three-phase outlet. We couldn’t afford 10Ks, but we could run three 5KW lamps off the three different phases. I had read (on CML – one of Geoff Boyle’s posts I think) that by doing so we’d effectively smooth out the flicker – the peaks and troughs from each phase happen at different times and would largely cancel each other out. Smart idea, and that’s what we did.

Later lighting setup - Karel Bata

However, if you look carefully at the final video you can still see flicker in the drops of water as they cross black and catch reflections from the different lights.
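Here's a quick numeric sketch of why the three-phase trick reduces the ripple without removing it completely. This is my own toy model – it ignores the filament's thermal inertia, which in reality smooths things much further:

```python
import numpy as np

# Toy model: instantaneous lamp output taken crudely as |sin| of the mains
# voltage. A real filament's thermal inertia smooths this a great deal
# (which is why big lamps flicker less), but the phase arithmetic is the same.
MAINS_HZ = 50.0
t = np.linspace(0, 0.02, 4000, endpoint=False)   # one 20 ms mains cycle

def lamp(phase_deg):
    return np.abs(np.sin(2 * np.pi * MAINS_HZ * t + np.radians(phase_deg)))

single = lamp(0)
three = lamp(0) + lamp(120) + lamp(240)

for name, sig in (("one lamp", single), ("three lamps on three phases", three)):
    ripple = 100 * (sig.max() - sig.min()) / sig.mean()
    print(f"{name}: peak-to-peak ripple ~{ripple:.0f}% of the mean")

# The 100 Hz and 200 Hz components cancel across the three phases; what
# survives is a much smaller 300 Hz residue - the faint flicker still
# visible in the water drops against black.
```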

The Shoot

Having only an hour meant being very prepared. Actors, props etc had to be ready to go. I spent some time with Holly Wilcox rehearsing spitting and she picked it up quickly. Joe Steel was a hero – who else would volunteer to be spat at? My eternal gratitude to him.

The first shot was at 1000fps. I wanted a slow build up and reveal. After that I would have to pace it up, so later shots were at 750fps then 500.

Later concept 3

The IA (interaxial distance) was 1 to 1.5 inches. In retrospect, given the black background, I would have made it bigger. In fact, in post that’s what I did. We shot parallel – having no background meant we’d lose nothing in post doing HIT (horizontal image translation), and good geometry was prioritised. It also made post easier.
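For anyone wanting to sanity-check an interaxial choice on a parallel rig, the arithmetic is simple. Here's a rough sketch – the focal length, sensor width and subject distances below are illustrative guesses, not the actual shoot values:

```python
# Parallel-rig disparity sketch. With parallel cameras, infinity lands on the
# screen plane and everything closer floats in front; HIT slides one eye
# sideways to move the convergence point to a chosen distance.
def disparity_px(ia_mm, focal_mm, distance_mm, sensor_w_mm, image_w_px):
    """On-sensor parallax of a point at distance_mm, converted to pixels."""
    parallax_mm = ia_mm * focal_mm / distance_mm
    return parallax_mm * image_w_px / sensor_w_mm

IA_MM      = 30.0    # ~1.2 inches, in the 1 - 1.5 inch range we used
FOCAL_MM   = 25.0    # assumed lens
SENSOR_MM  = 17.0    # assumed sensor width
IMAGE_W_PX = 1280    # 720 capture

near = disparity_px(IA_MM, FOCAL_MM, 2000.0, SENSOR_MM, IMAGE_W_PX)  # face, ~2 m (guess)
far  = disparity_px(IA_MM, FOCAL_MM, 3000.0, SENSOR_MM, IMAGE_W_PX)  # hand, ~3 m (guess)
print(f"near: {near:.1f} px   far: {far:.1f} px")

# HIT: shift one eye by the far subject's disparity to put it on the screen
# plane; the nearer subject then floats in front. With a black background
# there is nothing behind to crop or lose when you do this.
print(f"HIT shift: {far:.1f} px")
```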

The lights were bounced off large sheets of poly set at ¾ from behind, with another two sheets in front to provide fill. It got very warm!

There were three set-ups and we did two takes of each. The cameras recorded data to a cycling internal RAM buffer, much like a Phantom or FS700, which was then compressed and downloaded as an 8-bit BMP image sequence. At high speeds we could only record at 720. We over-ran our one-hour schedule by 10 minutes!
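That ‘cycling internal RAM’ is essentially a ring buffer: the camera records continuously, overwriting the oldest frames, and the trigger simply stops the overwriting so the last couple of seconds can be downloaded. A minimal sketch of the idea (obviously not the camera's actual firmware):

```python
from collections import deque

class RingRecorder:
    """Keeps only the most recent 'capacity' frames, like a high-speed
    camera recording into a fixed block of RAM until triggered."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def add_frame(self, frame):
        self.buffer.append(frame)        # the oldest frame silently drops off

    def dump(self):
        return list(self.buffer)         # what gets downloaded after the trigger

# 1000 fps into a buffer that holds 2 seconds of frames:
rec = RingRecorder(capacity=2000)
for i in range(5000):                    # 5 s of action, only the last 2 s survive
    rec.add_frame(f"frame_{i}")
print(rec.dump()[0], "...", rec.dump()[-1])   # frame_3000 ... frame_4999
```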

Post

Unfortunately something had gone wrong with the system, which everyone failed to spot. Playback from the cameras was OK, but the recorded BMP images were badly underexposed. We were gutted. Here’s a sample frame:

Original file quality 2

Our 8-bit system had effectively become 5-bit, with a lot of blocky noise lurking in the shadows.

This took a huge amount of effort to ‘fix’, as well as I could, in After Effects. Of great help were Red Giant’s Instant HD, Denoiser II, and Cosmo for resizing and fixing the noise, blockiness, and skin tones. To adjust the IA, RE:Vision’s RE:Flex motion morph worked really well. No dedicated 3D software was used.

I felt I needed more 3D. Warping a 3D image to decrease IA usually works reasonably well, but increasing IA often creates visible spatial distortions, especially in areas where objects occlude each other. Fortunately the subjects here were geometrically simple and set against a black background, and I’m very happy with the end result – I increased the IA by 50 to 80%. I can still see some global flaws when viewing the whole image and switching between L and R, but you’d have to be really sharp-eyed to spot them in a cinema, where you can only view a portion of the frame at a time.
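The actual warping was done with RE:Flex inside After Effects, but the underlying idea – multiply the existing disparities to simulate a wider interaxial – can be sketched in a few lines. This toy version assumes you already have a per-pixel disparity map, which is of course the hard part:

```python
import numpy as np

def scale_ia(right_eye, disparity, gain=1.6):
    """Toy IA expansion: re-sample the right eye so each pixel's horizontal
    offset from the left eye is multiplied by 'gain' (here +60%).
    right_eye:  (H, W) float grayscale image
    disparity:  (H, W) horizontal offset of each pixel relative to the left eye
    """
    h, w = right_eye.shape
    xs = np.arange(w, dtype=np.float64)
    out = np.empty_like(right_eye)
    for y in range(h):
        # pull each output pixel from a position scaled back along its
        # existing displacement, pushing content further apart horizontally
        src_x = xs - (gain - 1.0) * disparity[y]
        out[y] = np.interp(src_x, xs, right_eye[y])
    return out

# Occlusion edges are exactly where this breaks down: the stretched regions
# have nothing real to reveal, which is why the black background (and
# geometrically simple subjects) made a 50-80% increase workable here.
```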

One criticism I’ve heard is that there’s still not much 3D. This is interesting. In the final video the amount of 3D is precisely what’s needed to achieve the correct degree of ‘roundness’ in the subjects. Any more and they would appear stretched along the z axis. I think it’s because using a black background, with only the foreground subjects visible, limits the overall amount of 3D. If I’d shot against green and put in a background later (as I did in a video here) the image would contain more depth, and it would be perceived as deeper, but the depth of the subjects themselves would really be unchanged. This makes me wonder about audience expectations with 3D – is it that folks want or expect deep shots?

I’ve seen Later many times, and the 3D version really does add something. It separates out detail, especially with the water droplets, and adds a lot more life to the faces.

Here’s a glimpse of the AE workflow of just one shot. Some of those nodes are for dynamic masks to tweak areas that needed edge sharpening, softening, colour adjustment etc. Each shot needed a slightly different (and painstaking) approach.

After Effects Flowchart


Some problems with projection…

I did a test screening at the Brixton Ritzy cinema, which uses a RealD circular polarised system, and discovered two problems.

1 – With the titles converged on the screen plane against black, people told me the titles were 2D! An audience watching the film critically and seeing no 3D might initially think something had gone wrong. You don’t want this distraction. I fixed this by floating the titles slightly forwards.

2 – Ghosting. This is significant when using RealD. If the subject is placed behind the screen, as was the case in the first DCP I did, then the Left and Right images will be horizontally displaced on the screen plane when viewed with the glasses off. When the viewer puts the glasses on, each eye should see only one image and you get 3D. But the system is not perfect, so you will get a little cross-talk, and if there is a bright image against black then each eye will see a dim ghost of the other eye’s image in the dark areas.
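Crosstalk is easy to model: each eye sees its own image plus a small leak of the other eye's. Against black the leak has nothing to hide behind, which is why bright-on-black edges ghost so visibly. A toy illustration, with a made-up leakage figure:

```python
import numpy as np

LEAK = 0.03   # assume ~3% of the wrong eye leaks through the glasses (made up)

def perceived(own_eye, other_eye, leak=LEAK):
    return np.clip(own_eye + leak * other_eye, 0.0, 1.0)

# A bright feature (0.9) sitting on near-black (0.02), displaced between the eyes:
left  = np.array([0.02, 0.9, 0.02, 0.02])
right = np.array([0.02, 0.02, 0.9, 0.02])
print(perceived(left, right))   # the 0.9 from the right eye leaks in as ~0.05
                                # against the 0.02 black - clearly visible

# On the screen plane the two images coincide, the leak lands on top of the
# bright feature itself, and the ghost disappears - which is the fix below.
```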

Take the image below. I had placed the eyes on the screen plane, with the cheek on the left slightly behind. The effect was not so much ghosting along this edge, but an apparent de-focussing – the edge appeared to lack sharpness. But then, no one else seemed to see it.

Joe Angelo Steel

The cure was to re-converge the image and bring it forwards, so the high contrast edge causing the problem was on the screen plane and there was no double-imaging. For shots where the subject is moving I had to track the image depth and create dynamic convergence. With a black background and no visual cues to tell you, this is imperceptible.
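Dynamic convergence is just an animated HIT: track the screen disparity of the feature you care about and shift one eye by that amount on every frame. The tracking and shifting were done in After Effects here – below is only a bare-bones sketch of the idea:

```python
import numpy as np

def dynamic_hit(right_frames, tracked_disparity_px):
    """Shift each right-eye frame so the tracked feature lands on the
    screen plane (zero disparity) for that frame.
    right_frames: list of (H, W) float arrays
    tracked_disparity_px: one tracked disparity value per frame."""
    out = []
    for frame, d in zip(right_frames, tracked_disparity_px):
        shift = int(round(d))
        shifted = np.roll(frame, -shift, axis=1)
        if shift > 0:
            shifted[:, -shift:] = 0.0     # fill the exposed edge with black
        elif shift < 0:
            shifted[:, :-shift] = 0.0
        out.append(shifted)
    return out

# With a black background, both the edge fill and the gradual convergence
# change are invisible to the audience.
```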

Of note is that when the first (uncorrected) DCP was projected at Beyond3D in Karlsruhe, a Dolby 3D system was used. This has very little cross-talk, so no ghosting was perceptible.

Later, in the restaurant… is now doing the 3D film festival circuit and, I’m pleased to say, getting an excellent reception.

The IMDB page is here: http://www.imdb.com/title/tt4318828/
