
Mind The Gap (aka 3DTango)


A 3D short, currently doing the festival circuit, inspired by Zbigniew Rybczyński’s Tango.
I saw Tango at the Annecy Animation Festival. He’d cleverly composited multiple layers to create the illusion of an impossible number of people in a room:


During my MA in Stereo 3D at Ravensbourne College, I created a version taking this idea into another dimension – 3D Tango. At one point there are 16 layers of green-screen 3D image, each with a left and a right master. That was a lot of work in post!

Featuring Daisy Batova, Alfie Albert, and Helena Kuntz.


Previz stills and compositing screenshots from 3D Tango.

Mind The Gap has so far screened at:

3DKIFF, Seoul, S. Korea, 28 October 2016
LA 3-D Movie Festival, Los Angeles, 8 December 2016
3D Stereo MEDIA, Liège, 14 December 2016
SD&A, San Francisco, 31 January 2017


 

.


Some Common Errors in Stereo 3D

Alfie Albert at Baker Street station
“Now, where’s that 3rd dimension gone?”

Here’s a freebie download from me! This is a video I created as a teaching aid when I taught Stereo 3D to MA students and staff at Ravensbourne College.
There are nine short clips, each with a different 3D error. I played the video to the students, and then asked them to identify the error in each clip.

You can view it on YouTube here: https://youtu.be/VVGAlOLLiJo


And you are free to download the original HD file here:
https://mega.nz/#!Ssog3SQB!-JMxuW_az3njNEQRiyca3-Ru28euVC7RdIUWhcpB4ic (320MB)
Note: to download click the BLUE TEXT beneath the red button!

This was recorded with two Canon 105s in SBS mode, converged about 8 ft away, with an IA of 4″.

Here’s what I did in the classroom:

I first show ① ‘Raw’ and explain that this is straight from camera, but I’ve made some minor vertical and horizontal shifts to line the images up, and a small zoom in to eliminate cropping. Can the students spot anything else that may still need correcting?
I pause/repeat the video while they consider. They’re unlikely to see anything relevant, but it depends on who they are.
I then show ② to ⑨ saying that I have introduced some kind of error in each, and ask if they can spot what it may be.

The errors are:
① Raw (no errors added)
② Color differences between L & R
③ One eye soft
④ Vertical disparity
⑤ Out of sync
⑥ Rotational disparity
⑦ Zoom disparity
⑧ L & R reversed
⑨ Raw (same as 1)

I then ask them to look more closely at ⑨ and see if they can spot any errors they missed before.
There are at least three!


The actor is the most excellent Alfie Albert
Video extract is from ‘3DTango’

*and please DON’T remove my name from the video!

.

Happy When It Rains


The First Shoot

This 3D short, a faux music video using the immortal track by Garbage, was my first attempt at some serious 3D. It started life as a test to see how well a pair of Canon 105s performed on a Genus Hurricane 3D Rig (everything was fresh out of its box), courtesy of the inventor, tornado-chaser Alister Chapman, to whom I am eternally grateful.

Hurricane Rig with Canon 105s

This was during that whole post-Avatar wave of enthusiasm – and I was indeed most enthused. I had put together a Directors’ Guild 3D event, out of which grew The Z Axis, a networking organization for 3D professionals. Things were buzzing. Alister then asked if I could get him a venue to demo the Hurricane / 105 combo. In return I asked to use the rig for one hour in an adjacent theatre and shoot something.

I could have shot test charts, and folks at different distances, but I wanted to do something practical: hand-held and moving. And fun. A music video clip was a do-able challenge, but one hour (!) would mean some very solid prep.

I was lucky to have a clutch of Ravensbourne Stereo3D MA students eager to help. My daughter Daisy was keen to perform in front of camera, and Alister would be on hand to make sure everything was plugged in OK. There were things I wanted to test. Fast-cutting is supposedly a no-no in 3D (according to some industry experts) and I wanted to push that. And there were things to test in post. Various plug-ins, such as ToonIt and VideoGogh, can create some very funky effects, but they are designed for 2D. Small changes in the source image can create big differences in the rendered result, so Left and Right sources could yield results that wouldn’t fuse properly and thus lose the 3D. I wanted to see how far I could push this.

Red Giant ToonIt

Lighting

The theatre we shot in was nice, but a bit dingy. I could have re-lit the whole area, but didn’t have time and wanted a simpler solution that would still lift the material. I’ve always liked using ring lights (very 80s!) as they create a controllable, glamorous look, which would be in keeping with the shoot’s intended style. But I had no budget to hire one, so I had to make one. I found a tutorial, DIY Ring Light (YouTube), which looked very doable. However, it used a modified toilet seat(!), which was bulky and would have to be tripod mounted, and – to be blunt – I thought a toilet seat wasn’t really very cool. It is important to look cool on set, isn’t it? So I figured I could do better, and made one from an aluminium bicycle wheel rim. It was light enough to hand-hold, and had two separate dimmable circuits. It did a fantastic job. Another couple of lights on stands provided back-lighting.

For this CU some lights have been removed to create more modelling

The results were good. The Canon 105s performed well, and I would go on to use them several more times. There was some noise in the shadows, but Red Giant Denoiser got rid of that. The images were slightly misaligned (in 3D they always are) but the Hurricane was a rigid mount and the results were rock solid. There was a bit of keystoning, but no real distortion to worry about. Or at least the distortions matched! I was impressed, and decided to continue this 3D experiment and shoot some more.

The Second Shoot

Did I mention I had no budget? But I did have a pair of Pentax Optio WS80 cameras. These give a surprisingly nice picture (noisy, but I could fix that) but didn’t run in sync, and the only way I had of mounting them was on a simple sliding camera plate. I attached this to a monopod which I hand-held, and clamped a weight to the bottom to add stability. The minimum practical IA was about 2 inches, aligning the cameras was tricky and never spot on, and they were inclined to move a little with use! These problems meant being clever…

3D rig using two Pentax Optio WS80s

Two cameras running freely will run at slightly different frame rates, and will drift in and out of sync over time. Most of the time they’ll be out of sync. How much of a problem that is depends on how much error is acceptable, and that is determined by how much movement there is in frame – if you’re shooting a very slowly moving subject the error is insignificant; something fast and you have a problem. Additionally, there’s no way of knowing just how out of sync they are at any given moment.
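To put rough numbers on this, here’s a minimal Python sketch (my own illustration – the frame rate, clock error and subject speed are assumed values, not measurements from this shoot) of how a small clock difference makes the pair drift through sync cyclically, and how even a fraction-of-a-frame offset turns fast lateral movement into a spurious horizontal disparity:

```python
# Assumed numbers, for illustration only.
NOMINAL_FPS = 25.0          # both cameras claim this rate
CLOCK_ERROR_PPM = 50        # assumed difference between the two camera clocks

fps_a = NOMINAL_FPS
fps_b = NOMINAL_FPS * (1 + CLOCK_ERROR_PPM / 1_000_000)

frame_period = 1.0 / NOMINAL_FPS                             # ~40 ms per frame
drift_per_second = abs(1 / fps_a - 1 / fps_b) * NOMINAL_FPS  # offset gained per second of footage

# The pair drifts through a whole frame period and back into sync cyclically
seconds_per_frame_of_drift = frame_period / drift_per_second

# Whether the offset matters depends on subject movement: a temporal offset
# turns lateral motion into a spurious horizontal disparity between L and R.
subject_speed_px_per_frame = 20              # assumed fast lateral movement
worst_case_offset = frame_period / 2         # the offset can be anywhere up to half a frame
spurious_disparity_px = subject_speed_px_per_frame * (worst_case_offset / frame_period)

print(f"Drift cycles through one full frame every {seconds_per_frame_of_drift:.0f} s")
print(f"Worst-case spurious disparity on a fast subject: ~{spurious_disparity_px:.1f} px")
```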

Add to all that the enormous IA with this rig – the distance between the lenses – and you couldn’t have a background more than a few feet away.

There are a couple of ‘fixes’ for the timing issue. You can shoot several takes and assume that at least one will give you something close enough – and hope it’s a good take! This requires patience on everyone’s part – fortunately Daisy’s got tons – and it isn’t at all foolproof. Another trick is to make subject movement be towards and away from camera, rather than to the side: movement along the lens axis produces far smaller image shifts than lateral movement of the same apparent speed, so sync errors are much less apparent.

The background issue was solved by hanging a black cloth which was out of focus. Lighting made it look like smoke.

Tests showed me that relative camera shake and drift (how different the images were from each other, and how that varied over time) caused by vigorous camera moves was surprisingly severe. The brain interprets those variations as fluctuating depth, which undermines the overall 3D and gives viewers a headache! This would have to be fixed in post, and to help with that I asked Daisy to look at the camera constantly. The ring light was clearly visible in her eyes, and this gave me something on which to later lock After Effects’ motion stabilizing tools. It also created a stylistic element that worked well. So, once stabilized, the two images would be locked together, and then the movement from one eye (which had been keyframed, copied, and removed) was applied to both. That way the two cameras ended up with identical movements. Unavoidable misalignment meant the image would have to be cropped later, so shooting a little wide anticipated this.
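For what it’s worth, here’s a minimal numpy sketch of that idea (my own illustration, not the actual After Effects workflow – the tracked positions below are made-up numbers, and it handles translation only, whereas a real stabilise would also deal with rotation and scale): stabilise each eye on the tracked catchlight, then re-apply one eye’s original motion to both, so the residual movement is identical in L and R.

```python
import numpy as np

# Hypothetical per-frame (x, y) positions of the ring-light catchlight,
# tracked independently in the left-eye and right-eye footage.
track_left = np.array([[960, 540], [963, 538], [958, 541], [961, 539]], float)
track_right = np.array([[940, 542], [944, 539], [938, 544], [942, 540]], float)

anchor_l, anchor_r = track_left[0], track_right[0]

# Step 1: stabilise - per-frame translations that pin each eye's tracked point
stab_left = anchor_l - track_left
stab_right = anchor_r - track_right

# Step 2: re-apply ONE eye's original hand-held motion to BOTH eyes, so the
# residual movement is identical in L and R and can't read as fluctuating depth
reapplied = track_left - anchor_l

offset_left = stab_left + reapplied     # zero: the left eye keeps its own motion
offset_right = stab_right + reapplied   # the right eye now follows the left eye

print(offset_left)
print(offset_right)
```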

Left and Right images’ alignment could be drastically out.

Time-Lapse

3D Time-Lapse material was shot using a pair of Canon 600Ds with bracketed exposures, and tone-mapped using Photomatix.

Karel Bata - The Shard

Post

All post was in After Effects CS6 using Red Giant’s Denoiser II, ToonIt, and Particular (for the titles), RE:Vision’s VideoGogh, Fixel’s ALCE and Detailizer, Dashwood 3D Lite, and several plug-ins that come with After Effects. It. Was. Fiddly.

It was a challenge to keep the edit interesting – after all, it has two minutes of someone singing straight to camera.

Every shoot has its own problems, and my solutions here were specific to what was in front of me; they wouldn’t work for every shoot. Importantly, this was a music video, and you can get away with murder – after all, who was to know what I really meant to do?


.

Goodbye To Language 3D – a review

Cutting edge 3D

Well, I’ve been asked to give my opinion of this film, so…

Godard was doing this kind of in-yer-face stuff decades ago, and he hasn’t changed. Not that he has any need to – the French love him. But I still don’t get what having naked actors reading from books is all about. There’s the occasional off-beat touch, like when someone just walks into frame and drags a ‘character’ off. Or when a passer-by in the background sees what is happening, then walks forward and becomes part of the action (what there is of it). A novel way to introduce someone. But there’s little of this, and he’s done it all before. Except that…

He’s now got a dog to keep him company. So we follow the dog around and watch it swim and poo (I closed my eyes, dear reader) while he tells us that dogs are the only animals that love you more than themselves. Really, Mr. Godard? That’s so deep. But really, honestly, it’s not. It’s dog-lovers’ twaddle, and can be seriously challenged on a number of levels. Which is true of every other pearl of wisdom he offers us. It’s all rather like having an endless stream of those Facebook images with captions attached thrown at you. I spent at least twenty minutes thinking, “Yes, but…” until I gave up.

Then there’s the 3D. It’s excruciating. If a student turned this in they would fail the course. I had to keep closing my eyes, and after doing so several times I found I was napping. Meanwhile four people snuck out…

And watching this further I realised there’s something amiss.

We’re meant to believe this is all done lo-tech: GoPros, DSLRs etc. with huge IAs, and just thrown together. They (Godard and cameraman Fabrice Aragno) are showing us their bold 3D experiments, in the raw. But really, I can see that someone’s been fixing this in post. The parallaxes may have been horrendous, but the vertical, rotational, lens and other errors are (by comparison) minimal. They’re still there, but much less so than should have been the case with the rather slapdash approach to 3D that is in evidence during shooting. This really should be totally unwatchable, but someone’s been messing with it.

So I went to IMDb to see who got credited with the editing, and no one is. Are we meant to believe Godard edited this himself? No way. 3D editing involves a huge learning curve, and the fixes needed here require complex equipment and the skill to use it. Language gives the impression of being a 3D film made with minimal resources, snubbing the high-tech approach we usually see, but in fact that’s not the case at all.

EDIT: Or maybe I’m wrong! This blog is generating some disagreement from folks who think the 3D is just plain terrible and see no reason to believe that any substantial fixing has taken place at all!

To my amusement I see the IMDb keywords are: “dog | excrement | flatulence | experimental film | 3d”. Pretty accurate, I think.

Rating: 2/10

“Later, in the restaurant…” – some notes on the making of a high-speed 3D short.


I shot Later, in the restaurant… using the Olympus iSpeed camera system while I was doing my MA in Stereo 3D at Ravensbourne College. I had met the Olympus guys at a Z Axis event I organised, and they offered to demo their rig and afterwards give us some hands-on time. I would have to live with a one-hour time slot…

The Olympus iSpeed 1000fps camera

The concept

This offered an unusual challenge – could I make a narrative sequence that in real time spanned only 3 seconds? I came up with two ideas:

Later concept sketches 1 and 2

The dog would have been fun, but it may have been difficult to get a second take! The other setup offered some interesting narrative possibilities. In fact, as is often the case, things emerged in the editing. In this case an erotic undertow which, with the overt dominance / submissive element, implied a certain dynamic to the relationship that some folks may uncomfortably recognize…

Lighting

Lighting was an issue. I knew we would be shooting at 500–1000 fps, and with regular lights running on 50 Hz mains we would see flicker. The filament of a tungsten light, as it heats and cools, flickers 100 times a second (twice for each mains cycle). Your eye won’t see this, but a camera running at 1000 fps will. However, the bigger the lamp the longer it takes to heat up and cool down, so flicker is less pronounced. Generally a lamp of 10 kW or more is regarded as ‘flicker free’ for high speed. There are other lighting solutions, like using constant-voltage DC, but these were expensive or impractical for us, and some don’t always behave as they should.
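The arithmetic is worth spelling out. A quick sketch (my own back-of-envelope numbers, assuming 50 Hz mains) of why the ripple that vanishes at normal frame rates becomes visible at high speed:

```python
# Back-of-envelope illustration: on 50 Hz mains a filament's brightness
# ripples at 100 Hz, i.e. twice per mains cycle.
MAINS_HZ = 50
RIPPLE_HZ = 2 * MAINS_HZ   # 100 brightness pulses per second

for fps in (25, 500, 1000):
    ripples_per_frame = RIPPLE_HZ / fps   # >1 means the ripple averages out within a frame
    frames_per_ripple = fps / RIPPLE_HZ   # how many frames one brightness pulse spans
    print(f"{fps:>5} fps: {ripples_per_frame:.2f} ripples per frame, "
          f"one full ripple spans {frames_per_ripple:.1f} frames")

# At 25 fps each frame integrates four whole ripples, so exposure is effectively constant.
# At 1000 fps a frame sees only a tenth of a ripple, so brightness pulses visibly
# over a roughly ten-frame cycle - the flicker described above.
```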

The Ravensbourne TV studio was equipped with 1K and 2K lamps – of no use to us. But it did have a large three-phase outlet. We couldn’t afford 10Ks, but we could run three 5 kW lamps off the three different phases. I had read (in CML – one of Geoff Boyle’s posts, I think) that by doing so we’d effectively smooth out the flicker – the peaks and troughs from each phase happen at different times and largely cancel each other out. Smart idea, and that’s what we did.
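Here’s a small numpy simulation of why that works (an illustration with an idealised resistive load – real filaments smooth the ripple further, and real lamps and phases are never perfectly matched, which is why the cancellation is ‘largely’ rather than total):

```python
import numpy as np

MAINS_HZ = 50
t = np.linspace(0, 0.04, 4000)                      # a 40 ms window

# Instantaneous power of a resistive lamp on each of the three mains phases
phases_deg = [0, 120, 240]
powers = [np.sin(2 * np.pi * MAINS_HZ * t + np.deg2rad(p)) ** 2 for p in phases_deg]

def ripple(x):
    return (x.max() - x.min()) / x.mean()           # peak-to-peak ripple vs. mean level

print(f"One lamp:            ripple = {ripple(powers[0]):.0%}")   # deep 100 Hz ripple
print(f"Three phases summed: ripple = {ripple(sum(powers)):.2%}") # ~0%: the dips fill each other in
```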

Later lighting setup - Karel Bata

However, if you look carefully at the final video you can still see flicker in the drops of water as they cross black, where they catch reflections from the different lights.

The Shoot

Having only an hour meant being very prepared. Actors, props etc had to be ready to go. I spent some time with Holly Wilcox rehearsing spitting and she picked it up quickly. Joe Steel was a hero – who else would volunteer to be spat at? My eternal gratitude to him.

The first shot was at 1000fps. I wanted a slow build up and reveal. After that I would have to pace it up, so later shots were at 750fps then 500.

Later concept 3

The IA was 1 to 1.5 inches. In retrospect, given the black background, I would have made it bigger. In fact, in post that’s what I did. We shot parallel – having no background meant we’d lose nothing in post doing HIT (horizontal image translation), and good geometry was prioritised. It also made post easier.

The lights were bounced off large sheets of poly set at ¾ from behind, with another two sheets in front to provide fill. It got very warm!

There were three set-ups and we did two takes of each. The cameras recorded data to a cycling internal RAM, much like a Phantom or FS700, which was then compressed and downloaded to an 8-bit BMP image sequence. At high speeds we could only record at 720. We over-ran our one-hour schedule by 10 minutes!

Post

Unfortunately something had gone wrong with the system, which everyone failed to spot. Playback from the cameras was OK, but the recorded BMP images were badly underexposed. We were gutted. Here’s a sample frame:

Original file quality 2

Our 8-bit system had effectively become 5-bit, with a lot of blocky noise lurking in the shadows.
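A tiny numpy illustration of what ‘effectively 5-bit’ means (my own assumed numbers: the signal occupying only the bottom ~32 of 256 code values, roughly three stops down):

```python
import numpy as np

ramp = np.linspace(0.0, 1.0, 1920)                        # an ideal smooth gradient
underexposed = np.round(ramp * 31).astype(np.uint8)       # recorded ~3 stops down, 8-bit quantised
restored = (underexposed.astype(np.float32) / 31) * 255   # graded back up to full range in post

# Only 32 distinct levels survive, not 256 - hence the banding and blockiness
print("Distinct levels after restoring exposure:", len(np.unique(restored)))
```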

This took a huge amount of effort to ‘fix’, as well as I could, in After Effects. Of great help were Red Giant’s Instant HD, Denoiser II, and Cosmo to resize and fix the noise, blockiness, and skin tones. To adjust the IA, RE:Vision’s RE:Flex Motion Morph worked really well. No dedicated 3D software was used.

I felt I needed more 3D. Warping a 3D image to decrease IA usually works reasonably well, but increasing IA often creates visible spatial distortions, especially in areas where objects occlude each other. Fortunately the subjects here were geometrically simple, with a black background, and I’m very happy with the end result – I increased the IA by 50 to 80%. I can still see some global flaws when viewing the whole image and switching between L and R, but you’d have to be really sharp-eyed to spot them in a cinema, where you can only take in a portion of the frame at a time.
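For the curious, here’s a rough sketch of the general idea of synthesising a wider IA by disparity-scaled warping. This is my own OpenCV/numpy illustration, not the actual RE:Flex workflow; the file names and the 1.5× factor are assumptions, the left-eye disparity map is reused for the warp as a crude approximation, and occluded areas will show exactly the distortions mentioned above.

```python
import cv2
import numpy as np

SCALE = 1.5   # 1.5x the original disparity ~ "50% more IA"

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # hypothetical source frames
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Estimate left->right disparity (SGBM returns fixed-point values x16)
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
disp = sgbm.compute(left, right).astype(np.float32) / 16.0
disp[disp < 0] = 0                                       # discard invalid matches

# Backward map that samples the right eye from further away, so each pixel ends
# up shifted by the extra (SCALE - 1) * disparity. Occluded regions have no
# correct answer, which is where the visible spatial distortions come from.
h, w = right.shape
xs, ys = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
map_x = xs + (SCALE - 1.0) * disp
new_right = cv2.remap(right, map_x, ys, interpolation=cv2.INTER_LINEAR)

cv2.imwrite("right_wider_ia.png", new_right)
```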

One criticism I’ve heard is that there’s still not much 3D. This is interesting. In the final video the amount of 3D is precisely what’s needed to achieve the correct degree of ‘roundness’ in the subjects. Any more and they would appear stretched along the z axis. I think it’s because using a black background, with only the foreground subjects visible, means that the overall amount of 3D is limited. If I’d shot against green and put in a background later (as I did in a video here) the image would contain more depth, and it would be perceived as deeper, but the depth of the subjects themselves would really be unchanged. This makes me wonder about audience expectations with 3D – is it that folks want, or expect, deep shots?

I’ve seen Later many times, and the 3D version really does add something. It separates out detail, especially with the water droplets, and adds a lot more life to the faces.

Here’s a glimpse of the AE workflow of just one shot. Some of those nodes are for dynamic masks to tweak areas that needed edge sharpening, softening, colour adjustment etc. Each shot needed a slightly different (and painstaking) approach.

After Effects Flowchart

.

Some problems with projection…

I did a test screening at the Brixton Ritzy cinema, which uses a RealD circular polarised system, and discovered two problems.

1 – With titles converged on the screen against black, people told me the titles were 2D! An audience watching the film critically and seeing no 3D might initially think something had gone wrong. You don’t want this distraction. I fixed this by floating the titles slightly forwards.

2 – Ghosting. This is significant when using RealD. If the subject is placed behind the screen, as was the case in the first DCP I did, then the Left and Right images will be horizontally displaced on the screen plane when viewed with the glasses off. When the viewer puts the glasses on, each eye should only see its own image and you get 3D. But the system is not perfect, so you will get a little cross-talk, and if there is a bright image against black then each eye will see a dim ghost of the other eye’s image in the dark areas.

Take the image below. I had placed the eyes on the screen plane, with the cheek on the left slightly behind. The effect was not so much ghosting along this edge, but an apparent de-focussing – the edge appeared to lack sharpness. But then, no one else seemed to see it.

Joe Angelo Steel

The cure was to re-converge the image and bring it forwards, so the high-contrast edge causing the problem sat on the screen plane and there was no double-imaging. For shots where the subject is moving I had to track the image depth and create dynamic convergence. With a black background and no visual cues to tell you, this is imperceptible.
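Here’s a minimal numpy sketch of that fix (my own illustration, not the DCP toolchain – the frame sizes and tracked parallax values are made up): re-convergence is just opposite horizontal shifts of the two eyes, and dynamic convergence is keyframing that shift from a per-frame track.

```python
import numpy as np

def hit(left, right, shift_px):
    """Horizontal Image Translation: positive shift pulls the scene forwards."""
    l = np.roll(left, shift_px, axis=1)    # left-eye content moves right
    r = np.roll(right, -shift_px, axis=1)  # right-eye content moves left
    return l, r  # a real implementation would crop the wrapped edges, not roll

# Toy frames: a bright vertical edge with +2 px parallax (i.e. behind the screen)
left_frame = np.zeros((4, 16), np.uint8);  left_frame[:, 8] = 255
right_frame = np.zeros((4, 16), np.uint8); right_frame[:, 10] = 255

l, r = hit(left_frame, right_frame, 1)     # half the parallax per eye
print(np.argmax(l[0]), np.argmax(r[0]))    # 9 9 -> the edge now sits on the screen plane

# Dynamic convergence: track the problem edge's parallax per frame and keyframe
# the HIT so that edge stays at zero disparity throughout the shot.
tracked_parallax = np.array([12.0, 11.5, 10.8, 9.9, 9.0])   # hypothetical track, in px
per_eye_shift = np.round(tracked_parallax / 2).astype(int)
print(per_eye_shift)
```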

Of note: when the first (uncorrected) DCP was projected at Beyond3D in Karlsruhe, a Dolby 3D system was used. This has very little cross-talk, so no ghosting was perceptible.

Later, in the restaurant… is now doing the 3D film festival circuit and I’m pleased to say getting an excellent reception.

The IMDb page is here: http://www.imdb.com/title/tt4318828/

Stereo 3D Reading List

3D Movie Making – Bernard Mendiburu
Absolutely essential reading.

3-DIY: Stereoscopic Moviemaking on an Indie Budget – Ray Zone
Great book by one of the giants in 3D.

3D Storytelling: How Stereoscopic 3D Works and How to Use It – Phil ‘Captain 3D’ McNally
Excellent and well illustrated primer by the master of 3D.

http://bit.ly/3DBasics-SkyTV
Sky’s Basic 3D Guide. A very good introduction.
Sky3D’s Broadcast Spec – love it or hate it…
.

http://bit.ly/CoralineASC

Perception and the art of 3D Storytelling
Perception and The Art of 3D Storytelling
Two excellent articles about Brian Gardner’s seminal work on Coraline
– he’s recently shot to fame with his work on Life of Pi.

3D Cinematography Basics – Geoff Boyle’s excellent primer

Awesome page on 3D volume by DreamWorks’ genius Captain 3D:
http://www.captain3d.com/temp/cml/cml_volume.html

Andrew Woods’ paper on the parallel vs. converged debate causes
much controversy and is required reading:
http://www.andrewwoods3d.com/spie93pa.html

http://bit.ly/pUXhPx
Bernard Harper’s paper on Body Image Distortion in 2D/3D

http://bit.ly/Methode-Derobe
An article on the Methode Derobe

http://bit.ly/e1eoi9
Technicolor’s common errors chart. Some debate about this!

If you have any suggestions on ways to improve this list, please let me know.

.

Scorsese on Hugo
“A loose connection, you reckon?”

.
