Tag Archives: S3D

Mind The Gap (aka 3DTango)

Karel Bata 3D Tango

A 3D short, currently doing the festival circuit, inspired by Zbigniew Rybczyński's Tango.
I saw Tango at the Annecy Animation Festival. He’d cleverly composited multiple layers to create the illusion of an impossible number of people in a room:

tango

During my MA in Stereo 3D at Ravensbourne College, I created a version taking this idea into another dimension – 3D Tango. At one point there are 16 layers of green-screen 3D image, each with a left and a right master. That was a lot of work in post!

Featuring Daisy Batova, Alfie Albert, and Helena Kuntz.

Click on any image for a closer view.

[Image gallery: 3D Tango previz stills, green-screen group shots, and compositing screenshots]

Mind The Gap has so far screened at:

3DKIFF, Seoul, South Korea – 28 October 2016
LA 3-D Movie Festival, Los Angeles – 8 December 2016
3D Stereo MEDIA, Liège – 14 December 2016
SD&A, San Francisco – 31 January 2017


 


Goodbye To Language 3D – a review

Cutting edge 3D

Well, I’ve been asked to give my opinion of this film, so…

Godard was doing this kind of in-yer-face stuff decades ago, and he hasn’t changed. Not that he has any need to – the French love him. But I still don’t get what having naked actors reading from books is all about. There’s the occasional off-beat touch, like when someone just walks into frame and drags a ‘character’ off. Or when a passer-by in the background sees what is happening, then walks forward and becomes part of the action (what there is of it). A novel way to introduce someone. But there’s little of this, and he’s done it all before. Except that…

He’s now got a dog to keep him company. So we follow the dog around and watch it swim and poo (I closed my eyes, dear reader) while he tells us that dogs are the only animals that love you more than themselves. Really, Mr. Godard? That’s so deep. But really, honestly, it’s not. It’s dog-lovers’ twaddle, and can be seriously challenged on a number of levels. Which is true of every other pearl of wisdom he offers us. It’s all rather like having an endless stream of those Facebook images with captions attached thrown at you. I spent at least twenty minutes thinking, “Yes, but…” until I gave up.

Then there’s the 3D. It’s excruciating. If a student turned this in they would fail the course. I had to keep closing my eyes, and after doing so several times I found I was napping. Meanwhile four people snuck out…

And watching this further I realised there’s something amiss.

We’re meant to believe this is all done lo-tech: GoPros, DSLRs etc. with huge IAs, and just thrown together. They (Godard and cameraman Fabrice Aragno) are showing us their bold 3D experiments, in the raw. But really, I can see that someone’s been fixing this in post. The parallaxes may have been horrendous, but the vertical, rotational, lens and other errors are (by comparison) minimal. They’re still there, but much less so than should have been the case with the rather slapdash approach to 3D that is in evidence during shooting. This really should be totally unwatchable, but someone’s been messing with it.

So I went to IMDb to see who got credited with the editing, and no one is. Are we meant to believe Godard edited this himself? No way. 3D editing involves a huge learning curve. The fixes needed here require complex equipment and the skill to use it. Language gives the impression of being a 3D film made with minimal resources, snubbing the high-tech approach we usually see, but in fact that’s not the case at all.

EDIT: Or maybe I’m wrong! This blog is generating some disagreement from folks who think the 3D is just plain terrible and see no reason to believe that any substantial fixing has taken place at all!

To my amusement I see the IMDb keywords are: “dog | excrement | flatulence | experimental film | 3d”. Pretty accurate, I think.

Rating: 2/10

Crossing The Line


Why do people make this so complicated?

I’ve seen a lot of confusion over this and time wasted on set, so I thought I’d add a blog post. Let me stress, though, that these are solely my own views and other people will inevitably disagree. (But they are so wrong.)

First, the ‘line’ is not a line. It is a plane that extends vertically between and beyond the characters. (Though if the camera is at a point where you are looking down you can generally disregard it).

Second, and this is the core of the matter: The ‘line’ goes between the subject of the current shot (who will be the audience’s focus of attention) and the object that this subject is looking at (which can be a person or thing). Simple as that.

The ‘line’ defines where we the audience (the ‘invisible guests’) are in relation to the drama we are watching, and is very important during editing. Editors usually strive to make a sequence fluid, and need to understand where the line actually is at the point of an edit.

Various factors can move this line around. We (the audience) can be physically carried through the ‘line’ via a camera move, or the subject of the shot may themselves move, or the subject’s eyeline (what they are looking at) may move dragging the line with it.

An example of the latter: Our hero is looking left while talking to someone. She hears something and looks over her shoulder thus dragging the line to the right. She can still be talking, but it’s her visual focus of attention that matters.

Stick with these ‘rules’ and you won’t go wrong. Here are a couple of common misconceptions:

1 – (this one drives me nuts as I’ve heard it used by seasoned camera ops) Imagine a W.S. of a couple getting married and approaching the altar. You can hear them talking. The line, supposedly, is between the two; however, if we cut we must stay on the same side of their path to the altar, thereby violating the theoretical line.
This is wrong. As long as we do not see the couple look at each other, the subject of the shot is ‘the couple’, and the object of their (somewhat passive) gaze is the altar. There is our line, and that is what we don’t cross. Interestingly, what we hear on the soundtrack is irrelevant.

2 – Our hero is looking left. Unnoticed by her, a door opens in the background revealing someone. Where is the line? Is it now between her and the door?
No, not if she is still looking left. But if the audience’s attention has shifted to the door at the point of the cut, the subject has changed: it is now whoever is walking through the door, and we are concerned with their eyeline.

This is important. The line can be shifting around all over the place, and it may not be between the two people talking at all.

But what happens when we cut to another character in the scene, who is outside of the current shot, and who is observing the previous shot’s subject? There’s a subject/object line there too, so the camera will be on one side of it before the cut, and should remain on that side after. If that’s not adhered to you will likely get a jump. Tip: if you’re shooting a dinner table scene get shots of people, saying nothing, looking between the other characters. They don’t have to do anything, but these cutaways will be invaluable during the edit, and are far better than the proverbial shot of the kitchen sink.

But of course every rule can be broken. And should be. Check out what Kubrick does here: https://www.youtube.com/watch?v=wN6TPtaBKwk
And some interesting examples from Satoshi Kon: https://vimeo.com/101675469

They do this for effect. Crossing the line for no good reason will just look sloppy or amateur.

Also worth noting is what the folks doing 3D sports coverage have discovered.
In a 2D football game you have a ‘line’ running between the two goals, and you must strictly observe this or you will confuse the audience.
In a 3D game this applies to the WSs and CUs, but in wider shots with the camera down on the pitch you may cross this ‘line’ provided you leave enough geographical detail in shot, like the goal, to orient the audience. In 3D we get a much better sense of where we are, and are not as disturbed by jumps in space as 2D audiences are. We have yet to see this implemented creatively in 3D drama, and I’m looking forward to where that might lead.

Hope that helps…

“Later, in the restaurant…” – some notes on the making of a hi-speed 3D short.

Karel Bata - 'Later, in the restaurant...'

I shot Later, in the restaurant… using the Olympus iSpeed camera system while I was doing my MA in Stereo 3D at Ravensbourne College. I had met the Olympus guys at a Z Axis event I organised, and they offered to demo their rig and afterwards give us some hands-on time. I would have to live with a one-hour time slot…

Olympus iSpeed

The Olympus iSpeed 1000fps camera

The concept

This offered an unusual challenge – could I make a narrative sequence that in real time spanned only 3 seconds? I came up with two ideas:

 Later concept 1 Later concept 2

The dog would have been fun, but it might have been difficult to get a second take! The other setup offered some interesting narrative possibilities. In fact, as is often the case, things emerged in the editing – in this case an erotic undertow which, with the overt dominant/submissive element, implied a certain dynamic to the relationship that some folks may uncomfortably recognize…

Lighting

Lighting was an issue. I knew we would be shooting at 500–1000 fps, and with regular lights running on 50 Hz mains we would see flicker. The filament of a tungsten light, as it heats and cools, flickers 100 times a second (twice per mains cycle). Your eye won’t see this, but a camera running at 1000 fps will. However, the bigger the lamp, the longer it takes to heat up and cool, so the flicker is less pronounced. Generally a lamp of 10KW or more is regarded as ‘flicker free’ for high speed. There are other lighting solutions, like using constant-voltage DC, but these were expensive or impractical for us, and some don’t always behave as they should.
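To put some numbers on that (a rough back-of-envelope sketch in Python, purely illustrative – not anything we ran on the day), the question is simply how many camera frames fall inside each 100 Hz brightness cycle:

```python
# Tungsten on 50 Hz mains brightens and dims at 2 x mains frequency = 100 Hz
# (once per half-cycle of the voltage).
MAINS_HZ = 50
FLICKER_HZ = 2 * MAINS_HZ

for fps in (25, 500, 750, 1000):
    print(f"{fps:>4} fps: {fps / FLICKER_HZ:5.2f} frames per brightness cycle")

# At 25 fps each frame spans four whole brightness cycles, so the flicker
# averages out within the exposure. At 500-1000 fps a single cycle is spread
# across 5-10 frames, so the rise and fall of the filament shows up on screen.
```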

The Ravensbourne TV studio was equipped with 1K and 2K lamps – which were of no use to us. But it did have a large three-phase outlet. We couldn’t afford 10Ks, but we could run three 5KW lamps off the three different phases. I had read (on CML – one of Geoff Boyle’s posts, I think) that by doing so we’d effectively smooth out the flicker – the peaks and troughs from each phase happen at different times and largely cancel each other out. Smart idea, and that’s what we did.
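A quick way to see why that works – my own idealised sketch, not anything from the shoot: if each lamp’s output roughly follows the instantaneous power on its phase, the three phase-shifted ripples sum to something essentially constant.

```python
import numpy as np

# Idealised model: light output follows instantaneous power, i.e. sin^2 of the
# mains voltage. Real filaments also have thermal inertia, which smooths the
# ripple further -- ignored here.
t = np.linspace(0.0, 0.04, 2000)               # two 50 Hz mains cycles
w = 2 * np.pi * 50
lamps = [np.sin(w * t + p) ** 2 for p in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
total = sum(lamps)

print("one lamp, min/max:     %.2f / %.2f" % (lamps[0].min(), lamps[0].max()))  # 0.00 / 1.00
print("three phases, min/max: %.2f / %.2f" % (total.min(), total.max()))        # 1.50 / 1.50

# sin^2(a) + sin^2(a + 120deg) + sin^2(a + 240deg) = 3/2 exactly, so the dips
# on one phase are filled in by the other two -- which is why splitting the
# three 5K lamps across the three phases tames the flicker.
```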

Later lighting setup - Karel Bata

However, if you look carefully at the final video you can still see flicker in the drops of water as they cross black and catch reflections from the different lamps.

The Shoot

Having only an hour meant being very prepared. Actors, props etc had to be ready to go. I spent some time with Holly Wilcox rehearsing spitting and she picked it up quickly. Joe Steel was a hero – who else would volunteer to be spat at? My eternal gratitude to him.

The first shot was at 1000fps. I wanted a slow build up and reveal. After that I would have to pace it up, so later shots were at 750fps then 500.

Later concept 3

The IA was 1 to 1.5 inches. In retrospect, given the black background, I would have made it bigger. In fact, in post that’s what I did. We shot parallel – having no background meant we’d lose nothing in post doing HIT, and good geometry was prioritised. It also made post easier.
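For anyone unfamiliar with HIT: it is just a horizontal shift-and-crop of the two eyes, which moves the whole scene relative to the screen plane without touching the geometry inside each image. A minimal sketch, my own illustration in Python/NumPy (the actual work was done in After Effects):

```python
import numpy as np

def hit(left, right, shift_px):
    """Horizontal Image Translation (minimal sketch, NumPy image arrays).
    Cropping shift_px columns off the LEFT edge of the left eye and off the
    RIGHT edge of the right eye adds shift_px of positive (behind-screen)
    parallax to everything, pushing a parallel-shot scene back behind the
    screen plane. Real tools then pad or resize back to full width."""
    w = left.shape[1]
    return left[:, shift_px:], right[:, : w - shift_px]

# Tiny demo: a 'subject' 3 px wide at the same x in both eyes (on the screen plane).
L = np.zeros((1, 20)); L[0, 8:11] = 1
R = np.zeros((1, 20)); R[0, 8:11] = 1
new_L, new_R = hit(L, R, 4)   # the subject now has +4 px parallax: behind the screen
```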

The lights were bounced off large sheets of poly set at ¾ from behind, with another two sheets in front to provide fill. It got very warm!

There were three set-ups and we did two takes of each. The cameras recorded to a cycling internal RAM, much like a Phantom or FS700, and the footage was then compressed and downloaded as an 8-bit BMP image sequence. At high speeds we could only record at 720p. We over-ran our one-hour schedule by 10 minutes!

Post

Unfortunately something had gone wrong with the system, which everyone failed to spot. Playback from the cameras was OK, but the recorded BMP images were badly underexposed. We were gutted. Here’s a sample frame:

Original file quality 2

Our 8-bit system had effectively become 5-bit, with a lot of blocky noise lurking in the shadows.
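Roughly what that means in numbers (an illustrative sketch, not the actual grade): if the exposure only ever reaches the bottom eighth of the 8-bit range, there are just 32 usable code values – 5 bits – and stretching them back up in post spreads those 32 levels apart rather than creating new ones.

```python
import numpy as np

ramp = np.arange(256, dtype=np.uint8)           # an ideal 8-bit grey ramp
underexposed = (ramp // 8).astype(np.uint8)     # squeezed into code values 0-31
graded = underexposed.astype(np.float32) * 8.0  # stretched back to full range in post

print(np.unique(underexposed).size)  # 32 distinct levels -> effectively 5-bit
print(np.unique(graded).size)        # still only 32 levels, now 8 apart: banding
```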

This took a huge amount of effort to ‘fix’, as well as I could, in After Effects. Of great help were Red Giant’s Instant HD, Denoiser II, and Cosmo for resizing and fixing the noise, blockiness, and skin tones. For adjusting the IA, RE:Vision Effects’ RE:Flex motion morph worked really well. No dedicated 3D software was used.

I felt I needed more 3D. Warping a 3D image to decrease IA usually works reasonably well, but increasing IA often creates visible spatial distortions, especially in areas where objects occlude each other. Fortunately the subjects here were geometrically simple and set against a black background, and I’m very happy with the end result – I increased the IA by 50 to 80%. I can still see some global flaws when viewing the whole image and switching between L and R, but you’d have to be really sharp-eyed to spot them in a cinema, where you can only take in a portion of the frame at a time.
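The reason occlusions are the weak point when increasing IA is easiest to see in a toy version of the process. This is nothing like the RE:Flex morph workflow actually used – just a bare-bones forward warp, assuming you already have a per-pixel disparity map – to show where the holes come from: every pixel is slid sideways by a scaled disparity, and anything that was hidden behind a foreground edge has no pixels left to fill it.

```python
import numpy as np

def widen_ia(left, disparity, k):
    """Toy new-view synthesis: warp the left eye into a virtual right eye whose
    per-pixel disparity is k * disparity (k > 1 widens the effective IA).
    'disparity' is an (h, w) array of horizontal offsets. Nearest-pixel forward
    splat, with no depth ordering and no hole filling -- the uncovered regions
    behind occluding edges are exactly where the spatial distortions appear."""
    h, w = left.shape[:2]
    out = np.zeros_like(left)
    ys, xs = np.mgrid[0:h, 0:w]
    new_x = np.clip(np.round(xs - k * disparity).astype(int), 0, w - 1)
    out[ys, new_x] = left
    return out
```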

One criticism I’ve heard is that there’s still not much 3D. This is interesting. In the final video the amount of 3D is precisely what’s needed to achieve the correct degree of ’roundness’ in the subjects. Any more and they would appear stretched along the z axis. I think it’s because using a black background, with only the foreground subjects visible, means the overall amount of 3D is limited. If I’d shot against green and put in a background later (as I did in a video here) the image would contain more depth, and it would be perceived as deeper, but the depth of the subjects themselves would really be unchanged. This makes me wonder about audience expectations with 3D – is it that folks want or expect deep shots?

I’ve seen Later many times, and the 3D version really does add something. It separates out detail, especially with the water droplets, and adds a lot more life to the faces.

Here’s a glimpse of the AE workflow of just one shot. Some of those nodes are for dynamic masks to tweak areas that needed edge sharpening, softening, colour adjustment etc. Each shot needed a slightly different (and painstaking) approach.

After Effects Flowchart


Some problems with projection…

I did a test screening at the Brixton Ritzy cinema, which uses a RealD circular polarised system, and discovered two problems.

1 – With titles converged on the screen plane against black, people told me the titles were 2D! An audience watching the film critically and seeing no 3D might initially think something had gone wrong. You don’t want this distraction. I fixed it by floating the titles slightly forwards.

2 – Ghosting. This is significant when using RealD. If the subject is placed behind the screen, as was the case in the first DCP I made, then the left and right images will be horizontally displaced on the screen plane when viewed with the glasses off. When the viewer puts the glasses on, each eye should only see one image and you get 3D. But the system is not perfect, so you will get a little cross-talk, and if there is a bright image against black then each eye will see a dim ghost of the other eye’s image in the dark areas.
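A toy model of what the glasses actually deliver makes the bright-subject-against-black case obvious (the 3% leakage figure is an assumption for illustration; real numbers vary by system and seat):

```python
import numpy as np

left = np.zeros(100);  left[40:50] = 1.0    # bright subject against black, left eye
right = np.zeros(100); right[45:55] = 1.0   # same subject, displaced 5 px by parallax
c = 0.03                                    # assumed ~3% cross-talk

left_seen = (1 - c) * left + c * right      # what each eye sees through the glasses
right_seen = (1 - c) * right + c * left

# Where the two images overlap the leak is invisible, but in the non-overlapping
# strips each eye sees a faint 3% copy of the other eye's image sitting on pure
# black -- the ghost. Put the subject on the screen plane (zero displacement) and
# the two images coincide, so there is nothing for the leak to reveal.
print(left_seen[38:57].round(2))
```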

Take the image below. I had placed the eyes on the screen plane, with the cheek on the left slightly behind. The effect was not so much ghosting along this edge as an apparent de-focusing – the edge appeared to lack sharpness. But then, no one else seemed to see it.

Joe Angelo Steel

The cure was to re-converge the image and bring it forwards, so the high-contrast edge causing the problem sat on the screen plane and there was no double-imaging. For shots where the subject is moving I had to track the image depth and create dynamic convergence. With a black background and no visual cues to tell you, this is imperceptible.
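In crop-and-shift terms, the dynamic convergence is just the HIT sketch above applied per frame, driven by the tracked parallax of the subject. The per-frame disparity list here is a hypothetical input; the real job was keyframed in After Effects.

```python
def dynamic_convergence(lefts, rights, subject_parallax_px):
    """Per-frame re-convergence (sketch). subject_parallax_px[i] is the tracked
    screen parallax of the subject in frame i: positive = behind the screen,
    negative = in front. Each frame pair is cropped so that parallax becomes
    zero, putting the subject's high-contrast edges on the screen plane where
    cross-talk cannot show up as ghosting."""
    out = []
    for L, R, p in zip(lefts, rights, subject_parallax_px):
        w, d = L.shape[1], int(round(p))
        if d > 0:    # behind the screen: pull forward onto the screen plane
            out.append((L[:, : w - d], R[:, d:]))
        elif d < 0:  # in front of the screen: push back onto the screen plane
            out.append((L[:, -d:], R[:, : w + d]))
        else:
            out.append((L, R))
    return out
```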

Of note is that when the first (uncorrected) DCP was projected at Beyond3D in Karlsruhe, a Dolby 3D system was used. This has very little cross-talk, so no ghosting was perceptible.

Later, in the restaurant… is now doing the 3D film festival circuit and I’m pleased to say getting an excellent reception.

The IMDb page is here: http://www.imdb.com/title/tt4318828/

ENO mixes live opera with Stereo 3D

Last night I saw Sunken Garden – a new opera by Michel van der Aa and David Mitchell staged by the English National Opera at the Barbican.

Sunken Garden ENO Poster

During 50 of the show’s 110 minutes, stereo 3D images are projected onto a screen that makes the stage appear to ‘extend’ out to the skies beyond, adding some amazing effects. This mix of live theatre and stereo 3D is a pet interest of mine, so I was keen to see how effective this production was. I wasn’t disappointed. It was spectacularly successful, especially since this is the first time (that I am aware of) this has been done on such a scale.

Below is my technical assessment of the show, and it should be understood that my criticisms are the nit-picking of a person working in S3D, and not meant to knock ENO’s considerable achievement, where the planning and forethought are impressive. They must have spent a long time creating this. To be honest though, the music left me cold, but maybe I’m not the right audience, and once the 3D kicked in my concentration was very much elsewhere.

I was fortunate to sit in row D where, when the first 3D set-up was revealed, the effect was jaw-dropping. I appeared to be watching a performer interacting with another performer standing in a garden that extended well beyond the limits of the stage. The stage lighting matched the projection (as best it could), and the floor extended seamlessly. I felt like I was on the Star Trek holodeck.

Sunken Garden Garden

Unfortunately there were a couple of geometric elements where the perspective didn’t quite match what I saw on stage, and the vertical vanishing points differed by quite a large amount.
I judged the ideal seating position was maybe two rows in front of me, which is where the picture above was likely taken from. My being slightly off-axis to one side didn’t seem to affect the illusion.

This is perhaps borne out by the still below that shows the production team at that very spot, and from where they would have made artistic judgements. I was quite surprised at how such a small movement, a couple of rows, could start to challenge the illusion, and I wondered just how well it could possibly work elsewhere, like at the back of the circle. Would the virtual stage appear overly tilted and stretched from up there? Would it work at all? But, it being a large opera, I couldn’t get up and wander around. This choice of sweet spot struck me as odd – why make it the front of the stalls?

SunkenGardenProduction

But I could also see that had I sat there at the front there were elements that still wouldn’t have quite gelled. As my partner noted, the flowers in the foreground, in negative space, were a bit too large. Maybe the camera lens was a little too wide and too close? This led me to wonder whether a longer lens is a better choice for such a production, along with the sweet spot being pushed further back.

At one point an actor was talking to a virtual actress to his side in negative space, but to me he appeared to be talking to a point about six inches behind her head. My girlfriend agreed. Perhaps he’d missed his mark or eyeline? After all, from his POV there’d be nothing there, just empty space to talk to. I marvelled at how, though all the other depth cues were correct, we could still perceive, at maybe thirty feet away, such a small stereo disparity. Would this have looked better from the production sweet spot, where she wouldn’t have appeared to come out so far? It must have looked worse at the back of the stalls, and appalling to anyone at the side, where he’d be obscuring her image and edge violations would have been apparent.

There was a scene where an actress feigned scooping water from the on-screen illusion and threw it out deep into negative space, in slow motion, over the audience. It was the classic spear-in-yer-face 3D gimmick and it nearly worked, but it was compromised when she occluded the falling drops, wasn’t helped by the audience’s eyes needing to converge back and forth between her and the droplets, and was really undermined by a large degree of droplet ‘sparkle’ that wasn’t consistent between the eyes, creating an irritating rivalry. A brave attempt though, and again – what would this have looked like further back?

Later in the show some ‘special effects’ were introduced that clumsily warped the image. This completely broke the illusion.

SunkenGardenSFX

There were some curious bits where the onscreen actor was intentionally huge, which reminded me of The Thief of Bagdad. It worked well in terms of the space perceived, but the illusion was clearly a fake and not to my eyes convincing, rather as if the actress were talking to a large 3D TV. Interesting to see, though.

More successful was when the on-stage set was extended into the screen’s virtual space.
It appeared to go off way into the distance, creating a convincing, almost abstract illusion, and reminded me of some of the more stylized settings you often got in Hollywood musicals. I love those (in fact one scene appeared to have been inspired by Singin’ in the Rain). And this is probably where the technique is most successful. The garden was meant to be real, so it was easy to pick flaws, but an abstract setting allows the illusion to be stretched, and allows a greater suspension of disbelief. The illusion would thus likely have worked better for a larger section of the audience.

Singin’ in the Rain – imagine that in 3D

Fact is that seeing things in 3D is a completely normal thing to do, and I’ve noticed that after the initial ‘wow’ moment the novelty wears off quickly. When it works well, audiences may be impressed at first, but that interest wanes rapidly. By contrast something that is less rooted in the ‘real world’, without causing discomfort, will hold attention. And anyway, striving for that verisimilitude of reality is ultimately a bit dull, isn’t it?

The image was back-projected and very bright, so there were no issues with the stage lighting (predominantly from the side) washing it out. The resolution was remarkably high – higher than I would have thought possible with HD. Could it have been 4K? I could see no hot spot, nor hint of one, and I’m really puzzled by this. I’ll try to find out more about the projector and its position.

The 3D specs were new to me and carried the Polaroid logo. I found them comfortable and a distinct improvement on the RealD ones I’ve gotten used to in film theatres – they were larger, and more of a wraparound design.

polaroid-3dglasses

Overall a production to be applauded, and well worth catching. I would like to have seen more ideas tried out – I could write a list as long as my arm! – but that wouldn’t have served the narrative. I look forward to similar forays into on-stage stereo 3D.

In fact, there’s one I want to do myself…


There’s a newspaper article on the production here: http://bit.ly/SunkenGardenMail – many of the stills above were taken from it.


Stereo 3D Reading List

3d-movie-making-book
3D Movie Making – Bernard Mendiburu
Absolutely essential reading.

3-diy
3-DIY: Stereoscopic Moviemaking on an Indie Budget
Great book by one of the giants in 3D.

3D Storytelling by Phil 'Captain 3D' McNally
3D Storytelling: How Stereoscopic 3D Works and How to Use It
Excellent and well illustrated primer by the master of 3D.

Sky3D Logo
http://bit.ly/3DBasics-SkyTV
Sky’s Basic 3D Guide. A very good introduction.
Sky3D’s Broadcast Spec – love it or hate it…

coraline_19
http://bit.ly/CoralineASC

Perception and The Art of 3D Storytelling
Two excellent articles about Brian Gardner’s seminal work on Coraline
– he’s recently shot to fame with his work on Life of Pi.

Geoff Boyle
3D Cinematography Basics – Geoff Boyle’s excellent primer

MCNALLY
Awesome page on 3D volume by DreamWorks’ genius
Captain 3D. http://www.captain3d.com/temp/cml/cml_volume.html

Andrew Woods
Andrew Woods’ paper on the parallel v converged debate causes
much controversy and is required reading
http://www.andrewwoods3d.com/spie93pa.html

Body Image
http://bit.ly/pUXhPx
Bernard Harper’s paper on Body Image Distortion in 2D/3D

derobes_methode
http://bit.ly/Methode-Derobe
An article on the Methode Derobe

http://bit.ly/e1eoi9
Technicolor’s common errors chart. Some debate about this!

If you have any suggestions on ways to improve this list, please let me know.


Scorsese on Hugo: “A loose connection, you reckon?”
