Tag Archives: 3D

Mind The Gap (aka 3DTango)

[Still from Mind The Gap (3D Tango)]

A 3D short currently doing the festival circuit, inspired by Zbigniew Rybczyński's Tango.
I saw Tango at the Annecy Animation Festival. He’d cleverly composited multiple layers to create the illusion of an impossible number of people in a room:

[Still from Rybczyński's Tango]

During my MA in Stereo 3D at Ravensbourne College, I created a version taking this idea into another dimension – 3D Tango. At one point there are 16 layers of green-screen 3D footage, each with a left and a right master. That was a lot of work in post!

Featuring Daisy Batova, Alfie Albert, and Helena Kuntz.

Click on any image for a closer view.

[Image gallery: previz stills, group shots, and a compositing screenshot from Mind The Gap]

Mind The Gap has so far screened at:

3DKIFF, Seoul, S. Korea, 28 October 2016
LA 3-D Movie Festival, Los Angeles, 8 December 2016
3D Stereo MEDIA, Liège, 14 December 2016
SD&A, San Francisco, 31 January 2017


 

.

Some Common Errors in Stereo 3D

Alfie Albert at Baker Street station
“Now, where’s that 3rd dimension gone?”

Here’s a freebie download from me! This is a video I created as a teaching aid when I taught Stereo 3D to MAs and staff at Ravensbourne College.
There are nine short clips, each with a different 3D error. I played the video to the students, and then asked them to identify the error in each clip.

You can view it on YouTube here: https://youtu.be/VVGAlOLLiJo


And you are free to download the original HD file here:
https://mega.nz/#!Ssog3SQB!-JMxuW_az3njNEQRiyca3-Ru28euVC7RdIUWhcpB4ic (320MB)
Note: to download click the BLUE TEXT beneath the red button!

This was recorded with two Canon 105s in side-by-side (SBS) mode, converged about 8 ft away, with a 4″ interaxial (IA).

Here’s what I did in the classroom:

I first show ① 'Raw' and explain that this is straight from camera, except that I've made some minor vertical and horizontal shifts to line the images up, plus a small zoom-in to hide the resulting edge cropping. Can the students spot anything else that may still need correcting?
I pause/repeat the video while they consider. They're unlikely to see anything relevant, but it depends on who they are.
I then show ② to ⑨, saying that I have introduced some kind of error into each, and ask if they can spot what it may be.

The errors are:
① Raw (no errors added)
② Color differences between L & R
③ One eye soft
④ Vertical disparity
⑤ Out of sync
⑥ Rotational disparity
⑦ Zoom disparity
⑧ L & R reversed
⑨ Raw (same as 1)

I then ask them to look more closely at ⑨ and see if they can spot any errors they missed before.
There are at least three!
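If you want to build a similar teaching reel, most of these errors are easy to synthesize from a single clean stereo pair. Here's a minimal Python sketch of the idea – my own illustration, not the workflow used for the actual video; the function and numbers are invented, and the sync, rotation and zoom errors are omitted because they need a frame offset or a proper resampler:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def add_stereo_error(left, right, kind):
    """Introduce one classroom error into a clean stereo pair.

    left, right: H x W x 3 uint8 frames. Returns a new (L, R) tuple.
    """
    L, R = left.copy(), right.copy()
    if kind == "colour":        # (2) colour difference between eyes
        R = np.clip(R.astype(float) * [1.1, 1.0, 0.9], 0, 255).astype(np.uint8)
    elif kind == "soft":        # (3) one eye soft
        R = gaussian_filter(R, sigma=(2, 2, 0))
    elif kind == "vertical":    # (4) vertical disparity: push R down 8 px
        R = np.roll(R, 8, axis=0)   # wraps at the frame edge; crop in practice
    elif kind == "reversed":    # (8) L & R eyes swapped
        L, R = R, L
    return L, R
```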


The actor is the most excellent Alfie Albert
Video extract is from ‘3DTango’

And please DON'T remove my name from the video!

.

Happy When It Rains

[Still from Happy When It Rains – click image to enlarge]

The First Shoot

This 3D short, a faux music video set to the immortal track by Garbage, was my first attempt at some serious 3D. It started life as a test to see how well a pair of Canon 105s performed on a Genus Hurricane 3D Rig (everything was fresh out of its box), courtesy of the inventor, tornado-chaser Alister Chapman, to whom I am eternally grateful.

[Hurricane Rig with Canon 105s – click to enlarge]

This was during that whole post-Avatar wave of enthusiasm – and I was indeed most enthused. I had put together a Directors’ Guild 3D event, out of which grew The Z Axis, a networking organization for 3D professionals. Things were buzzing. Alister then asked if I could get him a venue to demo the Hurricane / 105 combo. In return I asked to use the rig for one hour in an adjacent theatre and shoot something.

I could have shot test charts, and folks at different distances, but I wanted to do something practical: hand-held and moving. And fun. A music video clip was a do-able challenge, but one hour (!) would mean some very solid prep.

I was lucky to have a clutch of Ravensbourne Stereo3D MA students eager to help. My daughter Daisy was keen to perform in front of camera, and Alister would be on hand to make sure everything was plugged in OK. There were things I wanted to test. Fast-cutting is supposedly a no-no in 3D (according to some industry experts) and I wanted to push that. And there were things I wanted to try in post. Various plug-ins, such as ToonIt and VideoGogh, can create some very funky effects, but they are designed for 2D, and small changes in the source image can create big differences in the rendered result. So Left and Right sources could yield renders that wouldn't fuse properly and would thus lose the 3D. I wanted to see how far I could push this.

[Still: Red Giant ToonIt effect]

Lighting

The theatre we shot in was nice, but a bit dingy. I could have re-lit the whole area, but I didn't have time and wanted a simpler solution that would still lift the material. I've always liked using ring lights (very 80s!) as they create a controllable, glamorous look, which would be in keeping with the shoot's intended style. But – I had no budget to hire one. So I had to make one. I found a tutorial: DIY Ring Light (YouTube). This looked very do-able. However, it used a modified toilet seat (!), which was bulky and would have to be tripod-mounted, and – to be blunt – I thought a toilet seat wasn't really very cool. It is important to look cool on set, isn't it? So I figured I could do better, and made one from an aluminium bicycle wheel rim. It was light enough to hand-hold, and had two separately dimmable circuits. It did a fantastic job. Another couple of lights on stands provided back-lighting.

[Ring-light close-up of Daisy Batova]
For this CU some lights have been removed to create more modelling

The results were good. The Canon 105s performed well, and I would go on to use them several more times. There was some noise in the shadows, but Red Giant Denoiser got rid of that. The images were slightly mis-aligned (in 3D they always are) but the Hurricane is a rigid mount and the results were rock solid. There was a bit of keystoning, but no real distortion to worry about – or at least the distortions matched! I was impressed, and decided to continue this 3D experiment and shoot some more.

The Second Shoot

Did I mention I had no budget? But I did have a pair of Pentax Optio WS80 cameras. These give a surprisingly nice picture (noisy, but I could fix that) but didn’t run in sync, and the only way I had of mounting them was on a simple sliding camera plate. I attached this to a monopod which I hand-held, and clamped a weight to the bottom to add stability. The minimum practical IA was about 2 inches, aligning the cameras was tricky and never spot on, and they were inclined to move a little with use! These problems meant being clever…

[3D rig using two Pentax Optio WS80s – click to enlarge]

Two cameras running freely will run at slightly different frame rates, and will drift in and out of sync over time. Most of the time they'll be out of sync. How much that matters depends on how much error is acceptable, which in turn is determined by how much movement there is in frame – if you're shooting a very slowly moving subject the error is insignificant; something fast and you have a problem. Additionally, there's no way of knowing just how out of sync they are at any given moment.
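To put rough numbers on it, here's a back-of-envelope sketch – every figure is an illustrative assumption, not a measurement of these cameras:

```python
# Back-of-envelope sync error between two free-running cameras.
# All numbers are illustrative assumptions, not WS80 measurements.

fps = 30.0                           # nominal frame rate of both cameras
start_offset = 1 / (2 * fps)         # no genlock: recordings start up to
                                     # half a frame (~16.7 ms) apart

clock_mismatch = 0.005               # frames/sec difference between clocks
drift_per_sec = clock_mismatch / fps # extra offset gained per second running

take_length = 60                     # seconds
worst_offset = start_offset + drift_per_sec * take_length   # ~26.7 ms

lateral_speed = 400                  # px/sec: e.g. a hand sweeping across frame
bogus_disparity = lateral_speed * worst_offset
print(f"offset ~{worst_offset*1000:.0f} ms -> ~{bogus_disparity:.0f} px of false depth")
```

A dozen pixels of timing-induced disparity on a fast-moving hand is read by the brain as wildly wrong depth, which is why slow subjects are forgiving and fast ones aren't.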

Add to all that the enormous IA with this rig – the distance between the lenses – and you couldn’t have a background more than a few feet away.

There are a couple of 'fixes' for the timing issue. You can shoot several takes and assume that at least one will give you something close enough – and hope it's a good take! This requires patience on everyone's part (fortunately Daisy's got tons), and it isn't at all foolproof. Another trick is to make subject movement run towards and away from camera rather than side to side: movement in depth reads as a lot of apparent motion for very little image shift, whereas lateral movement shifts the image a great deal, so keeping motion on the camera axis helps hide sync errors.

The background issue was solved by hanging a black cloth which was out of focus. Lighting made it look like smoke.

Tests showed me that relative camera shake and drift (how different the images were from each other, and how that varied over time) caused by vigorous camera moves was surprisingly severe. The brain interprets such variations as fluctuating depth; they undermine the overall 3D and give viewers a headache! This would have to be fixed in post, and to help with that I asked Daisy to look constantly at the camera. The ring light was clearly visible in her eyes, and this gave me something to later lock After Effects' motion stabilizing tools onto. It also created a stylistic element that worked well. So, once stabilized, the two images would be locked together, and then the movement from one eye (which had been keyframed, copied, and removed) was applied to both. That way the two cameras ended up with identical movements. Unavoidable misalignment meant the image would have to be cropped later, so I shot a little wide to anticipate this.
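Reduced to bare logic, the relock step looks something like this. A translation-only sketch: the real job was done with After Effects' tracker and included rotation and scale, and shift() and the tracked arrays here are hypothetical stand-ins:

```python
import numpy as np

# track_l / track_r: per-frame (x, y) positions of the catchlight in each
# eye, as numpy arrays. shift(frame, dx, dy) is any image translator.

def relock(frames_l, frames_r, track_l, track_r, shift):
    out_l, out_r = [], []
    for i, (fl, fr) in enumerate(zip(frames_l, frames_r)):
        # 1. Stabilise each eye: pin the catchlight to its first-frame spot.
        stab_l = shift(fl, *(track_l[0] - track_l[i]))
        stab_r = shift(fr, *(track_r[0] - track_r[i]))
        # 2. Re-apply ONE eye's original motion to BOTH eyes, so the two
        #    'cameras' now move identically and depth stops fluttering.
        dx, dy = track_l[i] - track_l[0]
        out_l.append(shift(stab_l, dx, dy))
        out_r.append(shift(stab_r, dx, dy))
    return out_l, out_r
```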

Left and Right images' alignment could be drastically out.

Time-Lapse

3D Time-Lapse material was shot using a pair of Canon 600Ds with bracketed exposures, and tone-mapped using Photomatix.

[Time-lapse still: The Shard]

Post

All post was in After Effects CS6 using Red Giant's Denoiser II, ToonIt, and Particular (for the titles), RE:Vision's VideoGogh, Fixel's ALCE and Detailizer, Dashwood 3D Lite, and several plug-ins that come with After Effects. It. Was. Fiddly.

It was a challenge to keep the edit interesting – after all, it has two minutes of someone singing straight to camera.

Every shoot has its own problems, and my solutions here were specific to what was in front of me; they wouldn't work for every shoot. Importantly, this was a music video, and you can get away with murder – after all, who was to know what I really meant to do?

[Image gallery: stills from the video and titles – click any image to enlarge]

.

Goodbye To Language 3D – a review

Cutting edge 3D

Well, I’ve been asked to give my opinion of this film, so…

Godard was doing this kind of in-yer-face stuff decades ago, and he hasn’t changed. Not that he has any need to – the French love him. But I still don’t get what having naked actors reading from books is all about. There’s the occasional off-beat touch, like when someone just walks into frame and drags a ‘character’ off. Or when a passer-by in the background sees what is happening, then walks forward and becomes part of the action (what there is of it). A novel way to introduce someone. But there’s little of this, and he’s done it all before. Except that…

He's now got a dog to keep him company. So we follow the dog around and watch it swim and poo (I closed my eyes, dear reader) while he tells us that dogs are the only animals that love you more than themselves. Really, Mr. Godard? That's so deep. But really, honestly, it's not. It's dog-lovers' twaddle, and can be seriously challenged on a number of levels. Which is true of every other pearl of wisdom he offers us. It's all rather like having an endless stream of those Facebook images with captions attached thrown at you. I spent at least twenty minutes thinking, "Yes, but…" until I gave up.

Then there's the 3D. It's excruciating. If a student turned this in they would fail the course. I had to keep closing my eyes, and after doing so several times I found I was napping. Meanwhile four people snuck out…

And watching this further I realised there’s something amiss.

We’re meant to believe this is all done lo-tech: GoPros, DSLRs etc. with huge IAs, and just thrown together. They (Godard and cameraman Fabrice Aragno) are showing us their bold 3D experiments, in the raw. But really, I can see that someone’s been fixing this in post. The parallaxes may have been horrendous, but the vertical, rotational, lens and other errors are (by comparison) minimal. They’re still there, but much less so than should have been the case with the rather slapdash approach to 3D that is in evidence during shooting. This really should be totally unwatchable, but someone’s been messing with it.

So I went to IMDb to see who got credited with the editing, and no one is. Are we meant to believe Godard edited this himself? No way. 3D editing involves a huge learning curve, and the fixes needed here require complex equipment and the skill to use it. Language gives the impression of being a 3D film made with minimal resources, snubbing the high-tech approach we usually see, but in fact that's not the case at all.

EDIT: Or maybe I’m wrong! This blog is generating some disagreement from folks who think the 3D is just plain terrible and see no reason to believe that any substantial fixing has taken place at all!

To my amusement I see the IMDb keywords are: "dog | excrement | flatulence | experimental film | 3d". Pretty accurate, I think.

Rating: 2/10

“Later, in the restaurant…” – some notes on the making of a hi-speed 3D short.

[Still from 'Later, in the restaurant…']

I shot Later, in the restaurant… using the Olympus iSpeed camera system while I was doing my MA in Stereo 3D at Ravensbourne College. I had met the Olympus guys at a Z Axis event I organised, and they offered to demo their rig and afterwards give us some hands-on time. I would have to live with a one-hour time slot…

The Olympus iSpeed 1000fps camera

The concept

This offered an unusual challenge – could I make a narrative sequence that in real time spanned only 3 seconds? I came up with two ideas:

[Concept sketches 1 and 2]

The dog would have been fun, but it may have been difficult to get a second take! The other setup offered some interesting narrative possibilities. In fact, as is often the case, things emerged in the editing. In this case an erotic undertow which, with the overt dominance / submissive element, implied a certain dynamic to the relationship that some folks may uncomfortably recognize…

Lighting

Lighting was an issue. I knew we would be shooting at 500–1000 fps, and with regular lights running on 50 Hz mains we would see flicker. The filament of a tungsten light, as it heats and cools, flickers 100 times a second (twice per mains cycle). Your eye won't see this, but a camera running at 1000 fps will. However, the bigger the lamp the longer its filament takes to heat up and cool down, so the flicker is less pronounced. Generally a lamp of 10KW or more is regarded as 'flicker free' for high speed. There are other lighting solutions, like running constant-voltage DC, but these were expensive or impractical for us, and some don't always behave as they should.

The Ravensbourne TV studio was equipped with 1K and 2K lamps – no use to us. But it did have a large three-phase outlet. We couldn't afford 10Ks, but we could run three 5KW lamps off the three different phases. I had read (on CML, the Cinematography Mailing List – one of Geoff Boyle's posts, I think) that by doing so we'd effectively smooth out the flicker: the dips and troughs from each phase happen at different times and largely cancel each other out. Smart idea, and that's what we did.
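Here's a toy simulation of why that works. The 20% ripple depth is an assumed figure and real filaments aren't pure sinusoids, but the cancellation is the point:

```python
import numpy as np

# Model each lamp's output as a 100 Hz ripple (twice mains frequency) on a
# steady level. A mains phase shift of x degrees appears doubled at 100 Hz.

t = np.linspace(0, 0.04, 4000)        # 40 ms = four 100 Hz ripple cycles
ripple = 0.2                          # assumed 20% flicker from one 5K lamp

def lamp(mains_phase_deg):
    return 1 + ripple * np.cos(2 * np.pi * 100 * t - 2 * np.radians(mains_phase_deg))

single = lamp(0)
combined = (lamp(0) + lamp(120) + lamp(240)) / 3   # one lamp per phase

pp = lambda s: 100 * (s.max() - s.min()) / s.mean()   # peak-to-peak flicker
print(f"one phase    : {pp(single):.1f}%")    # ~40%
print(f"three phases : {pp(combined):.1f}%")  # ~0% in this idealised model
```

In the idealised model the three ripples are 120° apart and sum to a flat line; in reality each lamp's waveform differs slightly, which is why a trace of flicker survived (see below).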

[Diagram: Later lighting setup]

However, if you look carefully at the final video you can still see flicker in the drops of water as they cross black, where they catch reflections from the different lights.

The Shoot

Having only an hour meant being very prepared. Actors, props etc had to be ready to go. I spent some time with Holly Wilcox rehearsing spitting and she picked it up quickly. Joe Steel was a hero – who else would volunteer to be spat at? My eternal gratitude to him.

The first shot was at 1000fps. I wanted a slow build up and reveal. After that I would have to pace it up, so later shots were at 750fps then 500.

[Concept sketch 3]

The IA was 1 to 1.5 inches. In retrospect, given the black background, I would have made it bigger – and in post, that's effectively what I did. We shot parallel: with no background we'd lose nothing doing HIT (horizontal image translation) in post, good geometry was prioritised, and it made post easier.

The lights were bounced off large sheets of poly set at ¾ from behind, with another two sheets in front to provide fill. It got very warm!

There were three set-ups and we did two takes of each. The cameras recorded to a cycling internal RAM, much like a Phantom or FS700, and the footage was then compressed and downloaded as an 8-bit BMP image sequence. At high speeds we could only record at 720. We over-ran our one-hour schedule by 10 minutes!

Post

Unfortunately something had gone wrong with the system, which everyone failed to spot. Playback from the cameras was OK, but the recorded BMP images were badly underexposed. We were gutted. Here’s a sample frame:

[Sample frame showing the underexposure]

Our 8-bit system had effectively become 5-bit, with a lot of blocky noise lurking in the shadows.

This took a huge amount of effort to 'fix', as well as I could, in After Effects. Of great help were Red Giant's Instant HD, Denoiser II, and Cosmo, to resize and fix the noise, blockiness, and skin tones. To adjust the IA, RE:Vision's RE:Flex Motion Morph worked really well. No dedicated 3D software was used.

I felt I needed more 3D. Warping a 3D image to decrease IA usually works reasonably well, but increasing IA often creates visible spatial distortions, especially where objects occlude each other. Fortunately the subjects here were geometrically simple, against a black background, and I'm very happy with the end result – I increased the IA by 50 to 80%. I can still see some global flaws when viewing the whole image and switching between L and R, but you'd have to be really sharp-eyed to spot them in a cinema, where you can only view a portion of the frame at a time.
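To see where those distortions come from, here's a deliberately naive re-synthesis sketch – emphatically not what RE:Flex Motion Morph does, just a pixel-shift by a scaled disparity map (which you'd first have to estimate):

```python
import numpy as np

def widen_ia(right, disparity, gain):
    """right: H x W greyscale image; disparity: per-pixel L/R offset in px.
    gain = 1.0 reproduces the input; gain > 1.0 simulates a wider baseline."""
    h, w = right.shape
    out = np.full((h, w), np.nan)        # NaN marks a hole
    for y in range(h):
        for x in range(w):
            nx = int(round(x + (gain - 1.0) * disparity[y, x]))
            if 0 <= nx < w:
                out[y, nx] = right[y, x]
    # The holes cluster along occlusion edges: the wider viewpoint sees
    # background that neither camera recorded, so something has to invent
    # that picture information -- hence the visible spatial distortions.
    return out
```

With geometrically simple subjects on black, the holes mostly fall on black, which is part of why this shoot got away with a 50–80% increase.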

One criticism I've heard is that there's still not much 3D. This is interesting. In the final video the amount of 3D is precisely what's needed to achieve the correct degree of 'roundness' in the subjects – any more and they would appear stretched along the z-axis. I think the issue is that a black background, with only the foreground subjects visible, limits the overall amount of 3D in the frame. If I'd shot against green and put in a background later (as I did in a video here) the image would contain more depth, and be perceived as deeper, but the depth of the subjects themselves would be unchanged. This makes me wonder about audience expectations with 3D – do folks simply want, or expect, deep shots?

I’ve seen Later many times, and the 3D version really does add something. It separates out detail, especially with the water droplets, and adds a lot more life to the faces.

Here’s a glimpse of the AE workflow of just one shot. Some of those nodes are for dynamic masks to tweak areas that needed edge sharpening, softening, colour adjustment etc. Each shot needed a slightly different (and painstaking) approach.

After Effects Flowchart

.

Some problems with projection…

I did a test screening at the Brixton Ritzy cinema, which uses a RealD circular polarised system, and discovered two problems.

1 – With titles converged on the screen plane against black, people told me the titles were 2D! An audience watching the film critically and seeing no 3D might initially think something had gone wrong, and you don't want that distraction. I fixed this by floating the titles slightly forwards.

2 – Ghosting. This is significant when using RealD. If the subject is placed behind the screen plane, as it was in the first DCP I made, then the Left and Right images are horizontally displaced on the screen, as you can see with the glasses off. When the viewer puts the glasses on, each eye should see only one image, and you get 3D. But the system isn't perfect, so there is always a little cross-talk, and where a bright image sits against black each eye sees a dim ghost of the other eye's image in the dark areas.

Take the image below. I had placed the eyes on the screen plane, with the cheek on the left slightly behind it. The effect was not so much ghosting along this edge as an apparent defocussing – the edge appeared to lack sharpness. But then, no one else seemed to see it.

[Still: Joe Angelo Steel]

The cure was to re-converge the image and bring it forwards, so the high-contrast edge causing the problem sat on the screen plane and there was no double-imaging. For shots where the subject is moving I had to track the image depth and create dynamic convergence. With a black background and no visual cues to give it away, this is imperceptible.
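Mechanically, that re-convergence is just HIT again – a minimal sketch, assuming frames held as numpy arrays and a parallax you've already measured on the offending edge:

```python
import numpy as np

def reconverge(left, right, parallax_px):
    """Slide the eyes towards each other until the chosen feature sits at
    zero parallax. parallax_px: measured screen parallax of that feature
    (positive = behind the screen). Each eye shifts by half."""
    half = parallax_px // 2
    L = np.roll(left,  half, axis=1)    # left eye slides right...
    R = np.roll(right, -half, axis=1)   # ...right eye slides left
    return L, R                         # np.roll wraps: crop the frame edges
```

For a moving subject you'd animate parallax_px per frame from the depth track – the dynamic convergence described above.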

Of note: when the first (uncorrected) DCP was projected at Beyond3D in Karlsruhe, a Dolby 3D system was used. This has very little cross-talk, so no ghosting was perceptible.

Later, in the restaurant… is now doing the 3D film festival circuit and I’m pleased to say getting an excellent reception.

The IMDb page is here: http://www.imdb.com/title/tt4318828/

ENO mixes live opera with Stereo 3D

Last night I saw Sunken Garden – a new opera by Michel van der Aa and David Mitchell staged by the English National Opera at the Barbican.

[Sunken Garden ENO poster]

During 50 of the show's 110 minutes, stereo 3D images are projected onto a screen that makes the stage appear to 'extend' out to the skies beyond, and adds some amazing effects. This mix of live theatre and stereo 3D is a pet interest of mine, so I was keen to see how effective this production was. I wasn't disappointed. It was spectacularly successful, especially since this is the first time (that I'm aware of) it has been done on this scale.

Below is my technical assessment of the show. It should be understood that my criticisms are the nit-picking of someone working in S3D, and not meant to knock ENO's considerable achievement – the planning and forethought here are impressive, and they must have spent a long time creating this. To be honest the music left me cold, but maybe I'm not the right audience, and once the 3D kicked in my concentration was very much elsewhere.

I was fortunate to sit in row D where, when the first 3D set-up was revealed, the effect was jaw-dropping. I appeared to be watching a performer interacting with another performer standing in a garden that extended well beyond the limits of the stage. The stage lighting matched the projection (as best it could), and the floor extended seamlessly. I felt like I was on the Star Trek holodeck.

[Production still: the virtual garden]

Unfortunately there were a couple of geometric elements where the perspective didn't quite match what I saw on stage, and the vertical vanishing points differed by quite a large amount. I judged the ideal seating position to be maybe two rows in front of me, which is probably where the picture above was taken from. Being slightly off-axis to one side didn't seem to affect the illusion.

This is perhaps borne out by the still below, which shows the production team at that very spot, from where they would have made their artistic judgements. I was quite surprised that such a small movement – a couple of rows – could start to challenge the illusion, and I wondered just how well it could work elsewhere, like at the back of the circle. Would the virtual stage appear overly tilted and stretched from up there? Would it work at all? But, this being a large opera, I couldn't get up and wander around. The choice of sweet spot struck me as odd – why make it the front of the stalls?

[Production photo: the team at the sweet spot in the stalls]

But I could also see that had I sat there at the front, there were elements that still wouldn't have quite gelled. As my partner noted, the flowers in the foreground, in negative space, were a bit too large. Maybe the camera lens was a little too wide and too close? This led me to wonder whether a longer lens would be a better choice for such a production, along with a sweet spot pushed further back.

At one point an actor was talking to a virtual actress to his side in negative space, but to me he appeared to be talking to a point about six inches behind her head. My girlfriend agreed. Perhaps he'd missed his mark or eyeline? After all, from his POV there'd be nothing there, just empty space to talk to. I marvelled at how, though all the other depth cues were correct, we could still perceive such a small stereo disparity from maybe thirty feet away. Would this have looked better from the production sweet spot, where she wouldn't have appeared to come out so far? It must have looked worse at the back of the stalls, and appalling to anyone at the side, where he'd be obscuring her image and edge violations would have been apparent.

There was a scene where an actress feigned scooping water from the on-screen illusion and threw it out deep into negative space, in slow motion, over the audience. It was the classic spear-in-yer-face 3D gimmick, and it nearly worked, but it was compromised when she occluded the falling drops, wasn't helped by the audience's eyes needing to converge back and forth between her and the droplets, and was really undermined by a large degree of droplet 'sparkle' that wasn't consistent between the eyes, creating an irritating rivalry. A brave attempt though, and again – what would this have looked like further back?

Later in the show some ‘special effects’ were introduced that clumsily warped the image. This completely broke the illusion.

[Still: the warping 'special effect']

There were some curious bits where the onscreen actor was intentionally huge, which reminded me of The Thief of Bagdad. It worked well in terms of the space perceived, but the illusion was clearly a fake and not, to my eyes, convincing – rather as if the actress were talking to a large 3D TV. Interesting to see, though.

More successful was when the on-stage set was extended into the screen's virtual space. It appeared to go off way into the distance, creating a convincing, almost abstract illusion, and reminded me of the more stylized settings you often got in Hollywood musicals. I love those (in fact one scene appeared to have been inspired by Singin' in the Rain). And this is probably where the technique is most successful. The garden was meant to be real, so it was easy to pick flaws, but an abstract setting allows the illusion to be stretched, and invites a greater suspension of disbelief. The illusion would thus likely have worked for a larger section of the audience.

Singin' in the Rain – imagine that in 3D

The fact is, seeing things in 3D is a completely normal thing to do, and I've noticed that after the initial 'wow' moment the novelty wears off quickly. When it works well, audiences may be impressed at first, but that interest wanes rapidly. By contrast, something less rooted in the 'real world' – without causing discomfort – will hold attention. And anyway, striving for verisimilitude is ultimately a bit dull, isn't it?

The image was back-projected and very bright, so there were no issues with the stage lighting (predominantly from the side) washing it out. The resolution was remarkably high – higher than I would have thought possible with HD. Could it have been 4K? I could see no hot spot, nor a hint of one, and I'm really puzzled by this. I'll try to find out more about the projector and its position.

The 3D specs were new to me and carried the Polaroid logo. I found them comfortable and a distinct improvement on the RealD ones I’ve gotten used to in film theatres – they were larger, and more of a wraparound design.

[Photo: Polaroid 3D glasses]

Overall a production to be applauded, and well worth catching. I would like to have seen more ideas tried out – I could write a list as long as my arm! – but that wouldn’t have served the narrative. I look forward to similar forays into on-stage stereo 3D.

In fact, there’s one I want to do myself…

.

There's a newspaper article on the production here: http://bit.ly/SunkenGardenMail, from which many of the stills above were taken.

.

Follow me on Twitter: Karel Bata
