Expelled & Fair Use

One of those things you pick up pretty quickly working in commercial filmmaking is that you are never, ever, ever, never, never, ever allowed to use music in your work without getting permission, and generally paying through the nose for it. Producers also generally tell you not to show any visible brands, artwork, or other copyrightable material without getting permission from the proper authorities first. Basically that means people in movies don’t live in the real world, because that stuff is everywhere. But that message didn’t reach the producers of Expelled, the documentary about the so-called Intelligent Design theory and its allegedly unfair treatment by the scientific establishment (unsurprising given the theory’s lack of… science). They actually used 15 seconds of John Lennon’s song “Imagine” without paying for it, and expected to get away with it.

AND THEY DID!!!!! This seemed to me like an open-and-shut case of copyright infringement. Yoko Ono and John Lennon’s sons (owners of the publishing rights) sued. But the judge in the case decided yesterday that the filmmakers were commenting on the content of the music, and refused to grant an injunction. I’ve read the decision, and it makes a lot of good points. The film directly comments on the lyrics and general message of “Imagine,” and in order to make that point, you need to play some of the song, just as you generally need to quote a portion of a book in order to write about it. Why shouldn’t filmmakers have the same ability to discuss works of art?

This story seems destined to grow more complicated. EMI owns the rights to the master recording, and its separate claim is still waiting to be heard by the courts. I think this case is very important, and it’s unfortunate that I find myself starting to agree with the makers of Expelled. Then again, the last big fair use case I remember involved 2 Live Crew and lyrics like “All that hair, it ain’t legit / ‘Cause you look like Cousin It.” Just because it’s crude doesn’t mean it’s not legal.

OpenCut Begins

Right now I’m participating in the first “OpenCut” project, which is a great idea unfortunately saddled with an awful script. Basically some very clever people came up with a great way to give people access to footage shot with the exciting new Red One camera. I’ve been following the development of the camera and went to NAB NY last year to see some 4K Red footage projected. It was very pretty. But I haven’t worked with any producers or directors crazy enough to try it out on a film yet. There are definitely still a lot of limitations and a lot of bugs to be worked out, but the promise of the system is incredible.

So the OpenCut people shot a short film on the Red One and they’re giving the footage to anyone who pays the very reasonable $25 fee. I just got my hard drive in the mail today and I’m currently transcoding everything into ProRes HQ and syncing up the audio.

Here’s why ProRes HQ. You see, Red shoots in a kind of RAW format, like digital still cameras do. The files retain metadata regarding exposure and whatnot, and you can adjust those settings after the footage is shot. It’s all very fancy, but I don’t want to spend my time grading the image before I edit; you end up wasting a lot of time on footage you won’t use. That’s why you do a quick one-light telecine when you’re working on film. I just want to start working on the footage as soon as possible. You can’t edit straight from the “RAW” (actually .r3d) files, but the camera automatically generates QuickTime reference (proxy) files at various resolutions. You can edit using those proxies, but it requires access to the original .r3d files and a lot of processing power. My quad-core 3 GHz processor runs at around 90% on all cores while playing back one of those files. I can add some real-time effects on top of that, but it makes me uneasy. So I’m going with something I know. I know ProRes HQ is great, and my processor barely breaks a sweat once the footage is transcoded.
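
For the curious, the same kind of batch transcode can be scripted outside of FCP. This is just a minimal sketch, not my actual workflow (the Log and Transfer plugin below handles all of it), and it assumes an ffmpeg build that includes the prores_ks encoder and that you feed it the QuickTime files rather than the .r3d originals; the folder names are made up:

```python
# Hypothetical batch transcode to ProRes 422 HQ using ffmpeg (not my actual
# FCP workflow). Assumes ffmpeg is on the PATH and was built with prores_ks.
import subprocess
from pathlib import Path

SOURCE_DIR = Path("red_quicktimes")  # made-up folder of QuickTime sources
DEST_DIR = Path("prores_hq")
DEST_DIR.mkdir(exist_ok=True)

for clip in sorted(SOURCE_DIR.glob("*.mov")):
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-c:v", "prores_ks", "-profile:v", "3",  # profile 3 = ProRes 422 HQ
        "-c:a", "pcm_s16le",                     # uncompressed 16-bit audio
        str(DEST_DIR / clip.name),
    ], check=True)
```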

I’m doing the transcoding using the Red Log and Transfer plugin, which works just like the Log and Transfer workflow for P2 cards: you open up the original folders and start transcoding the clips you want, and FCP creates a master clip and generates the new media on your scratch disk. I noodled around with RedAlert, which seemed nice but had more controls than I wanted, and I tried RedCine, which was completely baffling and usually froze up on me. I never even managed to figure out how to export a clip (UPDATE: hit the big red “Go” button). The Log and Transfer plugin is definitely the simple way to go.

I’ve looked at some of the footage, and I’ve started syncing the audio. For some reason it was recorded at 44.1 kHz, so I have to be careful to change my sequence settings to match. There are no scene numbers either, which I guess makes sense for such a short film, and for a project that will be edited by different people in presumably wildly different ways, but it threw me a bit. Every shot is assigned a number, although luckily the numbering isn’t in shooting order, the way they do it in foreign lands (a confusing system obviously not designed by an editor). As far as I can tell, the shots are numbered in script/storyboard order.
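
If you’d rather conform the audio to 48 kHz up front instead of matching the sequence to 44.1, a batch resample takes care of it. A minimal sketch, again assuming ffmpeg on the path and made-up folder names:

```python
# Resample 44.1 kHz production audio to 48 kHz (folder names are hypothetical).
import subprocess
from pathlib import Path

DEST = Path("audio_48k")
DEST.mkdir(exist_ok=True)

for wav in sorted(Path("audio_44k").glob("*.wav")):
    subprocess.run(
        ["ffmpeg", "-i", str(wav), "-ar", "48000", str(DEST / wav.name)],
        check=True,  # stop if any file fails to convert
    )
```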

As for the script, the less said about it the better. I’m going to do my best to turn the movie into something completely different.

IMDb Theaters

I got an email last night from Withoutabox, a fantastic service for people submitting to film festivals. For some reason I got bored with festival submissions right around the time video on the web started getting popular. Maybe it’s because I can get my short film seen by thousands and thousands of people without paying any money, while a film festival can show my short to dozens (or half-dozens) of people for a $35 submission fee. So I haven’t used Withoutabox in a few years.

The email I got was about Withoutabox’s new deal with IMDb. Withoutabox was purchased by the IMDb/Amazon conglomerate earlier this year, and their first visible joint initiative is something they’re calling “IMDb Theaters.” Supposedly any film you have in the Withoutabox database that matches an IMDb entry is eligible. In my case, only one of my four films seems to be eligible at the moment, probably because there’s a length limit. I’d like to upload a clip or a trailer for the three that exceed the limit, but I don’t have that option yet.

I did upload all of Getting Laid Tonight. It’s great seeing the thumbnail on the actual IMDb page.

Editing La Commedia – Part 2

I’m back from Amsterdam and we’ve finished editing La Commedia. A few weeks ago I described the basic setup we were working with. The whole thing worked beautifully, and we almost never rendered anything. There were times when playing back a freeze frame while 2 other video streams were running would cause dropped frames, but that just required a quick render on the 8-core Mac Pro we were working on. The finale of the show uses all 5 screens at once in a seizure-inducing extravaganza that also includes some effects, so that needed rendering as well. But for the majority of the show I was able to edit up to 4 streams of resized HD video in real time.

This is an example of the format we used during editing. Each picture represents one of the 5 screens that will be in various locations throughout the theater. The “C” screen is not in use in this example.

The real difficulty in this project was creative. This opera doesn’t really have the clear narrative of a traditional opera; it’s more a set of fragmented ideas that all relate to a theme. Hal Hartley, the director of the show, came up with a separate story for the film, inspired by the ideas in the opera. But while there’s a clear relationship between the two, the film is definitely not just an illustration of what the people are singing about on stage. And I usually had no idea what anyone was singing about anyway, since we edited to MIDI recordings of the score.

As we started editing there were a lot more decisions to be made than usual. In a movie you can take for granted the fact that you’re going to have an image on screen most of the time. You might have a few seconds of black here and there, but in general movies tend to have something going on all the time. But with a stage production, you might not have any video at all for several minutes while the audience focuses on some activity on the stage. And if we do want to show some video, we have 5 different screens to choose from. And some portions of the audience can’t see some of the screens. Some of the audience sees the back side of some of the screens, so the image is in reverse and we can’t put text on those screens. It was all very tricky.

Lip Sync

I made a lot of progress on my animation project recently. The biggest breakthrough came from Aharon Rabinowitz’s tutorial on Lip-Syncing for Character Animation over at Creative Cow. I highly recommend it. I’m using his time-remap method for lip sync and eye movement, and it’s really helped me see how this project can be done as quickly as I originally hoped. I should be able to create a whole scene from multiple angles and only do the lip sync once: I’ll use the same frames for each mouth position in every angle (straight-on, profile, etc.) and copy and paste the keyframes to each one.
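
To make the idea concrete, here’s a toy model of the bookkeeping involved (my own sketch, not anything from the tutorial): every mouth shape lives on one frame of a mouth precomp, so a lip-sync pass reduces to a list of (time, frame) keyframes, and since every angle’s mouth precomp uses the same frame order, the same list pastes onto all of them. The phoneme names and frame assignments here are invented:

```python
# Toy sketch of time-remap lip sync: map phonemes to mouth-comp frames, then
# turn a hand-timed line of dialog into time-remap keyframes. All the names
# and numbers here are hypothetical.
FPS = 24.0

# One frame per mouth shape inside the (imaginary) mouth precomp.
MOUTH_FRAMES = {"closed": 0, "A": 1, "E": 2, "O": 3, "M": 4}

def remap_keyframes(phoneme_timing):
    """Turn (seconds, phoneme) pairs into (seconds, remap-time) keyframes."""
    return [(t, MOUTH_FRAMES[p] / FPS) for t, p in phoneme_timing]

# A hand-timed line; the identical keyframes go on every angle's mouth layer.
line = [(0.0, "closed"), (0.25, "A"), (0.5, "M"), (0.75, "O"), (1.2, "closed")]
for t, remap in remap_keyframes(line):
    print(f"keyframe at {t:.2f}s -> remap to {remap:.4f}s")
```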

Here’s my first test. It’s a line of dialog from the first episode. I did an arbitrary camera move and put Jennie in front of a background so I could start figuring out how those things work. I finally figured out that increasing the aperture size of an After Effects camera is a quick way to get the shallow depth of field I was looking for.

[qt:https://www.kylegilman.net/videos/JennieLipSyncWeb.mov 500 297]

Watch it in HD at Vimeo

There’s definitely an Uncanny Valley effect going on here. Since it’s so close to looking like a real person, I think it is guaranteed to look a little creepy no matter what I do. But I’m going to work with that, not against it.

Editing La Commedia – Part 1

I’m in Amsterdam for 5 weeks editing an opera + movie. It’s not a movie of an opera, and it’s not a movie with opera music. It’s an opera that also has a movie component projected during the performance, with performers on the stage who are also in the movie. The movie plays across one giant screen projected in 1080p25 and 4 smaller screens projected in anamorphic PAL SD. The whole extravaganza is directed by Hal Hartley, and the music is written by Louis Andriessen.

The movie portion was shot before I arrived on a Sony HDW-750P, which shoots 25psf: basically it puts the same frame in both fields of a 50i video stream, which is functionally (though not technically) the same as 25p. It also looks a lot like 24p, with none of the weird motion I usually see when I watch PAL video, although there are definite judder effects when people move too much from frame to frame in front of high-contrast backgrounds. I didn’t see things like that in Fay Grim, but I’m not sure what accounts for the difference. It’s HDCAM, so it’s 1920×1080 8-bit video. For the first time in my career we’re actually editing at 1080p, using Apple’s ProRes HQ in FCP 6. We captured from an HDCAM deck using a Kona LHe card. Everything goes to an Xsan, which so far has been able to fairly reliably play back 5 streams of ProRes HQ.
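
If the psf idea sounds abstract, here’s a toy demonstration of what “same frame in both fields” means: splitting a progressive frame into two fields and weaving them back together loses nothing, which is exactly why 25psf is functionally 25p. Just an illustrative sketch, nothing from our actual pipeline:

```python
# Toy demo of 25psf: both fields of a segmented frame are sampled from the
# same instant, so weaving them back together reconstructs the progressive
# frame exactly (unlike true interlace, where the fields are 1/50 s apart).
import numpy as np

def segment(frame):
    """Split a progressive frame into upper (even-line) and lower fields."""
    return frame[0::2], frame[1::2]

def weave(upper, lower):
    """Interleave two fields back into a full-height frame."""
    frame = np.empty((upper.shape[0] + lower.shape[0], upper.shape[1]),
                     dtype=upper.dtype)
    frame[0::2], frame[1::2] = upper, lower
    return frame

progressive = np.random.randint(0, 256, size=(1080, 1920), dtype=np.uint8)
upper, lower = segment(progressive)
assert np.array_equal(weave(upper, lower), progressive)  # lossless round trip
```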

That’s important because, in order to simulate the effect of 5 screens, we’ve broken our canvas up into 5 sections: one large picture in the middle for the big screen and one 25%-sized picture in each corner. In the theater, some of the screens will actually be perpendicular to the proscenium, but this is a good-enough approximation until we get our 3D holographic monitors going. I’ve set up 10 tracks in our timeline, one for video and one for graphics and other overlays for each screen. I’ve also saved 5 motion path favorites in FCP and assigned them to the numbers 1 through 5 on the keyboard, so as soon as I cut a clip into the timeline I select it, type the number of the screen it’s assigned to, and it moves to the appropriate position. We aren’t doing any rendering at all. This stuff is getting indistinguishable from magic.
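
For the record, the corner positions are just arithmetic on the canvas size. Here’s a sketch of how the five motion-path favorites work out on a 1080p canvas; the letter-to-position assignments are my guesses for illustration, not the actual show layout:

```python
# Rough math behind the 5-screen layout: a full-size center picture plus a
# 25%-scaled picture flush in each corner of a 1920x1080 canvas. FCP positions
# clips as center offsets from the canvas center. Screen letters are guesses.
CANVAS_W, CANVAS_H = 1920, 1080
SCALE = 0.25
small_w, small_h = int(CANVAS_W * SCALE), int(CANVAS_H * SCALE)

dx = (CANVAS_W - small_w) // 2  # horizontal offset that lands a corner flush
dy = (CANVAS_H - small_h) // 2  # vertical offset that lands a corner flush

positions = {
    "A (top left)":     (-dx, -dy),
    "B (top right)":    ( dx, -dy),
    "C (center, big)":  (  0,   0),  # full size, no scaling
    "D (bottom left)":  (-dx,  dy),
    "E (bottom right)": ( dx,  dy),
}

for screen, (x, y) in positions.items():
    print(f"{screen}: center offset ({x:+d}, {y:+d})")
```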

A word about ProRes HQ. It looks really amazing. At first I couldn’t see any difference between it and the HDCAM originals, but every once in a while now I’m seeing a small amount of aliasing on high-contrast diagonal lines, the kind of thing that’s always really tough for digital video. I’m willing to let that slide, though, because otherwise it’s fantastic. EDIT: Those problems are entirely an artifact of monitoring through the Kona card in 8-bit mode. In 10-bit, I don’t see any problems with the picture. We’re monitoring on a Sony LMD-2450W 24-inch LCD screen and everything looks incredibly sharp. I am now officially spoiled by HD.

YouTube Gets Better

The mystery is over. YouTube, in another in a long series of overdue moves, has revealed some real information about how people find your YouTube videos. In my case, it turns out that my most popular videos happen to be declared “related” to other, more popular videos. I’ve never questioned the success of the “Bad Webcam Sex” video, which is naturally connected to all kinds of filth that people are mistakenly looking for on YouTube instead of the entire rest of the Internet. What really surprised me was the sudden and unexpected rise of viewers for Two Night Stand. It turns out 90% of its viewers come from related videos, and nearly 50% of the traffic comes from being “related” to a video called “Fake Wife Swap,” which was made for one of those 24-hour film festival challenges.

The other big change—which still hasn’t quite worked itself out yet—is the so-called “high quality” option for YouTube videos. On certain videos (the criteria aren’t at all clear to me) you can add &fmt=6 to the URL and get a significantly better picture. Unfortunately I don’t know what you have to upload in order to get the higher quality. Is it a higher resolution or a higher bitrate I should be going for? Adding &fmt=6 to most videos gets you the usual blender-set-on-purée look. Some guidance from YouTube would be nice.
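
So, for a hypothetical video, the address ends up looking like http://www.youtube.com/watch?v=VIDEOID&fmt=6, with VIDEOID standing in for the actual ID.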

UPDATE: Brian Gary has an article at kenstone.net explaining the best settings to use to take advantage of YouTube’s higher quality options.

It’s Not Called Final Cut Pro HD

I read a lot of job postings on Craigslist and Mandy. I have RSS feeds for any “editor” jobs that pop up. Amazingly enough, I’ve ended up with a handful of great contacts from jobs I got through Craigslist. For some reason, among the many misspellings (trailor) and inaccuracies, the one that bugs me the most is requests for editors who can work with “Final Cut Pro HD.” Version 4.5—and only version 4.5—of Final Cut Pro was known as Final Cut Pro HD. It was a mistake to call it that at the time, and it’s just led to confusion. Anybody who is working in Final Cut Pro HD should upgrade. FCP 6 is much better.

My other pet peeve is people looking for free work who try to make me feel guilty for wanting money for my time and extremely specialized skills. The postings are always variations on the theme of “don’t apply to this job if you just got into this business to make money.” Well, you know what, I’ve worked for free on movies when I thought it would be worth my time, and I’ve asked people to work for me for free as well, but anyone with that attitude is not going to be fun to work with. You know who goes into a business hoping to lose money? People with complicated tax schemes, that’s who. Whenever you ask for someone’s time and effort, you need to compensate them. It doesn’t have to be money, but credit and a copy of the finished film are not compensation; they’re an obligation. A positive experience is compensation.