I’ve updated my reel to reflect some of the great projects I’ve worked on in the past several months. Take a look, tell your friends.
If you’re making a fiction film, and you’re looking for an editor, the number of hours of footage you shot is not important.
The Cheech & Chong concert movie is going very, very well. I recently came across a press release that Panasonic put out about our use of the new 3700 P2 cameras. My name is mentioned briefly at the end. Above is one of the photos included in the story, taken during Cheech’s performance of “Earache My Eye.”
The movie I’m working on now involves two shows, each running 5 simultaneous 1080p angles. The source material is AVC-Intra shot with 3700 VariCams. When I brought the footage into FCP, I converted it to ProRes HQ because I thought our system could handle it. We bought a CalDigit HD Element just for this movie, attached to a year-old 8-core Mac Pro. We played through both shows, watching all 5 angles at once while playing out 1080i through the Intensity Pro, and never skipped a frame. Once I started editing, though, trouble appeared almost immediately. Every once in a while a dark green frame would suddenly appear in the Viewer or Canvas window, and FCP would usually crash immediately after. Sometimes it wouldn’t, but eventually, as soon as I saw a green frame, I would just save and shut down the program.
I went through driver updates on the Intensity and the Element, and tried rolling back QuickTime to version 7.5.5 and FCP to 6.0.4 (which is hard to do, since Apple doesn’t let you download old update files. Save those things, kids.) Nothing helped. But the word on the street is that ProRes HQ is still pretty damn fancy: cutting 5 simultaneous angles of it is a bit too much for some component of the computer to handle, and you’re unlikely to see any difference between HQ and SQ anyway. So I re-transferred everything to ProRes sans HQ. No dice. Still crashing every 15 minutes, although the green frame showed up less often.
On Friday I decided to use the Media Manager to transcode everything to 720p DVCPRO HD. It was estimating 26 hours of encode time when I left. This morning I arrived at work with a fully functional project with everything properly linked up, and it plays perfectly. I edited all day without a single crash or green flash. Even better, I’m able to play out the multiclips to the HD monitor at full quality. With ProRes I could only do medium or low. Hooray for DVCPRO HD! And hooray for a fully functional Media Manager!
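For a back-of-the-envelope sense of why the transcode helped, here’s a sketch comparing aggregate data rates for 5 simultaneous streams. The figures are approximate published rates, and actual ProRes rates vary with frame size and frame rate, so treat these as my assumptions rather than measurements:

```python
# Rough aggregate bandwidth for 5 simultaneous angles, per codec.
# Rates below are approximate published figures in Mbit/s (assumptions).
rates = {
    "ProRes 422 HQ (1080)": 220,
    "ProRes 422 (1080)": 147,
    "DVCPRO HD": 100,  # fixed-rate codec
}

angles = 5
for codec, mbps in rates.items():
    total = mbps * angles
    print(f"{codec}: {total} Mbit/s total, ~{total / 8:.0f} MB/s from disk")
```

Whatever component was choking, it had less than half as much data to move once everything was DVCPRO HD.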
So one of those things that doesn’t come up much, but is really important, is the prohibition against letting timecode go past midnight. Once it gets past 23:59:59:23 (or :29 or :24, depending on your timebase), it rolls over to 00:00:00:00. If that happens, how is your timecode-based editing system supposed to know that the footage with lower numbers comes after the footage with higher numbers? It’s a timecode break, and computers aren’t good at guessing.
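The rollover is plain modular arithmetic. A minimal sketch, assuming a simple non-drop-frame 24 fps timebase (the frame count per second is the only thing that changes for other timebases):

```python
# Why time-of-day timecode "breaks" at midnight: the count wraps to zero,
# so later footage carries smaller numbers. Non-drop-frame 24 fps assumed.

FPS = 24

def tc_to_frames(h, m, s, f, fps=FPS):
    """Convert HH:MM:SS:FF to an absolute frame count."""
    return ((h * 60 + m) * 60 + s) * fps + f

def frames_to_tc(frames, fps=FPS):
    """Convert a frame count back to HH:MM:SS:FF, wrapping at 24 hours."""
    frames %= 24 * 60 * 60 * fps  # the wrap: midnight resets to zero
    s, f = divmod(frames, fps)
    m, s = divmod(s, 60)
    h, m = divmod(m, 60)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

last = tc_to_frames(23, 59, 59, 23)  # last frame of the day at 24 fps
print(frames_to_tc(last))      # 23:59:59:23
print(frames_to_tc(last + 1))  # 00:00:00:00 -- one frame later, smaller number
```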
I ran into this problem recently with a multi-camera P2 shoot using time-of-day timecode on a 2-hour show that started at 11pm. The timecode started at (approximately) 23:00:00:00 and ended at 01:00:00:00. That’s no good for FCP. What we should have done was start the timecode at 11:00:00:00 instead, but the show started late, we were supposed to be done before midnight, and nobody remembered that shooting past midnight would be a problem. The bigger complication was that since these were 2-hour AVC-Intra clips recorded on P2 cards, everything was spanned over about 17 clips on each camera. And because the timecode reset, FCP couldn’t figure out how to combine the spanned clips into the one clip I wanted.
I could have imported all the individual clips via Log & Transfer, laid them out on a timeline, and then exported that timeline as one big QT file, but that would take forever to import and export since the files are so big. What I did instead, thanks to an idea from David Wulzen at Creative Cow, was edit the start timecode in the Contents/Clip/*******.xml files for all 70 of the clips I wanted to span, and now FCP joins them up with no problem. Hooray for the Internet!
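Doing that by hand 70 times gets old, so here’s a hypothetical sketch of batch-editing in the spirit of the fix above. The element name `StartTimecode`, the folder layout, and the one-hour shift are all assumptions, not the P2 spec; inspect one of your own clip XML files first and adjust to match what’s actually in there:

```python
# Hypothetical batch edit of start timecodes in P2 clip XML files.
# "StartTimecode" is an ASSUMED tag name -- check your own files first.
import glob
import xml.etree.ElementTree as ET

def shift_start_timecodes(clip_dir, fix):
    """Apply fix(old_tc) -> new_tc to every clip XML in clip_dir."""
    for path in glob.glob(f"{clip_dir}/*.xml"):
        tree = ET.parse(path)
        for el in tree.getroot().iter("StartTimecode"):  # assumed tag name
            el.text = fix(el.text)
        tree.write(path, encoding="utf-8", xml_declaration=True)

def back_one_hour(tc):
    """Rewrite e.g. 00:10:00:00 to 23:10:00:00 so post-midnight clips keep
    ascending instead of wrapping to zero. Pick a shift big enough that no
    clip in the batch ends up crossing midnight."""
    h, rest = tc.split(":", 1)
    return f"{(int(h) - 1) % 24:02d}:{rest}"

# shift_start_timecodes("CONTENTS/CLIP", back_one_hour)
```

Work on a copy of the card contents, obviously.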
We shot 3 days with the Canon 5D last week. It looks awesome. I highly recommend it. Here was our workflow:
1. Record separate audio at 48048 Hz, stamped as 48000 Hz. This is possible with some audio recorders, even if you’ve never noticed the option before. Check your manual.
2. Copy contents of CF card to hard drive.
3. Convert h.264 QTs to ProRes HQ QTs using Compressor.
4. Use Cinema Tools to batch conform the QTs from 30 fps to 29.97 fps.
5. Sync in FCP.
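The reason steps 1 and 4 work together: both the video conform and the 48048 trick slow the media down by exactly the same factor, 1000/1001, so picture and sound stay in sync after the conform. A quick check of the arithmetic:

```python
# Both the Cinema Tools conform (step 4) and the 48048-Hz audio trick (step 1)
# slow playback by the same exact factor, so sync is preserved.
from fractions import Fraction

video_ratio = Fraction(30000, 1001) / 30  # "29.97" fps is really 30000/1001
audio_ratio = Fraction(48000, 48048)      # stamped rate / recorded rate

assert video_ratio == audio_ratio == Fraction(1000, 1001)

# Over an hour of material, that slowdown adds exactly 3.6 seconds:
print(float(3600 / video_ratio - 3600))  # 3.6
```

If you recorded the audio at a true 48000 Hz instead, it would drift ahead of the conformed picture by those 3.6 seconds per hour.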
I’ve spent a lot of time on this blog writing about 24p editing because it’s so complicated and misunderstood. Last year I wrote about shooting 24p but editing 29.97, arguing that nobody would notice the difference. This year I want to write about the reasons to go through the trouble to shoot and edit 24p. And, as always, 24p = 23.98 fps.
1) Blah, blah, blah, film blowups. My big pet peeve about 24p discussions is the obsession with film blowups. First there was the completely false idea that shooting 24p “advanced” was somehow better than 24p “regular” for doing film blowups. I hope nobody believes that anymore. As long as you use the right workflow, there is absolutely no difference in the end product. The more pervasive rumor is that the only time it makes sense to edit in 24p is when you’re going to do a film blowup. This is also false, for reasons I’ll get into below. And who the hell is wasting their money by blowing video up to film anymore?
2) Computers. Here’s my big reason for progressive 24p editing. A lot of video is made for computer displays these days, and computers and interlacing go together like two things that don’t go together. If you’re going to show your film on the web, it’s going to look a lot better at 24p than 29.97 with pulldown in it. And considering that a lot of web video is higher quality than DVD at this point, you’ll really appreciate the boost.
3) DVDs. If you make a 23.98 QuickTime and compress it to MPEG-2, it will play perfectly on any DVD player. If your DVD player can upconvert and output 24p via HDMI, it might actually play it that way on your 24p HDTV. If you play the DVD on a computer, you won’t see any interlacing. And, since DVD encoding is generally based on average megabits per second, the fewer frames you have in a second, the more data goes to each frame.
4) Educational. Editing 24p video has taught me so much about the way video works. I worry that computers are so easy to use these days that kids who didn’t grow up having to create config.sys boot menus in order to play Doom won’t really get under the hood of their computers and learn what they’re actually doing. In the same way, if video just works (like it used to), you could edit for years without really knowing what you’re doing on a technical level. I like to know how things work, and I think it’s valuable for more people to know. The proliferation of incompatible video formats may be infuriating, but it forces people to learn about technology in a really useful way. It also helps me pay my rent on time every month.
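The arithmetic behind point 3 is simple enough to sketch. Assuming a fixed average bitrate (the 6 Mbit/s here is my assumption for illustration, not a spec number), the per-frame budget grows as the frame rate drops:

```python
# At a fixed average bitrate, fewer frames per second = more bits per frame.
avg_mbps = 6.0  # assumed average MPEG-2 video bitrate; real discs vary

for fps in (29.97, 23.976):
    bits_per_frame = avg_mbps * 1_000_000 / fps
    print(f"{fps} fps -> {bits_per_frame / 1000:.0f} kbit per frame")
```

That works out to roughly 25% more data per frame at 23.976, for free.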
Through no fault of my own, I’m quickly getting a lot of experience cutting recordings of live performances. Last September I started small with a bunch of online videos recapping New York Fashion Week. It was all single-camera footage, with a lot of quick-cutting and jump-cutting. I think it was the first time I ever found myself using the quick-flash-to-white transition so popular with the kids today.
In October I started cutting some Jerry Seinfeld stand-up performances, which were shot with three cameras. I synced up the three cameras and used multicam editing in FCP, which turns editing into a totally different animal. Now, rather than assembling a scene shot by shot, you can kind of wade through the stream of images and go with your gut to pick the nicest angle of the ones available, then revise to your heart’s content. I had cut some stand-up before, in my very early film about Tim McIntire, but it was all montage-based, with very little spatial continuity between shots. Learning where to cut in Jerry’s movements was very interesting. He’s not a relentless pacer like Chris Rock, but on particular beats he turns his body to address different parts of the audience, and he does move back and forth a bit. It’s something he’s obviously thought a lot about, and as an editor it’s not something I wanted to get in the way of. I wanted the changes in camera angle to stand in for, and highlight, the changes in focus he’s giving to the various parts of the room. At first I wanted him to almost complete a turn before I cut, but I found that anticipating a move by a few frames could be very effective, so he turns into the new angle rather than already being there. Of course cutting in the middle of the action often works too. It all depends on the context.
Almost immediately after I started the Seinfeld project, I cut the film version of Hal Hartley’s staging of Louis Andriessen’s new opera La Commedia. In the spring I edited a 5-screen movie that was projected during the performances, and two of those performances were filmed with two cameras. So the material we had to work with was the original movie footage, and up to 4 different angles of the performance. Unfortunately, good audio recordings of the shows that were videotaped did not exist; only the premiere had a good audio mix. So I had to get very creative with the editing. I could only hold on a performer singing for a few seconds (if I was lucky) before the shot would start to drift, and I’d have to slip each shot a few frames in order to keep everything in something close to sync. There was always the question of whether to show some of the stage or some of the movie. In the theater you can have 10 different things going on at once, but in the film we just had one at a time. We considered doing split-screen for a while, but it never really seemed like the right thing to do. The whole thing is confusing enough as it is, since there are two related, but slightly different, plots going on at the same time between the movie parts and the staged parts. Eventually we worked out a method, and I think it was by far the best work I’ve done on anything.
Next up, and very exciting, is a recording of Cheech & Chong’s Light Up America tour. In March they’re going to shoot two performances with around 5 cameras each, plus some backstage action with the two gentlemen. I’ll be editing the whole thing myself. I can’t say too much more about it, but I think it will be a very cool project. Definitely the highest profile thing I’ve worked on. There will be a lot more angles to work with for the performance, and it’s all being supervised by a great DP. I expect we’ll have good, in-sync sound recordings as well.
Over the past couple months I’ve had a wonderful opportunity to check out two cutting-edge tapeless workflows, both of which seemed at first glance to be difficult to work with in Avid. First was the Arri D-21 with an S.two digital magazine. Before I had a chance to look at it I was actually told that it would not work with Avid. I was pretty sure there’s always a way to make anything work, so I went in and looked at it firsthand.
S.two’s system records to a heavy-duty hard drive array that can then be plugged into a fancy dock that processes the video and allows you to ingest into your computer via HD-SDI in real time. Essentially it turns a tapeless workflow into a tape workflow. You get deck control and everything. The one advantage FCP has over Avid in this workflow is that the mag automatically generates an FCP XML file that allows easy batch digitizing. What you get with Avid is more work for the assistant editor, because you have to enter the start and stop times, names, and whatnot manually. Why they didn’t use the cross-platform ALE format, I don’t know, but it’s really not a big issue. It’s just like working with tapes.
With the RED workflow there’s absolutely nothing anywhere close to “realtime” processing. What you get with RED is a lot of waiting. It’s like processing 35mm film. It takes time. For some projects this isn’t really a big deal; for others it is. RED and FCP have been like two peas in a pod from the beginning, but Avid is getting things worked out nicely. The disadvantage Avid has at the moment is that it doesn’t read metadata from QuickTime files. If you were to import any QT file into Avid, its timecode would always start at 01:00:00:00. But the new REDRushes, which comes with REDAlert, can create an ALE for easy batch importing.
The situation as I see it right now, with all these crazy workflows being introduced, is that all you’re still doing as an offline editor is generating a list of numbers for the conform. In most cases, Avid and FCP are equally good at doing that. And if you feel freer and more comfortable creating and actually editing in Avid, you should be working in Avid, no matter what anyone says about how well FCP handles newer tapeless workflows. Of course, that’s assuming you have someone in the production—such as myself—who actually understands what’s going on under the hood.