I’m working on an all-in-one tutorial on a very important area of video conversion and editing: handling interlaced content. This is of particular importance for iOS users, as the hardware is capable of rendering videos at 60 fps (technically, 60000/1001; I’ll use the “60” shorthand from now on) – the frame rate of U.S. TV broadcasts, taking interlacing into account. (Technically, U.S. TV broadcasts are only 29.97 fps, but interlacing allows them to render movement as if they were a 59.94 fps system. Outside the U.S., it’s a true 50 fields per second – again, 25 fps × 2 because of the interlacing – in most, but not all, PAL / SECAM areas.) However, when you convert your interlaced MPEG-2 content, most converters (e.g., HandBrake) simply discard half of this temporal (but not necessarily spatial!) information, making the resulting video have motion data for only 30 (technically, 30000/1001) frames per second – half of the interlaced rate of the original, which updated 59.94 times a second. In addition, the hardware itself can’t play back truly interlaced H.264 videos (only progressive ones); therefore, they need to be deinterlaced in some way. My new tutorial series, starting with this article, will help you learn how this should be done. (For the theory of interlacing, check out the Wiki. From now on, I assume you’ve read the Wiki article, know the difference between a “frame” and a “field”, and also know that the former is composed of two fields.)
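The frame and field rates above can be verified with a bit of arithmetic – a quick illustration (the fractions are the exact broadcast rates; everything else here is just shorthand):

```python
from fractions import Fraction

# Exact NTSC rates: the "29.97" and "59.94" shorthands stand for
# 30000/1001 and 60000/1001, respectively.
frame_rate = Fraction(30000, 1001)   # NTSC frames per second
field_rate = 2 * frame_rate          # two interlaced fields per frame

print(float(frame_rate))  # ~29.97
print(float(field_rate))  # ~59.94

# PAL/SECAM: exactly 25 frames, hence 50 fields, per second
pal_field_rate = 2 * 25
print(pal_field_rate)  # 50
```

A converter that keeps only one field per frame therefore halves `field_rate` back down to `frame_rate` – this is the temporal information loss discussed above.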
In order to properly assess the original fields of an interlaced video separately from each other, you’ll need a tool that can display not only separate frames (each consisting of two fields) but also the two fields contained in each frame. This is the best way of evaluating the deinterlacing and framerate-doubling capabilities and quality of the video converters I’ll elaborate on in my main deinterlacing tutorial.
Unfortunately, as with many other areas of video editing, this subject is barely documented – apart from some forum posts telling users to use Adobe After Effects (AE) for this (but nothing else, even though FCPX is also capable of it), there are absolutely no tutorials actually showing it in practice, only questions (an example). Therefore, I’ve decided to write one myself.
NOTE: this article is only meant for people seriously into video editing and conversion. If you don’t want to know some of the less-published secrets of Mac video editing, or you don’t want to convert your interlaced videos (DVB or camera recordings) to watch them in their full glory, at the maximum possible framerate and without any information loss (as opposed to just converting your footage with HandBrake, which throws away half of the temporal information), stop reading now: there won’t be anything of interest for you here. However, if you are interested in taking interlaced playback to the max, read on – you’ll find some brand new stuff: tutorials like this on the subject have never been published.
To be able to follow this tutorial with your own video editors, use the interlaced 576i50 MPEG-2 file HERE (in a TS container, as recorded directly from the air) or THIS (the same video with a video track only (demuxed via Project-X), in an m2v container). It’s a short excerpt from the Finland-Russia ice hockey match on 13/May/2011 as broadcast by the Finnish national TV. Everything I tell you will also work with the U.S. 1080i60 DVB recording HERE.
Unfortunately, there’s no way of making traditional video players show individual fields. VLC will only show composite, full frames, even when you decrease the playback speed to 50% (basically, the same trick as with After Effects below). The same goes for QuickTime with the m2v file (which, under Lion / Mountain Lion, unlike previous OS X versions, no longer requires the QuickTime MPEG-2 Playback Component – more on it later, as is also explained HERE). An example screenshot of VLC showing a composite frame (see the two individual fields composing this frame at the end of this article), making it completely impossible to assess the individual fields:
The Windows-based VirtualDub can’t import files with an MPEG-2 video track in them (at least not under XP), independent of the container used.
Fortunately, there still are solutions. Below, I show how you can access the individual fields in both Adobe After Effects (AE for short) and Apple’s Final Cut Pro X (FCPX).
1. After Effects
With AE, just import the file (File > Import). Then, right-click it in the upper left list and select “New Comp from Selection” – or just drag it into the upper center view. (Note that double-clicking, while it will show the video there, won’t create a composition from it, and you won’t be able to make it show up in the center area this way.) There, it’ll be shown without the anamorphic stretch from 5:4 (720*576) to 16:9 being applied. The following screenshot shows this, along with the list item you’ll need to right-click / drag:
(as with all the almost-full screen, wide screenshots in this article, click the above image for the original, much larger version!)
I’ve annotated the list item (of type “MPEG Optimized” – see the third column) you’ll need to right-click with a red rectangle. Note that it’ll be the only list item before you select “New Comp from Selection” (as opposed to the screenshot above, which already shows two). The list item of type “Composition” above it will appear after this.
Now, right-click the newly-autocreated Composition list item (the one above the annotated one in the previous screenshot) and select “Composition Settings…“. Open the Preset drop-down list and select “PAL D1/DV Widescreen Square Pixel” instead of the default “PAL D1/DV Widescreen“. The results (along with an annotation of the Preset menu you need to change):
Now, the video will be rendered as it should, in 16:9 widescreen:
You can step through the individual fields with the two buttons in the top right, annotated in the above screenshot. However, you’ll quickly find out you’re still seeing full frames (composed of two fields) and not the individual fields. To enable the latter, just go to Layer > Time > Time Stretch… and set Stretch / Stretch Factor to 200% (from the default 100):
Now, when you advance using the back / forward icons annotated above, you’ll notice that you need twice as many clicks in one direction to cover one second. You can also easily see you’re already looking at the individual fields (and not the frames composed of the two fields) by checking out the bottom-most pixel row of this particular video: it only has information in every second field, starting with the second. By the way, the explanation for this (why it all starts with the second field) is very simple: DVB broadcasts (both standard- and high-definition) use the upper-field-first approach; hence, the first field will not have any scanline at the very bottom to convey any information. There won’t be such alternating in the full-frame view (the one you saw when navigating the footage without time stretching), as it contains both fields.
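What the time-stretch trick exposes can be modeled in a few lines. A toy sketch (assuming the common convention that, 0-indexed, the upper field holds the even rows): the bottom-most row of a 576-line frame lands in the lower field, which an upper-field-first stream displays second – hence the alternation described above.

```python
# Model a frame as a list of rows and separate it into its two fields.
# Convention assumed here: upper field = even rows (0, 2, ...),
# lower field = odd rows (1, 3, ...).
def separate_fields(frame_rows):
    upper = frame_rows[0::2]
    lower = frame_rows[1::2]
    return upper, lower

frame = [f"row{i}" for i in range(576)]  # a 576-line PAL frame
upper, lower = separate_fields(frame)

# The bottom-most row (index 575, odd) sits in the LOWER field, which an
# upper-field-first stream displays second -- so the bottom row only
# carries information in every second displayed field.
print("row575" in lower)  # True
print("row575" in upper)  # False
```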
1.1 Effects of the wrong field order
Incidentally, you can quickly check out the field order: it’ll be shown in the Project pane if you select the original video (not the Composition list item). In the following screenshot, I’ve annotated the word “Upper” (meaning it’s an “Upper field first” video) with a red rectangle:
It’s really worth giving the other configuration (“Lower field first”) a try to see how it messes up the video. A wrong field order results in jerky, jittery playback, and the reason for it may not be evident to the untrained eye.
To override the default (and, for DVB recordings, correct) upper-field-first configuration, right-click the original video in the Project pane and select Interpret Footage > Main. Then, in the “Fields and Pulldown” section, change the “Separate Fields” drop-down list to “Lower field first”, as is also shown in the next screenshot:
Now, let’s step over every single field! The first will be the following (screenshot made with undocked Composition panel):
(Note that the current field number is also shown in this pane so that you always know which field you’re at; see my red-rectangle annotation above.)
This is the second field:
If you look closely, you’ll see that the ice hockey players have moved a bit backward; so have the flag-waving hands of the folks in the foreground. The third field will move them quite a lot forward, and so on.
Let’s take another spot in the video that shows this effect more discernibly: field 19 in the 2nd second, which is an odd field (also visible by its having true information on the lowermost pixel row):
This is field 20 (the next one):
As you can see, it was actually the previous field in the original stream – it’s only because we’ve manually overridden the field order that AE shows it as a later one.
Now, move to the next field (nr. 21):
As you can see (make sure you check out the motion of the letters in the background – I particularly recommend the letters “O” and “T” of the word “TISSOT” in the center right!), this field is the one that should follow field 19, and not the previously-shown one, field 20.
After going back to the correct field order (upper first), these three fields are also shown in chronological order by AE (make sure you check out the field number at the bottom to see I’m right!):
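The back-and-forth jumping seen in fields 19–21 can be sketched with a toy model (an illustration only, using hypothetical timestamps): with the wrong field order, the two fields inside each frame are swapped, so the display order keeps stepping backward and then leaping forward in time – which is exactly the jitter you see during playback.

```python
# Fields carry timestamps 0, 1, 2, ... (measured in field periods).
# Correct field order plays them monotonically; a wrong field order swaps
# the two fields inside every frame.
fields = list(range(8))            # timestamps of 8 consecutive fields

correct = fields                   # 0, 1, 2, 3, 4, 5, 6, 7
wrong = [t ^ 1 for t in fields]    # swap within each frame: 1, 0, 3, 2, ...

print(correct)
print(wrong)

# With the wrong order, every other step moves BACKWARD in time:
steps = [b - a for a, b in zip(wrong, wrong[1:])]
print(steps)  # [-1, 3, -1, 3, -1, 3, -1] -> the back-and-forth jitter
```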
I’ve also created two framerate-doubled (so that no temporal information gets lost), deinterlaced MP4 videos so that you can see the consequences of the wrong field order. (Again, you are unlikely to run into deinterlaced videos with this problem, as it requires manually overriding the default setting in most deinterlacing apps, if that is possible at all. Nevertheless, it’s good to know why some videos stutter so badly.)
For the conversion, as it allows for frame-doubled deinterlaced export, I used Episode. The right (upper-first) video is HERE (Episode Encoder settings screenshot HERE), the wrong (lower-first) video is HERE (Encoder settings HERE). In both cases, to produce quality output footage, I’ve overridden the default, very low H.264 quality setting to use VBR Quality at 50%.
Let’s take a look at some framegrabs. First, the right (upper-first) footage. I’ve advanced the frames in VLC – now that the video is deinterlaced (=progressive) and there are no fields any more, you can already use the “E” key in VLC to step over individual frames. As you can see, the order of frames is OK and the full temporal resolution is there: as opposed to blindly deinterlacing without doubling the framerate, all the original info is present – and even more, I should add, thanks to the very slow but excellent “Motion Compensation” used to create the missing information (the other field for every field). The framegrabs are all in order:
Now, for the wrong footage deinterlaced (and framerate-doubled) with the (wrong) Bottom-first setting:
As you may have noticed, I made the second image series in AE (instead of VLC) because it also displays the field / frame (here, frame, as we have progressive content) number. It’s also worth knowing that, while these two videos correctly identify themselves as 50 frames-per-second, AE still imports them as 25 fps. Therefore, you’ll still need to stretch the footage to 200% in order to iterate over all the frames, just as with the original, interlaced footage.
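What a framerate-doubling (“bob”-style) deinterlacer does can be sketched in a few lines. This is a toy model only – it simply line-doubles each field, whereas Episode’s motion compensation reconstructs the missing lines far more cleverly – but the bookkeeping is the same: every field becomes its own output frame, so N interlaced frames yield 2N progressive ones and no temporal information is discarded.

```python
# Toy "bob" deinterlace sketch: every field becomes a full output frame,
# so N interlaced frames -> 2N progressive frames; no temporal info lost.
# (Naive line doubling here; real motion compensation synthesizes the
# missing lines much better, but the frame count is identical.)
def bob_deinterlace(frames):
    out = []
    for frame in frames:                  # frame = list of rows
        upper = frame[0::2]               # even rows
        lower = frame[1::2]               # odd rows
        for field in (upper, lower):      # upper-field-first display order
            doubled = [row for r in field for row in (r, r)]
            out.append(doubled)
    return out

clip = [[f"f{n}_row{i}" for i in range(4)] for n in range(25)]  # 25 tiny frames
progressive = bob_deinterlace(clip)
print(len(clip), "->", len(progressive))  # 25 -> 50: frame rate doubled
```

Deinterlacing without the doubling would instead keep one output frame per input frame, throwing away half of these timestamps – the HandBrake behavior criticized at the start of the article.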
2. Final Cut Pro X
In addition to AE, it’s also possible to see at least half of the fields (the top ones) in Apple’s own Final Cut Pro X.
First, as Final Cut Pro X (unlike AE) is unable to read MPEG-2 content, you’ll need to convert your MPEG-2 footage to a format that also allows interlacing and is compatible with Final Cut Pro X. I recommend ProRes 422 for this. Note that the Apple Intermediate Codec (AIC) also supports interlacing, and AIC exports from MPEG Streamclip will be correctly read (and handled as interlaced content) by both AE and FCPX. Feel free to use it instead of ProRes 422 if you want.
For the MPEG-2 -> ProRes 422 / AIC conversion, you can use several apps. I provide tutorials for the two most popular alternatives: the (not counting the MPEG-2 plugin’s price) free MPEG Streamclip and Apple’s own $50 Compressor.
2.1 MPEG Streamclip
MPEG Streamclip (homepage; get the latest beta version, currently 1.9.3b8) will work just fine if you have previously purchased the QuickTime MPEG-2 Playback Component from Apple. (Unfortunately, it seems it’s no longer available from Apple, as it’s not needed by QuickTime in Lion / Mountain Lion any more. Still, MPEG Streamclip does need it.)
If you still have the QuickTime MPEG-2 Playback Component installer (a DMG file), after downloading MPEG Streamclip, start “Utility MPEG2 Component M. Lion.app” inside the Streamclip DMG file. It’ll ask you to also mount the QuickTime MPEG-2 Playback Component DMG file. When you mount the latter, Utility MPEG2 Component M. Lion will fetch the needed MPEG-2 codec from it and, from then on, you’ll be able to export from MPEG-2 footage as well.
It also reads TS files (unlike Apple’s Compressor; see next section below); that is, no need to pre-demux the MPEG-2 stream from your DVB (or camera) recordings.
Just open the input interlaced MPEG-2 file and, then, go to File > Export to QuickTime… . In the selection dialog shown, select Apple ProRes 422:
Don’t touch anything else – the default save mode will be interlaced, that is, all of your fields will be exported. Finally, click “Make movie” in the lower right corner and name your exported file.
2.2 Apple’s Compressor
It’s very easy to convert m2v files with Compressor. (It doesn’t accept TS files; therefore, you’ll need to demux the video track of your TS files with ProjectX first. Read THIS for more info on ProjectX.)
Just open Compressor, click Cancel (you’ll need a non-listed output format) and drag the m2v file to the top right pane. After this, select “Apple ProRes 422” under Apple > ProRes in the lower-left “Settings” pane (annotated). Drag-and-drop it to the top left pane, which (now, still) says “Drag Settings and Destinations here”:
Now that the Settings have been dragged there, just click “Submit” in the lower right corner of the top left pane (also annotated above). In the dialog now displayed, name the file and click “Submit“.
Note that, in the screenshot above, I’ve also annotated the “Inspector” pane (bottom center) to show you Compressor has correctly parsed the MPEG-2 input file to be interlaced. The output will also be interlaced, which you can (but don’t have to) also check by clicking the “101…011″ icon in the Inspector pane’s top bar and, on the new pane, the Video > Settings button, both annotated in the next shot:
Here, you can change the interlacing parameters in the lower left corner:
Now, you can directly import the output of Compressor / MPEG Streamclip to FCPX.
First, create a new project either by clicking the oval-annotated area in the next screenshot or via File > New Project:
(Simply importing a clip via Import Files (the rectangle-annotated area in the screenshot below, or File > Import > Files) generally results in the clips being considered progressive. More on this later.)
Make sure you leave “Set based on first clip” on when clicking OK in the next dialog:
Drag-and-drop the file to the lowermost timeline. It’ll show you the following dialog:
Make sure you set “Rate” to interlaced from the default 25p – in this case, to 25i. You may also want to change the resolution and anamorphic setting if needed (generally, FCPX doesn’t recognize anamorphic videos; therefore, you’ll need to set this manually).
Now, select the video at the bottom (just click it), on the timeline – NOT at the top, in the Clips pane! The reason for this is that FCPX always considers clips progressive (non-interlaced); only for videos on the timeline does it accept your setting, which – see the previous step – declares them to be interlaced. The video will immediately be rendered in the top center pane.
Now, you can quickly navigate the video both on the timeline and, even better, frame by frame (more precisely, top field by top field) with the two arrows annotated with ovals in the screenshot below.
Just click the switch in the upper right corner of the top center pane, annotated with a rectangle in the next screenshot. You’ll be presented a menu with a “Show Both Fields” item (also annotated):
(Note that I also annotated the background letters behind the ice hockey player. In this particular video, they’re one of the best objects to pay attention to when checking whether deinterlacing is done OK and no temporal info is lost! Also note that this screenshot shows it’s the video in the timeline that is selected, not in the Clips pane – see the yellow border around the video on the timeline.)
The “Show Both Fields” menu item is disabled by default, meaning you’ll only be shown one (the top) field. (But never the other!) If you do enable it (see the annotated checkmark in the following screenshot), the other (bottom) field’s contents will be superimposed on the current (top) one:
Again, you can’t check the top and bottom fields separately, and this approach is, therefore, inferior to AE’s solution. (The latter, in addition, also supports TS and other container formats.) Nevertheless, this only-one-type-of-field-can-be-rendered approach is still waaaay better than nothing at all – compare it with VLC’s or QuickTime Player’s inability to show any kind of fields.
By the way, on the screenshot above, I’ve also annotated the Clip Inspector on the top right: as you can see, it (despite your declaring it’s interlaced) considers the clip to be progressive. This is why I’ve told you not to select the clip, but the video on the timeline, to examine the (top) fields instead of the composite frames.
3. What about Apple’s iMovie?
Unfortunately, iMovie is unable to advance over individual fields. Actually, it doesn’t even have a way of setting a project to be interlaced – in the project settings, you can only set the framerate, but not whether the media is interlaced or non-interlaced:
(For comparison: the “bigger brother” FCPX lets users declare a project interlaced here.)
Needless to say, it can’t show individual fields either and will only show full frames. An example screenshot proving this (again, check out the letters in the background), also showing its video inspector (note: there isn’t anything interlacing-specific there):