Report on field synchronisation problems that can cause interlacing artefacts when encoding telecine material from video tape.
Background
Many archives are attempting to encode material from videotape that originated on film. The film may have been transferred to videotape by telecine up to 30 years ago: the first transfer could have been to 2” videotape, with subsequent copies made to 1”, UMatic, BetaSP and newer digital formats.
After encoding, some material may show exaggerated interlacing artefacts, often described as combing, tearing or ghosting. When the affected MPEG files are played back, a secondary ghost image can be seen in frames that contain both high contrast and movement. A still taken from one of these files clearly shows the ‘comb’ effect along high-contrast edges in the image.
The problem originates when the film is transferred to video on a telecine machine. Older telecine machines did not allow the operator to select the video field dominance with respect to the film frames, and later machines may not have been correctly configured by the operator. When the telecine field dominance is incorrect, the MPEG encoder builds every frame from two fields scanned from adjacent film frames, instead of two fields scanned from the same film frame.
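To make the pairing concrete, here is a minimal sketch (in Python, using illustrative film-frame labels rather than real video data) of how a one-field offset changes which fields end up in each encoded frame:

```python
# A minimal sketch of how a one-field offset in telecine field dominance
# mixes fields from adjacent film frames. The film-frame labels ("A", "B", ...)
# are illustrative only.

def scanned_fields(film_frames):
    """Each film frame is scanned to two video fields: ("A", 1), ("A", 2), ..."""
    return [(frame, parity) for frame in film_frames for parity in (1, 2)]

def encoded_frames(fields, offset):
    """Pair consecutive fields into encoder frames, starting at `offset`."""
    usable = fields[offset:]
    return [tuple(usable[i:i + 2]) for i in range(0, len(usable) - 1, 2)]

fields = scanned_fields(["A", "B", "C", "D"])

# Correct dominance: each encoded frame holds two fields from the SAME film frame.
print(encoded_frames(fields, offset=0))
# -> [(('A', 1), ('A', 2)), (('B', 1), ('B', 2)), (('C', 1), ('C', 2)), (('D', 1), ('D', 2))]

# Incorrect dominance (one field out): each encoded frame mixes two ADJACENT
# film frames, which is what produces the combing/ghosting on movement.
print(encoded_frames(fields, offset=1))
# -> [(('A', 2), ('B', 1)), (('B', 2), ('C', 1)), (('C', 2), ('D', 1))]
```

With the correct offset every encoded frame contains two fields from a single film frame; with the wrong offset every frame straddles two film frames, which is exactly the condition that produces combing wherever there is movement.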
There is no simple fix for this problem.
What is happening?
In order to understand what has happened with this image, it is necessary to understand the way that video images are organised. Although we tend to talk about video in terms of frames, each frame is actually made up of two separate fields. Each field contains alternate lines from the image, and the two fields are combined, in a process called interlacing, to display a full frame. To complicate matters further, each field is captured at a different moment in time, so the two fields differ temporally as well as spatially.
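As a rough illustration of that structure, the following sketch (using NumPy and a synthetic 8×8 array as a stand-in for a frame) separates a full frame into its two fields by taking alternate lines:

```python
import numpy as np

# A synthetic 8x8 array standing in for one full video frame.
frame = np.arange(64).reshape(8, 8)

field_1 = frame[0::2, :]   # lines 0, 2, 4, 6 -> one field
field_2 = frame[1::2, :]   # lines 1, 3, 5, 7 -> the other field

# In real interlaced video these two fields are also captured at different
# moments (1/50th of a second apart in PAL), so they differ in time as well
# as in which lines of the picture they carry.
print(field_1.shape, field_2.shape)   # (4, 8) (4, 8)
```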
To illustrate this, look at the sequence below. Assume that these images of a ball flying past were captured with a film camera taking 50 pictures a second:
The time delay between each image is 1/50th of a second and each image shows the ball in a different position.
Now, say you capture the same sequence with a modern PAL video camera (older tube cameras added even more complications compared to modern CCD cameras). PAL video is captured at 50 fields per second, so you might think the sequence would be the same as the one above. This is incorrect. The camera does record 50 images per second, but each image contains only half the scan lines of a complete picture, like so:
Note that the odd numbered images contain one set of lines and the even numbered images contain the other set of lines.
The images captured by the video camera do not look like this:
And they do not look like this:
Every field is captured at a different instant in time, so any movement in the image area will show up in adjacent fields.
N.B. Throughout this document we do not need to distinguish between composite, RGB, YUV or digital SDI video signals, and we will use the terms PAL and NTSC to refer to 625-line, 50 Hz and 525-line, 60 Hz television systems respectively.
How does interlacing affect digitisation?
When video is played back on a video monitor the first field in the sequence is drawn on the screen, starting on the top line and missing out every other line. There is a short delay and then the second field is drawn, starting on the second line down from the top and again skipping every other line. The sequence is repeated continuously.
With standard CRT televisions and video monitors, the image is drawn by aiming a stream of electrons at the screen to make a dot fluoresce. As soon as a dot is made to fluoresce, the fluorescence will begin to die away. The lines on screen are drawn in order from top to bottom and by the time the last line at the bottom of the screen is drawn, the lines at the top will be much dimmer.
When the video standards were designed, each video frame was split into two fields so that the viewer would not be able to see this decay in brightness. This has the effect of refreshing the screen twice for every frame which keeps the brightness even and the viewer’s persistence of vision takes care of integrating the two fields into a high resolution image.
Interlaced video is a very good solution for reducing flicker on an analogue TV set, but it causes problems when you want to digitise a frame of video. If you want to capture a still frame of video and save it as a JPEG image, you need both fields from the video frame to be combined into a single image.
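A minimal sketch of that combination step, often called “weaving”, is shown below; the field arrays are synthetic stand-ins rather than data from a real capture:

```python
import numpy as np

height, width = 8, 8

# Synthetic stand-ins for the two fields of one frame (half the lines each).
field_1 = np.full((height // 2, width), 100, dtype=np.uint8)   # lines 0, 2, 4, ...
field_2 = np.full((height // 2, width), 200, dtype=np.uint8)   # lines 1, 3, 5, ...

# "Weave" the fields back together into a single full-height still image.
still = np.empty((height, width), dtype=np.uint8)
still[0::2, :] = field_1
still[1::2, :] = field_2

# If anything moved between the two field capture times, this woven still
# shows the comb/tearing artefact along the edges of the moving object.
print(still[:4])
```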
Unfortunately, no matter which pair of fields you choose to combine, the resulting image still looks quite bad. This artefact is variously known as “tearing”, “fingering” or “combing”.
If you capture only one field, you will be missing every other line. You could duplicate the lines to restore the height of the image, but this makes the still image blocky:
You could instead interpolate between the lines to make the image look better, but this makes the image softer and it still means discarding half of your image data:
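The sketch below illustrates both single-field options using NumPy on a small synthetic field; the array values are illustrative only:

```python
import numpy as np

# One field: half the lines of the original frame (values are illustrative).
field = np.array([[10, 10, 10],
                  [50, 50, 50],
                  [90, 90, 90]], dtype=np.float32)

# Option 1: duplicate every line to restore the full height (blocky result).
doubled = np.repeat(field, 2, axis=0)

# Option 2: interpolate a new line between each pair of original lines
# (smoother but softer, and still built from only half the image data).
interpolated = np.empty((field.shape[0] * 2 - 1, field.shape[1]), dtype=np.float32)
interpolated[0::2] = field
interpolated[1::2] = (field[:-1] + field[1:]) / 2

print(doubled)
print(interpolated)
```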
There are a number of techniques that try to use all the available image data to produce a good still frame. Many of these combine multiple fields and attempt to detect areas where artefacts will be a particular problem. These techniques all come under the heading of “de-interlacing”.
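As one illustration of the general idea (not any particular product’s algorithm), here is a simplified motion-adaptive de-interlacing sketch in Python/NumPy: it keeps the woven detail where the two fields agree and falls back to interpolation where they appear to disagree because of movement. The threshold and the crude line-difference motion detector are assumptions made for the sketch:

```python
import numpy as np

def deinterlace(field_1, field_2, motion_threshold=20.0):
    """field_1 holds the even lines, field_2 the odd lines (half height each)."""
    f1 = field_1.astype(np.float32)
    f2 = field_2.astype(np.float32)

    # Spatial estimate of each odd line from the even lines around it.
    estimate = np.empty_like(f2)
    estimate[:-1] = (f1[:-1] + f1[1:]) / 2
    estimate[-1] = f1[-1]

    # Crude motion detector: a large difference between the real odd line and
    # the spatial estimate suggests movement between the two field times.
    motion = np.abs(f2 - estimate) > motion_threshold

    # Keep real detail where the fields agree, interpolate where they do not.
    odd_lines = np.where(motion, estimate, f2)

    out = np.empty((f1.shape[0] * 2, f1.shape[1]), dtype=np.float32)
    out[0::2] = f1
    out[1::2] = odd_lines
    return out

# Example: a bright vertical bar that moved between the two field capture
# times; the combed bar at column 3 is suppressed in the output.
f1 = np.zeros((4, 6)); f1[:, 1] = 255
f2 = np.zeros((4, 6)); f2[:, 3] = 255
print(deinterlace(f1, f2))
```

Real de-interlacers are far more sophisticated (edge-directed interpolation, per-block or per-pixel motion compensation), but they all face the same trade-off between keeping detail and hiding the comb.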
This artefact is obvious when you try to make a still image, but it is also visible when video is digitised using a frame-based compression system such as MPEG-2. The artefacts are often more visible when the video is displayed on a computer monitor. Unlike video monitors and TVs, computer monitors are non-interlaced and the screen is refreshed much more frequently, giving the steady, crisp image needed for most computer software.
When digital video is displayed on a computer monitor it is played at a rate of 25 frames per second (for PAL material), and each frame is made by combining two adjacent fields. Because the two fields are displayed simultaneously, and because computer monitors usually have a higher resolution than video monitors, comb artefacts can often be seen in areas of high contrast and movement.
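One way to see why these areas stand out is to measure how strongly alternate lines disagree with their vertical neighbours. The simple metric below is an illustrative choice, not a standard measure: it scores near zero on a smooth progressive image and much higher on a frame woven from fields that differ, as mis-paired or moving fields do:

```python
import numpy as np

def combing_strength(frame):
    """Mean absolute difference between each line and the average of the
    lines directly above and below it; combed frames score highly."""
    above = frame[:-2].astype(np.float32)
    below = frame[2:].astype(np.float32)
    middle = frame[1:-1].astype(np.float32)
    return float(np.abs(middle - (above + below) / 2).mean())

# A smooth progressive gradient scores near zero...
clean = np.tile(np.arange(8, dtype=np.float32).reshape(8, 1), (1, 8))
print(combing_strength(clean))     # 0.0

# ...while a frame whose odd lines came from a different moment scores highly.
combed = clean.copy()
combed[1::2] += 100                # simulate the second field having changed
print(combing_strength(combed))    # 100.0
```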
To overcome this, many computer video players process the image during playback to reduce the artefacts using filters, or try to “de-interlace” the video to produce 25 “progressive” frames per second.