How JPEG and MPEG picture compression algorithms work

2 February, 2010 at 12:38 am

While their names sound similar apart from the first letter, JPEG and MPEG refer to the names of two unrelated organisations.  The Joint Photographic Experts Group (JPEG) created the JPEG image file format (as well as JPEG 2000, a different format), while the Moving Picture Experts Group standardises a large number of audio, video, container and stream formats for transmitting digital audio and digital video, which are grouped into generations (MPEG 1, MPEG 2, MPEG 4).  For instance, the popular MP3 audio file format is part of MPEG 1 (MPEG 1 audio layer 3), DVDs and digital television usually use standards from MPEG 2, and the popular “MPEG 4” or “DivX” video format is standardised as part of MPEG 4 (MPEG 4 part 2).

All of these formats, including JPEG images and all types of MPEG video, work in a very similar way, which I’ll call block-based DCT image compression.  This involves chopping the picture into blocks of 8 pixels by 8 pixels and compressing each block separately, using a mathematical transform called the Discrete Cosine Transform (DCT).

Here are some of the steps involved in JPEG and MPEG image compression:

Colour space conversion and resampling

If necessary, the image is first converted into the correct colour space and resolution.  This involves separating the colour image into 3 separate images or ‘channels’: a luma channel which carries the brightness information (like a black and white image), and two chroma channels which carry the colour information.  Furthermore, the luma channel is usually given double or quadruple the resolution of the chroma channels; this saves some storage space already, and works because humans are much better at seeing intricate changes in lightness than in colour – which is why we prefer reading light text on a dark background (or vice versa) to red text on green or similar.
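As a minimal sketch of this step: the conversion below uses the BT.601 luma weights that JPEG (via JFIF) uses, and the `subsample_420` helper – the name is mine – halves the chroma resolution by plain averaging, where real encoders may filter more carefully.

```python
def rgb_to_ycbcr(r, g, b):
    # BT.601 luma weights, as used by JPEG/JFIF; chroma is centred on 128
    y  =           0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def subsample_420(channel):
    # 4:2:0 chroma subsampling: average each 2x2 group of samples into
    # one, halving the resolution both horizontally and vertically
    h, w = len(channel), len(channel[0])
    return [[(channel[y][x] + channel[y][x + 1] +
              channel[y + 1][x] + channel[y + 1][x + 1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]
```

Notice that a pure grey input (equal R, G and B) produces chroma values of exactly 128 – the “no colour” midpoint.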

Separate into 8×8 blocks and perform Discrete Cosine Transform

Each channel of the image is then divided into blocks of 8 pixels by 8 pixels (64 pixels per block).  A mathematical transform called the Discrete Cosine Transform (DCT) is then applied to each block.

Before the discrete cosine transform, you have 64 values, each representing a pixel in the block.  After the transform, you have 64 values, each representing the strength of a particular frequency of detail in that block.  It is like taking a piece of audio and constructing a spectrum analysis, where you get a frequency vs amplitude curve, except the curve is more of a surface, because it’s in two dimensions.  The very first resulting value represents a frequency of ‘zero’, ie it is proportional to the average value of all of those original 64 pixels.  The values are called coefficients: values which you can feed back into the inverse DCT function to get the original pixel values.
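A naive implementation of the forward transform, just to make the idea concrete – this is the standard type-II DCT with orthonormal scaling, written as an O(n⁴) loop for clarity; real codecs use fast factorisations instead.

```python
import math

def dct_2d(block):
    # Naive 8x8 type-II DCT with orthonormal scaling.
    # block is an 8x8 list of pixel values; returns an 8x8 list of
    # coefficients, with out[0][0] the 'zero frequency' (DC) value.
    n = 8
    def c(k):
        # Normalisation factor: the zero-frequency basis gets a smaller weight
        return math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
    out = [[0.0] * n for _ in range(n)]
    for v in range(n):          # vertical frequency
        for u in range(n):      # horizontal frequency
            s = sum(block[y][x]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for y in range(n) for x in range(n))
            out[v][u] = c(u) * c(v) * s
    return out
```

For a completely flat block, the only non-zero output is that very first ‘zero frequency’ value; any texture in the block spreads energy into the other 63 coefficients.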

Storing these coefficients is no more efficient than storing the original pixel values.  In fact, due to rounding error you need greater precision to store them than the pixel values.  How, then, do we achieve any compression?  Well, for that, you need to throw away some data.

Quantisation

Quantisation is the process of reducing the precision of a number so that it is less accurate, but takes fewer digits (that is, bits) to write or store.  It is, quite simply, dividing a number by a certain value, and throwing away the remainder.  When you multiply by that value again in an attempt to get the original number back, chances are the result is close to, but slightly different from, the number you started with.  The larger the divisor, the greater the strength of the quantisation and hence the greater the likely loss in precision once you multiply the number again.

So, the previous compression step left us with 64 coefficients arranged in an 8×8 matrix, representing the strengths of different horizontal and vertical frequencies within that 8×8 block.  Now, we apply quantisation to these coefficients, reducing their precision by dividing them by a specific number and throwing away the remainders.  When we save a JPEG image or MPEG movie we can usually set either a quality slider or a quantisation factor – these just vary the number we divide by and hence how much precision we retain.
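In its simplest form, with a single divisor `q` standing in for one entry of the full 8×8 quantisation table a real codec would use:

```python
def quantise(coeffs, q):
    # Divide each coefficient by q and keep only the nearest integer;
    # the fraction thrown away is exactly the precision we give up
    return [[round(c / q) for c in row] for row in coeffs]

def dequantise(levels, q):
    # Multiply back: the results are close to, but usually not exactly,
    # the original coefficients
    return [[v * q for v in row] for row in levels]
```

For example, a coefficient of 37 quantised with q = 10 becomes 4, and dequantises to 40 – close to, but not equal to, what we started with.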

JPEG and MPEG compression can also apply different amounts of quantisation to different frequency coefficients within a single block, or different amounts per block in an image.  When this is done, it is all in an attempt to get the best-looking picture, as perceived by human eyes, for a given amount of compression.

Encoding the coefficients

Depending on the amount of quantisation applied, some or most of the smallest coefficients will have been reduced to zero: they were small and insignificant to start with, and dividing them during quantisation left a value that rounds to zero.  It’s likely that a typical block will only have a few non-zero coefficients remaining after quantisation.

The coefficients, which were laid out in an 8×8 matrix, are now picked out in a specific order for encoding: usually a “zigzag pattern”, chosen so that the smallest coefficients will likely come last; any zero coefficients at the end are not encoded at all.  The remaining coefficients then undergo some form of lossless coding, which minimises the bits they take up.  The type of coding chosen varies between sub-types of MPEG video, but the general idea is that each coefficient is represented using a variable number of bits, so that smaller ones take fewer bits.
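A sketch of the zigzag read-out (the function names are mine; a real encoder would follow this with run-length and entropy coding, such as Huffman or arithmetic coding):

```python
def zigzag_order(n=8):
    # Generate (row, col) coordinates in the zigzag scan order used by
    # JPEG: walk the anti-diagonals, alternating direction each time
    coords = []
    for d in range(2 * n - 1):
        diag = [(y, d - y) for y in range(n) if 0 <= d - y < n]
        coords.extend(diag if d % 2 else diag[::-1])
    return coords

def scan_block(block):
    # Read the quantised coefficients out in zigzag order and drop the
    # trailing run of zeros, which is never encoded at all
    flat = [block[y][x] for y, x in zigzag_order(len(block))]
    while flat and flat[-1] == 0:
        flat.pop()
    return flat
```

On a typical heavily-quantised block this collapses 64 stored values to a short handful, which is where most of the final size saving comes from.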

Motion

The process so far is simplified, but basically true for still images, and for the still component (keyframes or I-frames) of MPEG video.  MPEG video encoding can also involve motion estimation, where a block can be encoded more cheaply by storing only the difference between it and a nearby area of some other frame.  Encoding is still done per block, and the blocks still undergo DCT and quantisation, but the selection of a reference point in another frame can be somewhat complex.
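A brute-force sketch of block matching, using the sum of absolute differences (SAD) as the cost metric.  All the function names here are mine, and real encoders use far cleverer search patterns, sub-pixel precision and rate–distortion trade-offs than this exhaustive window scan:

```python
def sad(a, b):
    # Sum of absolute differences between two equally-sized blocks
    return sum(abs(p - q) for ra, rb in zip(a, b) for p, q in zip(ra, rb))

def get_block(frame, x, y, n=8):
    # Extract the n x n block whose top-left corner is at (x, y)
    return [row[x:x + n] for row in frame[y:y + n]]

def search_motion(ref, cur, bx, by, radius=4, n=8):
    # Exhaustively test every offset within +/-radius pixels of the
    # block's own position in the reference frame, keeping the cheapest
    target = get_block(cur, bx, by, n)
    best, best_cost = (0, 0), sad(get_block(ref, bx, by, n), target)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = bx + dx, by + dy
            if 0 <= x <= len(ref[0]) - n and 0 <= y <= len(ref) - n:
                cost = sad(get_block(ref, x, y, n), target)
                if cost < best_cost:
                    best_cost, best = cost, (dx, dy)
    return best, best_cost
```

The returned offset is the motion vector; the encoder then DCT-codes only the (hopefully tiny) residual left after subtracting the matched reference block.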

Decoding the image

Decoding is the reverse of the above process: the stream is decoded to find the coefficients for each block, and these coefficients are multiplied by the same number they were previously divided by, in order to reverse the quantisation.  The quantisation step lost some precision, and that loss becomes visible at this point.  Then, an inverse DCT is applied to get the pixel values for the block.
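The inverse transform mirrors the forward one: each pixel is rebuilt as a weighted sum of all 64 frequency coefficients.  As before, this is a deliberately naive O(n⁴) version for clarity.

```python
import math

def idct_2d(coeffs):
    # Naive 8x8 inverse DCT with orthonormal scaling: reconstruct each
    # pixel from the full set of frequency coefficients
    n = 8
    def c(k):
        return math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
    out = [[0.0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            out[y][x] = sum(c(u) * c(v) * coeffs[v][u]
                            * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                            * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                            for v in range(n) for u in range(n))
    return out
```

Feeding in a coefficient block whose only non-zero entry is the ‘zero frequency’ value yields a perfectly flat block – exactly the averaging behaviour described earlier.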

Finally, if it is a colour image and the two chroma channels used a lower resolution than the luma channel, these are now up-sampled – by interpolation – to match the luma channel, and then the image can be converted into a suitable colour space for viewing, such as RGB (when displaying on a monitor).
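The crudest possible up-sampler simply replicates each chroma sample into a 2×2 block; real decoders usually interpolate more smoothly, but this shows the shape of the operation:

```python
def upsample_2x(channel):
    # Nearest-neighbour upsampling: copy each sample into a 2x2 block,
    # doubling the resolution horizontally and vertically
    out = []
    for row in channel:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out
```

This is the exact inverse (in size, not in information) of the 4:2:0 subsampling done at encode time – the fine chroma detail thrown away there is gone for good.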
