In a previous posting I commented on how some people claim to sell a 4CIF solution that uses the same amount of storage as someone else's 2CIF. That claim only holds if a genuinely differentiated compression technique lets them match the bitrate; if the technique is basically the same, say conventional MPEG-4 Part 2, then it's usually a price war forcing the low bidders to cram video into an abnormally thin pipe, ending up with an awful picture caused by over-compression.
But there's something else I'd like to say. It's not life-threatening, but it sadly reflects a lack of understanding in our industry.
There is a way to improve your video (assuming you let the bitrate rise with it, of course): move from CIF to 2CIF or even 4CIF. But what is CIF? CIF (Common Intermediate Format) is the name given to a specific count of horizontal and vertical pixels (picture elements) in an image. For the purposes of this posting I'll stick with NTSC, which has 480 visible lines from top to bottom.
A CIF image is 352 pixels across by 240 down. 2CIF doubles the information going across, i.e. 704 x 240, which is useful because the human eye is more interested in left-right activity than up-down - so the more information the better. 4CIF is as good as NTSC gets: a full 704 x 480.
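The arithmetic behind those names is worth seeing once. A quick sketch (purely illustrative) of the pixel counts:

```python
# Pixel dimensions of the NTSC CIF family and their total pixel counts
sizes = {
    "CIF": (352, 240),
    "2CIF": (704, 240),   # double the horizontal detail
    "4CIF": (704, 480),   # the full NTSC frame
}

for name, (w, h) in sizes.items():
    print(f"{name}: {w}x{h} = {w * h:,} pixels")

# 4CIF carries exactly 4x the pixels of CIF - hence the name
```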
What we have described here are a number of different image sizes, exactly like the computer monitor standards of old: IBM's 1987 VGA (640x480), SVGA (800x600), right up to QXGA (2048x1536). Notice I haven't used the word resolution yet. But isn't VGA a resolution? No, it's an image size.
We can better define resolution as pixels per inch (ppi), just as printer resolutions are often measured in dots per inch (dpi). And it is this tie-in to distances in the real world - the monitor - that is of fundamental importance.
If we had a monitor that measures 704x480 and put it into quad mode, a CIF image in one quarter of the screen would look identical to a 4CIF image right next to it in another corner. The only difference is that the 4CIF image consumes up to 4 times the bandwidth to carry detail we cannot see until we digitally zoom in (typically on recorded video). Now imagine standing in front of a PC video management system showing 16 cameras in a 4x4 array. If the monitor is a modest 1024x768, then even if all of the screen were used to show video (which is not the case), each image gets only 256x192 pixels - which means you will not see any difference between CIF and 4CIF until you make one camera window much larger.
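That 4x4 arithmetic is easy to check. A minimal sketch, using the 1024x768 monitor from the example above:

```python
# How many pixels each camera window gets in a 4x4 grid on a 1024x768 monitor
monitor_w, monitor_h = 1024, 768
grid = 4  # 4x4 array of camera windows

win_w, win_h = monitor_w // grid, monitor_h // grid
print(f"Each window: {win_w}x{win_h}")

# 256x192 is smaller than even CIF (352x240), so a 4CIF stream (704x480)
# must be downscaled to fit - on screen it looks no better than CIF.
```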
So, if you have a fixed-size camera window, then as you increase your image size (CIF, 2CIF etc.) your resolution increases (there are more pixels per inch on the screen) and clarity increases, so there is clearly a close relationship. However, if you increase your image size (CIF, 2CIF etc.) but also increase the size of the camera window, your resolution will not change and the clarity stays exactly the same.
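That relationship can be expressed directly: on-screen resolution is pixels divided by the physical width of the window. A small sketch, where the 4-inch window width is an assumed example figure:

```python
def ppi(pixels_across: int, window_inches: float) -> float:
    """On-screen resolution: pixels per inch of physical window width."""
    return pixels_across / window_inches

# Fixed 4-inch-wide window: doubling the image size doubles the resolution
print(ppi(352, 4.0), ppi(704, 4.0))   # clarity improves

# Double the image size AND double the window: resolution is unchanged
print(ppi(352, 4.0), ppi(704, 8.0))   # clarity stays exactly the same
```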
Resolution is influenced by image size, but not only by image size. They are related but not the same thing. It is the same misunderstanding that leads people to scan 5x7 photos at extremely high resolutions (say 8MB per photo) and then be confused when the result looks identical on a computer monitor to a 100kB version of the same photo. That is because computer monitors are generally limited to resolutions of about 72-96 ppi, so anything higher is simply not visible. Another good example is home digital cameras, which are now in the 8-10MP range; yet, unless you zoom in or print at poster size, the image looks identical on a PC monitor to one from a humble 3MP camera. All you're doing is taking up more hard drive.
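The scanning example follows the same arithmetic. A hedged sketch, where the 600 dpi scan setting and 96 ppi monitor are assumed figures for illustration:

```python
# A 7x5-inch photo: pixels captured by the scanner vs. pixels the monitor can show
scan_dpi = 600        # assumed scanner setting
monitor_ppi = 96      # typical desktop monitor density
width_in, height_in = 7, 5

scanned = (width_in * scan_dpi, height_in * scan_dpi)
displayed = (width_in * monitor_ppi, height_in * monitor_ppi)
print(f"Scanned: {scanned[0]}x{scanned[1]}, shown at actual size: {displayed[0]}x{displayed[1]}")

# Every pixel beyond what the monitor can render is invisible until you zoom in.
```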
There you go - 4CIF is an image size, not a resolution.