The Dutch military has developed a new night-vision system that makes night-time images as clear and colorful as those shot in daylight. Normally, night-vision images lack color, since the infrared light emitted by the objects being photographed does not contain enough information to estimate their color. Because humans can distinguish only a few hundred gray levels (shades, also known as grayscales) at any one time, but can easily separate thousands of colors, 'coloring' a night-vision image can lead to better visibility and depth perception.
[Figure: a) Night-Image b) Source-Image c) Colored-Image d) Daytime-Photo]

Alexander Toet of TNO Human Factors uses a daytime image (source) of similar surroundings to color the grayscale night-vision image (target). The work is published in the January issue of the journal Displays (pages 15-21). For example, if the night-vision image is of a tree and its surroundings, the system requires that a secondary image of a tree be provided as the source image. The system analyzes the statistical distribution of grayscales in the target (night-vision) image and of chromaticity (amount of color) in the source image, and then correlates the two. This allows the system to color each pixel in the target using colors from the source image.
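This style of statistical color transfer can be sketched in a few lines of numpy. The code below is an illustrative assumption, not Toet's published algorithm: it matches the target's gray-level statistics (mean and standard deviation) to the source luminance, then borrows chromaticity from source pixels of similar luminance. The `colorize` function and the Lab-like (L, a, b) representation of the source are hypothetical choices for the sketch.

```python
import numpy as np

def colorize(target_gray, source_lab):
    """Colorize a grayscale target using the statistics of a color
    source image given in a Lab-like space (L, a, b channels).

    Illustrative sketch only: the target's gray-level distribution is
    matched to the source luminance, and chromaticity is taken from
    source pixels with the same (matched) luminance."""
    L_src = source_lab[..., 0]

    # 1. Match the target's gray-level statistics to the source luminance.
    t = (target_gray - target_gray.mean()) / (target_gray.std() + 1e-8)
    L_new = t * L_src.std() + L_src.mean()

    # 2. Build a luminance -> mean chromaticity lookup table from the source.
    bins = np.clip(L_src.astype(int), 0, 255)
    counts = np.bincount(bins.ravel(), minlength=256)
    lut = np.zeros((256, 2))
    for ch in (1, 2):
        sums = np.bincount(bins.ravel(),
                           weights=source_lab[..., ch].ravel(),
                           minlength=256)
        lut[:, ch - 1] = sums / np.maximum(counts, 1)

    # 3. Give each target pixel the average chromaticity of source
    #    pixels at its matched luminance level.
    idx = np.clip(L_new.astype(int), 0, 255)
    ab = lut[idx]
    return np.dstack([L_new, ab[..., 0], ab[..., 1]])
```

Because step 1 only rescales the distribution, the colored output inherits the source's overall brightness statistics, which is what makes the result look like a plausible daytime scene.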
The fact that a secondary image is required is not a big hindrance. Given the capacity of today's hard drives, a light, mobile system can store tens of thousands of images covering all settings and scenarios. An intelligent pattern-matching and image-segmentation algorithm (with or without human help) can easily select a corresponding source image for each target image. The coloring can then proceed in real time.
One issue not really addressed in the paper is the coloring of a video stream. There has to be enough correlation between the coloring of contiguous frames in the stream; without it, the user is likely to see a kaleidoscope of shifting colors. One way to achieve this could be to use a source image for the first frame, then use the colored first frame as the source for the second frame, and so on.
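That chaining idea can be sketched as a short loop. Everything here is a hypothetical simplification for illustration: `transfer_stats` stands in for a single-image coloring step (it matches gray levels to the reference luminance and reuses the reference chromaticity wholesale, assuming equal-sized frames), and `colorize_stream` threads each colored frame in as the reference for the next.

```python
import numpy as np

def transfer_stats(gray, ref_lab):
    """Stand-in for one coloring step (hypothetical simplification):
    match the frame's gray levels to the reference luminance and
    carry over the reference chromaticity unchanged."""
    L_ref = ref_lab[..., 0]
    t = (gray - gray.mean()) / (gray.std() + 1e-8)
    L = t * L_ref.std() + L_ref.mean()
    return np.dstack([L, ref_lab[..., 1], ref_lab[..., 2]])

def colorize_stream(frames, day_source_lab):
    """Color a stream of grayscale frames: the daytime source colors
    frame 0, then each colored frame becomes the reference for the
    next, keeping the color statistics stable across frames."""
    ref = day_source_lab
    for gray in frames:
        ref = transfer_stats(gray, ref)
        yield ref
```

Because each frame inherits its reference from the previous one, the color statistics drift only as fast as the scene itself changes, which is exactly the temporal consistency the kaleidoscope problem calls for.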
This is not really my line of research, but I would have loved to do this!