The Dutch military has developed a new night-vision system that makes night-time images as clear and colorful as those shot in daylight. Night-vision images normally lack color, since the infra-red radiation emitted by the objects being photographed does not contain enough information to estimate color. Humans can distinguish only a few hundred gray levels (shades, also known as grayscales) at any one time, but can easily separate thousands of colors, so 'coloring' a night-vision image can lead to better visibility and depth perception.

a) Night-Image b) Source-Image c) Colored-Image d) Daytime-Photo

Alexander Toet of TNO Human Factors uses a day-time image (the source) of similar surroundings to color the grayscale night-vision image (the target). The work is published in the January issue of the journal Displays (pages 15-21). For example, if the night-vision image is of a tree and its surroundings, the system requires that a secondary image of a tree be provided as the source. The system analyzes the statistical distribution of gray levels in the target (night-vision) image and of chromaticity (color content) in the source image, and then correlates the two. This allows the system to color each pixel in the target using colors from the source image.
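A minimal sketch of this statistics-matching idea, assuming a simple global approach: remap the target's gray levels to the source's luminance statistics, then look up a color per luminance bin in the source. The bin count and the per-bin color lookup are simplifications of my own, not Toet's published algorithm:

```python
import numpy as np

def colorize_by_example(target_gray, source_rgb, bins=64):
    """Transfer color from a daytime source image to a grayscale target
    by matching luminance statistics (a rough sketch, not the paper's
    exact method)."""
    src = source_rgb.astype(float)
    src_lum = src.mean(axis=2)  # crude luminance: channel average

    # Remap target grayscale so its mean/std match the source luminance
    tgt = target_gray.astype(float)
    tgt = (tgt - tgt.mean()) / (tgt.std() + 1e-9)
    tgt = np.clip(tgt * src_lum.std() + src_lum.mean(), 0, 255)

    # Record the average source color for each luminance bin
    edges = np.linspace(0, 255, bins + 1)
    idx = np.clip(np.digitize(src_lum.ravel(), edges) - 1, 0, bins - 1)
    counts = np.bincount(idx, minlength=bins)
    palette = np.zeros((bins, 3))
    for c in range(3):
        sums = np.bincount(idx, weights=src[..., c].ravel(), minlength=bins)
        palette[:, c] = sums / np.maximum(counts, 1)
    palette[counts == 0] = src.reshape(-1, 3).mean(axis=0)  # fill gaps

    # Look up a color for every target pixel, then rescale each pixel so
    # its luminance matches the remapped gray level
    tidx = np.clip(np.digitize(tgt.ravel(), edges) - 1, 0, bins - 1)
    out = palette[tidx].reshape(*tgt.shape, 3)
    lum = out.mean(axis=2, keepdims=True)
    out = out * (tgt[..., None] / np.maximum(lum, 1e-9))
    return np.clip(out, 0, 255).astype(np.uint8)
```

The output preserves the target's luminance structure while borrowing the source's color distribution, which is what makes the result look "natural" even though the colors are borrowed rather than measured.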
The fact that a secondary image is required is not a big hindrance. Given the capacity of today's hard drives, a light, mobile system can hold tens of thousands of images covering all settings and scenarios. An intelligent pattern-matching and image-segmentation algorithm (with or without human help) can easily match a corresponding source image to each target image. The coloring can then proceed in real time.
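As a crude stand-in for that matching step, one could pick the library image whose gray-level histogram is closest to the target's. This is purely illustrative; the function name and the L1 distance metric are my own choices, not anything from the paper:

```python
import numpy as np

def best_source_index(target_gray, library, bins=32):
    """Return the index of the library image whose gray-level histogram
    is closest (L1 distance) to the target's. A toy proxy for the
    'intelligent pattern matching' step described above."""
    def hist(img):
        h, _ = np.histogram(img, bins=bins, range=(0, 256), density=True)
        return h

    t = hist(target_gray)
    dists = [np.abs(hist(img) - t).sum() for img in library]
    return int(np.argmin(dists))
```

A real system would need segmentation and scene-content matching, not just histograms, but the lookup structure would be similar.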
One issue the paper does not really address is coloring a video stream. There has to be enough correlation between the coloring of contiguous frames in the stream; without it, the user is likely to see a kaleidoscope of colors. One way to achieve this could be to use a source image for the first frame, then use the colored first frame as the source for the second frame, and so on.
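That frame-chaining suggestion can be sketched in a few lines; `colorize` here is any image-to-image coloring function (the name and interface are assumptions for illustration):

```python
def colorize_stream(frames, source_rgb, colorize):
    """Color a video stream frame by frame, as suggested above: the
    day-time source colors frame 0, and each colored frame then serves
    as the source for the next, keeping colors consistent over time."""
    colored = []
    src = source_rgb
    for frame in frames:
        out = colorize(frame, src)
        colored.append(out)
        src = out  # previous colored frame becomes the new source
    return colored
```

One risk of this scheme is drift: small coloring errors would compound from frame to frame, so a practical system might periodically re-anchor to the original day-time source.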
This is not really my line of research, but I would have loved to do this!

5 Comments:
It's not so much an issue of matching an image to what you're seeing; that really wouldn't be practical if you were effecting an insertion into a jungle environment that has probably never been photographed before, and has changed since the last recon.
The idea is to match a colour profile against a region based on geographical location and season using stuff like Global Positioning Systems.
It's a bit of a fudge, but it's not really meant to be an accurate representation of the actual colours; it's more a case of adding a generalized colour profile so that the human eye can gauge depth more accurately than would otherwise be possible with starlight imaging technology.
Think of all of those old black & white films that have been restored and colourized.
Much of that process is automated. When the colours go awry, a human steps in and matches the correct colour profile to the surface: be it a cupboard door, a large chair, a field of grass... or even a bare arse...
The old B&W films had more information than a night-vision image. Even though B&W, the analog tapes would contain more gray levels, and since the pictures recorded *actual* reflected light rather than heat (infra-red), coloring should be easier than, say, coloring a warm tree on a cold night. Also, a warm wind blowing through a cold tree would distort the night-vision coloring; no such problem with B&W movies!
Since the color of a scene changes much less rapidly than its temperature, a night-vision (heat-based) image is harder to color.
I can't really see how you could do this with infra-red imaging.
Starlight imaging, yes but not infra-red imaging, surely?
There are two levels of false colour: first, you're seeing heat signatures, which are unintelligible to all intents and purposes, given that we don't see that way; second, the colours added to denote heat won't ever match the actual objects in view...
True, night-vision images lack the color information. So, one has to use a secondary image to derive some color context. This image can be a daytime shot of the same area, or another image with similar information. To color B&W images, we can use the information inherent to the images (gray levels). This cannot be done for night-vision images.
So, coloring a night-vision video stream is much harder, since the color interpolation might not be uniform across frames. For B&W streams, where each frame carries its own inherent information (which already varies smoothly across frames), this is not a problem.
Never mind the false coloring, as long as it is consistent across frames, one can learn to adapt.
I imagine that's a result of the older, less detailed film stock not containing the same level of depth as more recent black & white films...
Post a Comment