
Color Processing

This page describes the basic principles and algorithms used for color processing.

Sample data

What's color?

Color can be considered the perception, in the human eye, of a spectral property of reflected or emitted light. Our goal is to detect, process and display pictures in a way that reproduces colors causing the same perception as the original scene.

Color is derived from the spectral properties of the light reflected by an object. Color vision is a product of the processing of that light in the human eye and brain. It cannot be established by an instrument; it is ultimately determined by human perception itself.

The physical meaning of color can be specified more precisely: a color is the composition of light transmitted through an exactly defined set of filters, where the filters approximate the color sensitivity of the receptors in the human eye.

Following these ideas, color processing is focused entirely on providing the closest possible approximation of the given color spectral sensitivities. To get the best color reproduction, the known non-linear deformations must be applied in the color transformations.

Astronomical terminology is more vague: the word "color" means a light flux in any filter, not necessarily a filter matching the sensitivity of a human receptor.

Color reproduction

The common method of color reproduction on emitting devices is the RGB color model. The spectral sensitivities of RGB emitters differ from human sensitivities, so the original colors must be transformed to the output color space.

Color approximation

Munipack is designed to handle colors as correctly as possible. The goal is to display images with colors that faithfully reconstruct the original scene; authenticity is limited only by the hardware used.

Light fluxes acquired by an instrument such as an astronomical CCD camera usually differ from those of display devices: display devices use sRGB (PC-type hardware) or Adobe RGB (Apple), as opposed to, say, the Johnson UBVRI color system. Therefore the color data must be transformed between these spaces. Without the transformation, the colors would be strongly deformed.

The primary color space of Munipack is CIE XYZ, which is, for practical purposes, the color space of the human eye.

Munipack can display only color FITSes as specified here. There is no widely accepted definition of a color FITS, so color FITSes can be created only by Munipack utilities. Please be aware that the definition may change at any time in the future.

Color processing

Color processing is based on working in color spaces. Internally, Munipack uses the CIE 1931 XYZ and CIE 1976 L*u*v* color spaces. Input data in another color space are transformed to CIE 1931 XYZ. Display is in an RGB space.

Color processing in Munipack starts with loading a color FITS. The software automatically recognizes the color space by reading the COLORTYP keyword in the FITS header.


When the type differs from XYZ, the data need to be transformed to XYZ. In this case, the first step is prescaling of the values. The step is optional, but usually required for the best results. The main goal of prescaling is to give the same flux for a white object in the different filters. Unfortunately, the fluxes are distorted by different backgrounds, detector sensitivities and exposure times. In light-polluted industrial localities, the main source of pollution is sodium lamps; the background in, say, the blue and red filters is affected differently, and we need the light of the object, not of the background.

Munipack pre-estimates the background level of each frame as the median minus one standard deviation. Fine tuning requires user experience. The weight of each channel is not pre-estimated in any way; a guide can be, for example, the exposure time, but it may also be derived from the telescope aperture, atmospheric conditions, etc. The prescaling can be omitted (level = 0, weight = 1 for all bands).

B'ij = wB (Bij - B0),
V'ij = wV (Vij - V0),
R'ij = wR (Rij - R0).

B0 = V0 = R0 = 2000

B0 = V0 = R0 = 10000

B0 = 3700, V0 = 9300, R0 = 20000
The images have a strong orange background due to sodium lamps; they were taken near the center of the town of Brno. The first image shows all colors as detected. The second one cuts off blue. The last image has the background levels arranged according to the per-frame backgrounds; the structures above the light-polluted sky are clearly visible. This is a common property of all urban observations.
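The prescaling step can be sketched in Python with NumPy. This is a minimal illustration of the formulas above, not Munipack's actual Fortran/C implementation; the function names and the clipping of negative values to zero are assumptions of this sketch:

```python
import numpy as np

def estimate_level(band):
    """Background pre-estimate used as a starting level:
    the median of the frame minus one standard deviation."""
    return np.median(band) - np.std(band)

def prescale(band, level=0.0, weight=1.0):
    """Prescale one band image: B' = w_B * (B - B0).

    Negative results are clipped to zero (an assumption here,
    since XYZ values must be non-negative).
    """
    return np.clip(weight * (band - level), 0.0, None)
```

With level = 0 and weight = 1 the step is a no-op, which matches the "prescaling omitted" case.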

Color transformation

The color transformation follows the prescaling. The file ctable.dat is searched for the identifier from the header. When the color space is found, the matrix is loaded and all data are transformed; the transformation is usually done by matrix multiplication. When the type remains unknown, the behavior is undefined and the color assignment will be random (false colors). Note that the number of input colors can be different from XYZ (three colors).

X = a11 B + a12 V + a13 R,
Y = a21 B + a22 V + a23 R,
Z = a31 B + a32 V + a33 R.
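The transformation above is a per-pixel matrix multiplication, which can be applied to whole frames at once. A sketch in Python with NumPy (illustrative only; the real coefficients are looked up in ctable.dat by the header's identifier, and the matrix used in the test below is made up):

```python
import numpy as np

def to_xyz(bands, matrix):
    """Apply a color-space matrix to a stack of band images.

    bands:  array of shape (n, height, width), e.g. prescaled B, V, R
    matrix: array of shape (3, n), rows giving X, Y, Z coefficients
    returns: array of shape (3, height, width)
    """
    # contract the coefficient axis of the matrix with the band axis
    return np.tensordot(matrix, bands, axes=1)
```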











Night Vision

When the light intensity decreases, the cones become ineffective and the (otherwise saturated) rod cells are activated. The spectral sensitivity of the rods is shifted to short wavelengths with respect to the Y tristimulus. The transition region from daily (photopic, with cones as receptors) to night vision (scotopic, by rods) is mesopic vision, and the break occurs around magnitude 0 (10^-2 to 10^-6 cd/m^2, see reference) for the naked eye.

Munipack simulates scotopic and mesopic vision. The scotopic sensitivity is approximated by the formula:

Is = 0.362 Z + 1.182 Y - 0.805 X.

Generally, photopic, mesopic and scotopic vision probably operate simultaneously. The detailed mechanism is unknown, so the transition between the vision regimes is simulated by a logistic function (an empirical estimate!). The logistic function drives many similar effects in the real world; in particular, phenomena in saturated detectors are frequently described this way.

w = 1/(1 + exp(-x)),
x = (Is - It)/wt.

Then output colors are computed as:

X' = w X + (1-w) Is,
Y' = w Y + (1-w) Is,
Z' = w Z + (1-w) Is.
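Put together, the night-vision simulation can be sketched as follows (Python with NumPy; a minimal illustration of the formulas above, where threshold and width stand for the threshold I_t and the transition width w_t):

```python
import numpy as np

def night_vision(X, Y, Z, threshold, width):
    """Mix photopic (XYZ) and scotopic (Is) vision by a logistic weight."""
    # scotopic luminance approximation
    Is = 0.362 * Z + 1.182 * Y - 0.805 * X
    # logistic transition between the two regimes
    w = 1.0 / (1.0 + np.exp(-(Is - threshold) / width))
    # w -> 1: pure photopic (XYZ kept); w -> 0: pure scotopic (gray Is)
    return (w * X + (1 - w) * Is,
            w * Y + (1 - w) * Is,
            w * Z + (1 - w) * Is)
```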

The break must be set up manually, and then both vision regimes are mixed. The parameters Threshold Is and Mesotopic Is are used. The threshold sets the level corresponding to the zero-magnitude break. The mesotopic parameter sets the width of the transition region. It is an entirely empirical value and depends on the vision and the detector's gain.

The weight determines the type of vision:

A picture in the mesopic regime. The threshold sets the background to scotopic and the foreground to photopic vision. The setting corresponds to the use of a telescope with a one-meter aperture.
A picture in the scotopic regime. The red hydrogen shock waves are invisible. It corresponds to the view through a small telescope.

Color tuning

The XYZ color space corresponds to the eye's perception of colored light by the cones. XYZ has no upper limit; the values must be zero or positive. Unfortunately, human perception of light intensity and color is not linear. Therefore, to get tunable parameters, the data are transformed to the CIE Luv color space. This color space is used to tune the saturation and hue parameters and to scale the luminosity.


The saturation parameter enables decreasing or increasing the color saturation.

The saturation is practically a multiplier of the radius of the color in the u,v coordinates.

saturation = 1.5

saturation = 1

saturation = 0.5

saturation = 0


Hue rotates a pixel in the color space; it is probably of little practical use.

The hue is an angle added to the angle of the current color in the u,v coordinates.
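Both operations act on the chromaticity relative to the white point: saturation scales the distance from it, hue rotates around it. A sketch for a single pixel (Python; the D65 white point u' ≈ 0.1978, v' ≈ 0.4683 is an assumption of this sketch, and the exact formulas may differ from Munipack's):

```python
import math

def tune_color(u, v, saturation=1.0, hue=0.0,
               u_white=0.1978, v_white=0.4683):
    """Tune one (u', v') chromaticity in polar coordinates
    around the white point."""
    du, dv = u - u_white, v - v_white
    radius = math.hypot(du, dv) * saturation   # saturation scales radius
    angle = math.atan2(dv, du) + hue           # hue rotates the angle
    return (u_white + radius * math.cos(angle),
            v_white + radius * math.sin(angle))
```

With saturation = 0, every pixel collapses to the white point (a gray image); saturation = 1 and hue = 0 leave the color unchanged.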

White point

The white point parameters enable the user to fine-tune the white in an image. Note that the white is also given by the color temperature. Ideal objects for white tuning are cumulus clouds: they are easily available and their white is excellent (tested against a white etalon).

u = 0.5, v = 0.5

u = 0.1, v = 0.8

u = 0.8, v = 0.6

u = 0.8, v = 0.1

Output Color space

Finally, Luv is converted back to XYZ and then XYZ to an RGB space. There are two possibilities. The sRGB color space is widely used on PC-like hardware; if you are running Linux or Windows, your monitor, LCD or beamer works in sRGB. Adobe RGB is very similar (slightly different parameters are used) and is used on Apple hardware. Note that Adobe RGB has a wider gamut (it displays more colors). Your RGB color space must correspond to your hardware, otherwise the output colors will certainly be deformed.

The tuning of the color space is available in Preferences. The color temperature must exactly correspond to the value set on your display.



No other color spaces are available, but new ones might be easily implemented when needed.
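For reference, the standard XYZ → sRGB conversion is a linear matrix step followed by the sRGB transfer function (gamma). A sketch for one pixel (the matrix and gamma coefficients are those of the sRGB standard for a D65 white; the clipping of out-of-gamut values to [0, 1] is an assumption of this sketch):

```python
def xyz_to_srgb(X, Y, Z):
    """Convert XYZ (D65, normalized so that white has Y = 1) to sRGB."""
    # linear sRGB (standard matrix coefficients)
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b = 0.0557 * X - 0.2040 * Y + 1.0570 * Z

    def gamma(c):
        # clip out-of-gamut values, then apply the sRGB transfer function
        c = min(max(c, 0.0), 1.0)
        return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

    return gamma(r), gamma(g), gamma(b)
```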

Why so complicated?

The standard image formats store data with a very limited precision of 256 levels per color, i.e. 256·256·256 ≈ 17 million colors. But even a low-cost CCD chip has a much wider dynamic range, and the human eye spans at least ten orders of magnitude. The main problem of displaying astronomical images is the correct display of this wide range of data on a display with 256 levels. The problem is widely known today from high dynamic range (HDR) photography. Moreover, the data have uses beyond displaying; photometry, for example, requires high precision.

Therefore, the best way to store the data is to keep the raw data together with an exact definition of the photometric instrument (filters, etc.) and to use these data in various ways. One of those ways can be color imaging. A side effect of this approach is the wide range of image-tuning possibilities.

The algorithm

This section describes the algorithm used to render color FITS images.

  1. Detect the input color space from the COLORTYP keyword in the FITS header.
  2. Processing:
    1. For a general color space, scale the values and convert to XYZ.
    2. Convert XYZ to Luv.
    3. Scale luminosity, tune saturation and hue.
    4. Convert back from Luv to XYZ.
    5. Optionally add night vision.
    6. Convert to a display RGB color space.
  3. Display the image.

The rendering code is implemented in C (fitspng.c), Fortran (coloring/ctrafo.f90) and C++ (xmunipack/fitsimage.cpp).

Coloring tool

The coloring tool is invoked from the View menu:

File → Create → Color image.

Command line usage

Complete color management can be driven from the command line. Two internal Munipack utilities, coloring and ctrafo, provide it. The export from a color FITS to the conventional PNG picture format is provided by the fitspng utility.

Color composition

Composing images into a color image is provided by the coloring internal utility. It is invoked via the munipack command with the syntax:

$ munipack coloring -o color.fits -c 'Color space' \
                     blue.fits green.fits red.fits

Prepare the pictures in a color space and pass them in order of increasing wavelength to create a color FITS. The test data package contains pictures of the Dumbbell nebula in Johnson BVR. The color image can be created as:

$ munipack coloring -o m27.fits -c 'Johnson BVR' \
                       m27_B.fits m27_V.fits m27_R.fits

Color space transformations

The transformation of images between color spaces is provided by the internal utility ctrafo. The schematic usage is:

$ munipack ctrafo -o output.fits -c 'final color space' \
                     input.fits b0,bx v0,vx r0,rx

where b0,bx and the others are the mean levels (becoming black) and the multipliers for the given bands.

The known color spaces can be listed as:

$ munipack ctrafo --list

An example can be:

$ munipack ctrafo -o m27_xyz.fits -c 'XYZ' \
                     m27.fits 3700,0.66 9300,1 20300,1

The corresponding values for the levels can be obtained from xmunipack. Also see the recommendations in the Prescaling section above. The output image can be converted to PNG by fitspng.

Please read the man page

$ man munipack

for a more detailed description and the parameters of the color command-line utilities.


The images of M27 were taken by J. Połedniková, O. Urban and M. Kocka at the MonteBoo observatory with a 0.62 m reflector.