Image Enhancement Techniques: Spectral Enhancement Techniques (Especially for GATE-Geospatial 2022)


Image enhancement techniques are applied to make satellite imagery more informative and to help achieve the goal of image interpretation. The term enhancement means altering the appearance of an image so that the information it contains is more readily interpreted visually for a particular need. Image enhancement techniques are applied either to single-band images or separately to the individual bands of a multiband image set.

Types of Image Enhancement Techniques

Image enhancement techniques can be categorized into two groups:

  • Spectral Enhancement Techniques- discussed in this topic
  • Multi-Spectral Enhancement Techniques- discussed as a separate topic

Spectral Enhancement Techniques

Density Slicing

Density Slicing is the mapping of a range of contiguous grey levels of a single-band image to a point in the RGB colour cube. The DNs of a given band are “sliced” into distinct classes. For example, for band 4 of an 8-bit image, we might divide the continuous 0 - 255 range into the discrete intervals 0 - 63, 64 - 127, 128 - 191 and 192 - 255. These four classes are then displayed as four different grey levels. This kind of density slicing is often used in displaying temperature maps.
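The operation is simple to express in code. Below is a minimal sketch in Python, assuming NumPy and a hypothetical single-band 8-bit array named band; the slice boundaries and output grey levels are illustrative, not prescribed.

```python
import numpy as np

# Hypothetical 8-bit single-band image (stand-in for real data).
band = np.random.randint(0, 256, size=(100, 100), dtype=np.uint8)

# Slice the continuous 0-255 DN range into four discrete classes:
# 0-63, 64-127, 128-191 and 192-255.
boundaries = [64, 128, 192]
classes = np.digitize(band, boundaries)          # class index 0..3 for each pixel

# Display each class as a distinct grey level (colours could be assigned instead).
grey_levels = np.array([0, 85, 170, 255], dtype=np.uint8)
sliced = grey_levels[classes]
```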

Contrast Stretching

  • A common problem in remote sensing is that the range of reflectance values collected by a sensor may not match the capabilities of the film or colour display monitor. Materials on the Earth's surface reflect and emit different amounts of energy.
  • A sensor might record a tremendous amount of energy from one material in a certain wavelength, while another material is recorded at much less energy in the same wavelength. Image enhancement techniques make an image easier to analyze and interpret. The range of brightness values present on an image is referred to as contrast. Contrast enhancement, or stretching, is a process that makes the image features stand out more clearly by making optimal use of the colours available on the display or output device.
  • Contrast manipulations involve changing the range of values in an image to increase contrast. For example, an image might start with a range of values between 30 and 90. When this is stretched to a range of 0 to 255, the differences between features are accentuated. Unfortunately, different features often reflect similar amounts of energy throughout the electromagnetic spectrum, resulting in a relatively low contrast image. In addition, besides the obvious low-contrast characteristics of biophysical materials, there are cultural factors at work. For example, in rural settlements the use of natural building materials (e.g., wood and soil) is common in construction. This results in remotely sensed imagery with much lower contrast than imagery of urban settlements, where concrete and asphalt give high contrast. Thus, it is important to consider both the biophysical and human components when enhancing an image for maximum contrast.
  • The sensitivity of the sensor is another factor contributing to low-contrast remotely sensed imagery. Most sensors today are equipped with detectors designed to record a relatively wide range of unsaturated scene brightness values (e.g., 0 to 255). When an image becomes saturated, the radiometric sensitivity of the detector is insufficient to record the full range of intensities of reflected or emitted energy emanating from the scene. Naturally occurring materials on the earth have a wide range of spectral properties. Satellite detectors must be sensitive to low-reflectance material, such as dark volcanic basalt, as well as high-reflectance material such as fields of snow. However, most real-world remote sensing applications involve scenes whose brightness values do not utilize the full sensitivity range of the satellite detectors. Such scenes are relatively low in contrast, with brightness values ranging from, say, 0 to 100.
  • The contrast of an image can be increased by utilizing the entire brightness range of a display or output device. Digital methods generally produce a more satisfactory contrast enhancement because of the precision and wide variety of processes that can be applied to the imagery. Contrast stretching can be divided into three categories:

Linear Contrast Stretch

  • Linear contrast enhancement, also referred to as contrast stretching, linearly expands the original digital values of the remotely sensed data into a new distribution. By expanding the original input values of the image, the total range of sensitivity of the display device can be utilized. Linear contrast enhancement also makes subtle variations within the data more obvious. These types of enhancements are best applied to remotely sensed images with Gaussian or near-Gaussian histograms, that is, where all the brightness values fall within a narrow range of the histogram and only one mode is apparent.
  • This technique involves the translation of the image pixel values from the observed range DNmin to DNmax to the full range of the display device (generally 0 - 255, which is the range of values representable in an 8-bit display device). The technique can be applied to a single-band, grey-scale image, where the image data are mapped to the display via all three colour look-up tables (LUTs).
  • A LUT, or ‘Look Up Table,’ holds a set of numbers which are looked up by the software or hardware in use to deliberately change the colours of an image. LUTs can be technical, creative (usually generated within software) or camera-specific. For example, Canon has produced LUTs for its Cinema EOS cameras that convert Canon Log footage to Rec. 709 or Cineon.
  • It is not necessary to stretch between DNmin and DNmax: the inflection points for a linear contrast stretch can instead be placed at the 5th and 95th percentiles, or at ± 2 standard deviations from the mean of the histogram, or chosen to cover the class of land cover of interest (e.g. water at the expense of land, or vice versa). It is also straightforward to use more than two inflection points in a linear stretch, yielding a piecewise linear stretch. A sketch of a percentile-based stretch is given after the figure below.
Linear, Non-Linear, and Standard Deviation Stretches
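The following is a minimal sketch of a percentile-based linear stretch applied through a 256-entry LUT, assuming Python with NumPy; the array band and the 5th/95th percentile inflection points are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np

# Hypothetical low-contrast 8-bit band, values clustered between 30 and 90.
band = np.random.randint(30, 91, size=(512, 512)).astype(np.uint8)

# Choose inflection points, e.g. the 5th and 95th percentiles of the histogram.
dn_min, dn_max = np.percentile(band, (5, 95))

# Build a LUT mapping [dn_min, dn_max] linearly onto the full 0-255 display range
# (assumes dn_max > dn_min, i.e. the band is not constant).
dn = np.arange(256, dtype=np.float64)
lut = np.clip((dn - dn_min) / (dn_max - dn_min) * 255.0, 0, 255).astype(np.uint8)

# Apply the LUT to every pixel.
stretched = lut[band]
```

Pixels below the lower inflection point saturate to 0 and pixels above the upper point saturate to 255, which is what stretching one land-cover class "at the expense of" another means in practice.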

Histogram Equalisation

Histogram equalisation is an operation devoted to distributing the grey levels of an image as uniformly as possible across its pixels. The process assigns a wider spread of output grey levels to the most frequent digital numbers in the input image. Consequently, in the enhanced image, the grey levels that occupy the most pixels in the original image gain the most contrast. In general, a better-distributed histogram is obtained, with better separation between the most frequent DNs of the image.
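A minimal sketch of the operation, assuming Python with NumPy and a hypothetical 8-bit array band:

```python
import numpy as np

# Hypothetical 8-bit band (stand-in for real data; assumed not constant).
band = np.random.randint(0, 256, size=(512, 512)).astype(np.uint8)

# Histogram and cumulative distribution of the DN values.
hist, _ = np.histogram(band, bins=256, range=(0, 256))
cdf = hist.cumsum()

# Map DNs so that frequently occurring grey levels are spread apart,
# giving an approximately uniform output histogram.
cdf_min = cdf[cdf > 0][0]
lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)

equalised = lut[band]
```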

Image Reduction

  • It is often necessary to locate the exact row and column coordinates of a study area within an image during the early stages of a remote sensing project. Many digital image processing systems today are unable to display a full image at the normal commercial pixel scale (> 3000 rows and 3000 columns) . Being unable to view the entire image may pose a problem in locating the exact coordinates of a study area.
  • Under such circumstances, image reduction allows the analyst to view a subset of an image at one time on the screen by reducing the original image dataset down to a smaller dataset. This technique is useful for orientation purposes as well as for delineating the exact row and column coordinates of an area of interest.
  • To reduce a digital image by an integer factor m, every mth row and mth column of the image are systematically selected and displayed. An image containing 5160 rows by 6960 columns could be reduced so that every other row and every other column (i.e., m = 2) were selected for a single band.
  • This reduction would create a sampled image containing only 2580 rows by 3480 columns. This reduced dataset would contain only 25 % of the pixels found in the original scene (Jensen, 1996). The logic of a simple 2x integer reduction is shown in the figure below.
Image Reduction

A hypothetical example of 2x image reduction achieved by sampling every other row and column of the original data. This operation results in a new image consisting of only 25 per cent of the original data.

  • Unfortunately, a simple 2x integer reduction is often still too large to view on most screens. In cases where a 2x reduction is not small enough, the data must be sampled more intensely. An image sampled at a 10x reduction, meaning every tenth row and every tenth column of the image is sampled, will result in an image consisting of 516 rows and 696 columns.
  • Although a resampled image at this scale contains only 1 % of the original data, it is small enough to view the entire scene on the screen. Because a resampled image has obviously lost many of its original pixels, it does not contain adequate data for image processing and interpretation.
  • Resampled images are more commonly used for orientation within a scene and for locating the exact row and column coordinates of a specific study area. These coordinates can then be used to extract a portion of the image for full-resolution analysis; a code sketch of the sampling step follows below.
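In array terms, the sampling is just a strided slice. A minimal sketch, assuming Python with NumPy and a hypothetical single-band array named image:

```python
import numpy as np

# Hypothetical full scene of 5160 rows by 6960 columns (stand-in for real data).
image = np.zeros((5160, 6960), dtype=np.uint8)

m = 2                          # reduction factor
reduced = image[::m, ::m]      # keep every mth row and every mth column

print(reduced.shape)           # (2580, 3480): 25 % of the original pixels
```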

Image Magnification

  • Digital image magnification is often referred to as zooming. The technique is most often employed for two purposes:
    • To improve the scale of the image for enhanced visual interpretation
    • To match the scale of another image
  • Just as row and column deletion is the simplest form of image reduction, row and column replication represents the simplest form of image magnification. To magnify an image by an integer factor m, each pixel in the original image is replaced by an m × m block of pixels, all of which have the same spectral value as the original input pixel (Jensen, 1996).
  • An example of the logic of a 2x magnification is shown in the figure below. This form of magnification doubles the size of each of the original pixels.
Image Magnification

A hypothetical example of 2x image magnification achieved by replicating every row and column of the original image. This operation results in a new image consisting of four times as many pixels as the original data.

  • In many sophisticated digital image processing systems, the analyst can specify a floating-point magnification rate such as 2.75x. This requires that the original remote sensor data be resampled in near real-time using one of the image resampling algorithms (e.g., nearest neighbor, bilinear interpolation, or cubic convolution).
  • This technique is often used when detailed spectral reflectance or emittance characteristics of a relatively small geographic area of interest are needed. Being able to zoom in to the raw remote sensing data at precise floating-point increments can also be helpful during a supervised classification of an image. A minimal code sketch of both integer and floating-point magnification follows.
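The sketch below assumes Python with NumPy, plus SciPy only for the floating-point zoom; the array image and the factors shown are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

# Hypothetical single-band image (stand-in for real data).
image = np.arange(12, dtype=np.uint8).reshape(3, 4)

# Integer magnification: replace each pixel with an m x m block of identical values.
m = 2
magnified = np.repeat(np.repeat(image, m, axis=0), m, axis=1)

# Floating-point magnification (e.g. 2.75x) requires resampling;
# order=0 is nearest neighbour, order=1 bilinear, order=3 cubic.
zoomed = zoom(image, 2.75, order=1)
```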
