HYPERSPECTRAL REMOTE SENSING

Rimjhim Singh
14 min read · Jan 22, 2023


INTRODUCTION

Remote sensing has been defined as “the field of research concerned with obtaining information about an item without physically coming into contact with it.” Remote sensing sensors, in this sense, gather measurements from which conclusions can be drawn. A weather satellite, for example, captures remote sensing images from which, with appropriate algorithmic processing, atmospheric characteristics can be determined and then used to inform decisions. Often, remote sensing devices do not directly measure the desired information; instead, they record data from which that information can be retrieved or inferred.

Remote sensing, in a more technical sense, refers to the collection of technological instruments used to record the electromagnetic radiation generated and/or reflected by the observed objects.

Hyperspectral remote sensing is a subset of remote sensing distinguished by the sensors used to collect data. In the mid-1980s, two previously distinct technology domains converged: spectroscopy and remote sensing. As a result, “Hyperspectral Remote Sensing,” or “Imaging Spectroscopy,” emerged. The resulting instrumentation is known as hyperspectral sensors or imaging spectrometers. Before delving further into the technical characteristics of the sensors, it is helpful to first introduce the broader field of electro-optical and infrared remote sensing, along with the corresponding terminology.

Figure 1: Earth’s Atmospheric Opacity

The electromagnetic spectral region detected by hyperspectral sensors is a tiny fraction of the whole spectrum, typically extending from 0.4 to 2.5 µm. These sensors are sensitive in this region of the spectrum because of our planet’s atmospheric conditions (Figure 1). The atmosphere transmits some wavelength ranges and absorbs others. The transmitted wavelengths reach the Earth’s surface and are reflected by it, and this reflected radiation is what the hyperspectral sensors record.

Different materials and objects reflect and emit electromagnetic radiation in varying amounts. Light interacts with each material or object in a unique way, and even small differences are measured and recorded by hyperspectral imaging. The result of this light-to-object interaction is known as the spectral signature, and it is regarded as each object’s spectral “fingerprint.” The majority of hyperspectral remote sensing methodologies and approaches are based on the precise spectral fingerprints recorded by imaging spectrometers.

Figure 2: Hyperspectral cube and pixel Spectral Signature

The data recorded by hyperspectral sensors have a 3-dimensional structure, also called a data cube or simply a cube. The first two dimensions are spatial (Figure 2), while the third is the spectral dimension. Each spatial pixel is a vector (pixel column or pixel vector) containing the spectral information, i.e. values of reflected radiance or reflectance (Figure 2). If we select a pixel vector and plot its values as a function of the corresponding wavelengths, the result is the average spectral signature of all materials and objects covering the ground area of that pixel.
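
To make the cube structure concrete, the following minimal sketch builds a synthetic reflectance cube in NumPy, pulls out one pixel vector and plots it against an assumed 0.4–2.5 µm wavelength grid. The cube dimensions, pixel location, and wavelength grid are illustrative placeholders, not a real dataset.

```python
# Minimal sketch: extracting and plotting the spectral signature of one pixel
# from a hyperspectral cube. All values here are synthetic placeholders.
import numpy as np
import matplotlib.pyplot as plt

rows, cols, bands = 100, 100, 224           # spatial x spatial x spectral
cube = np.random.rand(rows, cols, bands)    # stand-in for real reflectance data
wavelengths = np.linspace(0.4, 2.5, bands)  # assumed wavelength grid, in µm

# A pixel vector is the full spectrum recorded at one spatial location.
pixel_spectrum = cube[50, 50, :]

plt.plot(wavelengths, pixel_spectrum)
plt.xlabel("Wavelength (µm)")
plt.ylabel("Reflectance")
plt.title("Spectral signature of pixel (50, 50)")
plt.show()
```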

Hyperspectral remote sensing instruments often have numerous contiguous bands in every section of the spectrum in which they operate. For example, the Digital Airborne Imaging Spectrometer is hyperspectral, with 63 bands: 27 in the visible and near infrared (0.4–1.0 microns), two in the short wave infrared (1.0–1.6 microns), 28 in the short wave infrared (2.0–2.5 microns), and six in the thermal infrared. Because these instruments can detect reflectance in numerous consecutive bands throughout a given section of the spectrum, they can generate a spectral curve that can be compared with reference spectra for any number of minerals, allowing the mineral composition of a specific piece of ground to be determined.
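
One common way to compare a measured spectral curve against reference spectra is the spectral angle, which treats each spectrum as a vector and measures the angle between them. The sketch below illustrates the idea with synthetic spectra; the library entries and band count are assumptions, not an actual spectral library.

```python
# Sketch of spectral-angle matching: comparing a pixel spectrum against
# reference (library) spectra. All spectra here are synthetic placeholders.
import numpy as np

def spectral_angle(a, b):
    """Angle in radians between two spectra; smaller means more similar."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

pixel = np.random.rand(224)                   # measured spectrum (placeholder)
library = {
    "mineral_A": np.random.rand(224),         # reference spectra (placeholders)
    "mineral_B": np.random.rand(224),
}

best_match = min(library, key=lambda name: spectral_angle(pixel, library[name]))
print("Closest library match:", best_match)
```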

Hyperspectral remote sensing divides the visible and infrared bandwidth into hundreds of narrow spectral portions, allowing an extremely exact comparison of ground features, such as colour, against reference standards. The technique is so sensitive that it can detect camouflaged objects, and it has been used in forestry to measure biomass and damage caused by plant disease. Hyperspectral remote sensing combines imaging and spectroscopy in a single system, which typically produces large data sets and requires new processing methods. Hyperspectral data sets are generally composed of about 100 to 200 spectral bands of relatively narrow bandwidths (5–10 nm), whereas multispectral data sets are usually composed of about 5 to 10 bands of relatively large bandwidths.

HYPERSPECTRAL DATA

Because of the large number of wavebands collected, the data produced by imaging spectrometers differ from those of multispectral sensors. The data collected for a specific geographical area imaged can be seen as a cube, with two dimensions representing spatial position and one representing wavelength. Although data volume does not pose any significant processing issues with today’s computer systems, it is nevertheless useful to look at the relative magnitudes of the data, for example for TM and AVIRIS. The number of wavebands (7 vs. 224) and the radiometric quantization (8 vs. 10 bits per pixel per band) are clearly the most significant differences between the two. Ignoring differences in spatial resolution, the relative data volumes per pixel are 56:2240. As a result, AVIRIS records 40 times as many bits per pixel as TM.
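
The 56:2240 ratio follows directly from the band counts and quantization quoted above; the short snippet below simply reproduces that arithmetic.

```python
# Per-pixel data volumes implied by the figures quoted above.
tm_bits = 7 * 8         # 7 bands  x 8 bits per pixel per band  = 56
aviris_bits = 224 * 10  # 224 bands x 10 bits per pixel per band = 2240
print(tm_bits, aviris_bits, aviris_bits / tm_bits)  # 56 2240 40.0
```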

With 40 times as much data per pixel, does AVIRIS deliver 40 times as much information per pixel? Generally, of course, it does not: much of the additional data does not add to the inherent information content for a particular application, even though it often helps in discovering that information. In other words, the data contain redundancies. In remote sensing, data redundancy takes two forms: spatial and spectral. Exploiting spatial redundancy is the basis of spatial-context methods. Spectral redundancy means that the information content of one band can be fully or partly predicted from the other bands in the data.
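
Spectral redundancy is easy to see numerically: adjacent bands of a smooth reflectance spectrum are strongly correlated, so one band is largely predictable from its neighbours. The sketch below fabricates a cube of smoothly varying synthetic spectra purely to illustrate that correlation; it is not derived from any real sensor.

```python
# Illustration of spectral redundancy: adjacent bands are highly correlated.
# The "cube" is synthetic, built from a common underlying signal plus noise.
import numpy as np

bands, pixels = 224, 10000
base = np.random.rand(pixels)                     # shared underlying brightness
cube = np.stack(
    [base * (1 + 0.01 * b) + 0.01 * np.random.rand(pixels) for b in range(bands)],
    axis=1,
)                                                 # shape (pixels, bands)

corr = np.corrcoef(cube.T)                        # bands x bands correlation matrix
print("Correlation of band 100 with band 101:", corr[100, 101])
```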

Figure 3: Vegetation Spectral Reflectance extracted from AVIRIS data

Hyperspectral data (or spectra) can be thought of as points in an n-dimensional scatterplot, where each pixel corresponds to a point defined by its spectral reflectance values. The distribution of the hyperspectral data in n-space can be used to estimate the number of spectral endmembers and their pure spectral signatures, and to help understand the spectral characteristics of the materials that make up a pixel’s signature.
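
Under this geometric view, pure endmembers tend to sit at the extremes of the point cloud. The sketch below is a deliberately crude illustration of that idea: it projects synthetic pixel spectra onto random directions and counts how often each pixel lands at an extreme (the intuition behind pixel-purity-style searches). It is an assumption-laden toy, not a production endmember extraction routine.

```python
# Crude endmember-candidate search: pixels that repeatedly fall at the extremes
# of random projections of the n-dimensional scatter are likely "pure" pixels.
# All data are synthetic placeholders.
import numpy as np

pixels = np.random.rand(5000, 224)        # (n_pixels, n_bands)
counts = np.zeros(len(pixels), dtype=int)

rng = np.random.default_rng(0)
for _ in range(200):                      # 200 random projection directions
    direction = rng.normal(size=pixels.shape[1])
    projection = pixels @ direction
    counts[projection.argmax()] += 1      # extreme pixels score a hit
    counts[projection.argmin()] += 1

print("Indices of the most 'extreme' pixels:", np.argsort(counts)[-5:])
```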

CLASSIFICATION METHODS

Hyperspectral image classification methods are divided into supervised, unsupervised, and semisupervised classification, according to whether the class information of training samples is used in the classification process.

Supervised Classification

A common hyperspectral image classification approach is the supervised classification method. The basic process is to first determine the discriminant criteria based on the known sample categories and prior knowledge, and then calculate the discriminant function. Common supervised classification methods include support vector machines, artificial neural networks, decision trees, and maximum likelihood classification.
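
As a concrete example, the sketch below trains one of the methods listed above, a support vector machine, on synthetic pixel spectra with made-up class labels; the data, class count, and kernel choice are assumptions for illustration only.

```python
# Supervised classification sketch: an SVM trained on labelled pixel spectra.
# Spectra and labels are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X = np.random.rand(1000, 224)              # 1000 pixel spectra, 224 bands
y = np.random.randint(0, 5, size=1000)     # 5 hypothetical land-cover classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)
classifier = SVC(kernel="rbf").fit(X_train, y_train)
print("Overall accuracy:", accuracy_score(y_test, classifier.predict(X_test)))
```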

Unsupervised Classification

Unsupervised classification refers to classification based on the spectral similarity of hyperspectral data, i.e. clustering without prior knowledge. Because no prior information is employed, unsupervised classification can only assume initial parameters, generate clusters through preclassification processing, and then iterate until the relevant parameters fall within the permitted range.
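
A minimal sketch of this idea follows, clustering synthetic pixel spectra with k-means; the number of clusters is exactly the kind of assumed initial parameter the paragraph describes, and the resulting clusters still have to be matched to real classes afterwards.

```python
# Unsupervised classification sketch: k-means clustering of pixel spectra.
# The spectra and the chosen number of clusters are placeholders.
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(5000, 224)                              # pixel spectra
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X)
print("Pixels per cluster:", np.bincount(labels))
```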

Semisupervised Classification

The supervised method’s key disadvantage is that the classification model and its accuracy depend heavily on the quantity of labelled training samples, and generating a large number of hyperspectral image class labels is a time- and cost-intensive process. Although unsupervised algorithms do not require labelled data, the link between clustering categories and actual categories is unknown due to the lack of prior knowledge. Semisupervised classification trains the classifier using both labelled and unlabelled data, compensating for the shortcomings of both unsupervised and supervised learning. This approach relies on the labelled and unlabelled data being of the same type in the feature space. Because a large number of unlabelled examples can better describe the overall properties of the data, a classifier trained with both kinds of samples generalises better. This approach is commonly employed in hyperspectral image classification. Typical semisupervised classification methods include generative model algorithms, semisupervised support vector machines, graph-based semisupervised algorithms, self-training, co-training, and tri-training.
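
The sketch below illustrates one of these families, self-training, using scikit-learn’s SelfTrainingClassifier on synthetic spectra in which only a small fraction of pixels keep their labels (unlabelled pixels are marked with -1). The data, label fraction, and base classifier are assumptions made purely for illustration.

```python
# Semisupervised sketch: self-training with a small labelled set and a large
# unlabelled set (labels marked -1). All data are synthetic placeholders.
import numpy as np
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X = np.random.rand(2000, 224)
y = np.random.randint(0, 4, size=2000)
y[200:] = -1                        # only the first 200 pixels keep their labels

base_classifier = SVC(kernel="rbf", probability=True)
model = SelfTrainingClassifier(base_classifier).fit(X, y)
print("Predicted classes for five unlabelled pixels:", model.predict(X[200:205]))
```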

STUDIES COMPARING HYPERSPECTRAL AND MULTISPECTRAL IMAGING

Multiple studies have compared the performance of multispectral and hyperspectral data for estimating different vegetation properties, and several of them have indicated that hyperspectral data performed better. For instance, consider the following studies, which found hyperspectral imaging to be superior.

· Lee et al. [3] compared the performance of hyperspectral AVIRIS data with that of multispectral ETM+ and MODIS data for estimating leaf area index (LAI), and found that the hyperspectral AVIRIS data performed better.

· Sluiter and Pebesma [4] used hyperspectral HyMap data and multispectral Landsat ETM+ and ASTER data for vegetation type classification and concluded that the hyperspectral HyMap data provided the best classification results.

· Marshall and Thenkabail [5] compared hyperspectral EO-1 Hyperion data with multispectral IKONOS, GeoEye-1, Landsat ETM+, MODIS, and WorldView data for crop biomass estimation, and suggested that the hyperspectral data-derived products achieved higher accuracy than that of multispectral data.

· Sun et al. [6] investigated performance of ground-based multispectral and hyperspectral LiDAR for estimating nitrogen concentration in rice leaves and found that the hyperspectral image had obvious advantages.

It should be highlighted, however, that these findings do not imply that hyperspectral imagery always outperforms multispectral imagery when mapping vegetation attributes. A more detailed comparison of these two forms of imagery is required to assess their efficiency for identifying vegetation characteristics.

To ensure a fair comparison, the images must be collected on the same day, or within a few days of each other, to minimise the impact of vegetation phenological variations and meteorological conditions. Because different sensors, particularly spaceborne sensors, have distinct imaging cycles (i.e., revisit dates), obtaining images of the same or even nearly the same date from separate sensors can be difficult. As a result, researchers have tried to simulate images in order to make comparison easier, for example by simulating multispectral images from a hyperspectral image. Using simulated images can also avoid potential influences of various environmental or instrumental factors (e.g., atmospheric conditions, vegetation changes, and the sensor’s signal-to-noise ratio) and thus achieve a fair comparison of spectral resolutions.
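
A simple way to simulate a multispectral image from a hyperspectral cube is to average the narrow bands falling inside each broad band’s wavelength range (real studies usually weight by the target sensor’s spectral response functions). The band ranges, cube size, and data below are illustrative assumptions, not an actual sensor specification.

```python
# Sketch of simulating broad multispectral bands from a hyperspectral cube by
# averaging the narrow bands inside each broad band's wavelength range.
import numpy as np

rows, cols, n_bands = 100, 100, 224
hyper = np.random.rand(rows, cols, n_bands)       # placeholder reflectance cube
wavelengths = np.linspace(0.4, 2.5, n_bands)      # assumed band centres, in µm

broad_bands = {                                   # illustrative band ranges (µm)
    "blue": (0.45, 0.52), "green": (0.52, 0.60),
    "red": (0.63, 0.69), "nir": (0.76, 0.90),
}

multi = np.stack(
    [hyper[:, :, (wavelengths >= lo) & (wavelengths <= hi)].mean(axis=2)
     for lo, hi in broad_bands.values()],
    axis=2,
)
print("Simulated multispectral image shape:", multi.shape)  # (100, 100, 4)
```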

ADVANTAGES

The primary advantage of hyperspectral imaging is that, because an entire spectrum is acquired at each point, the operator needs no prior knowledge of the sample, and postprocessing allows all available information in the dataset to be mined. Hyperspectral imaging can also take advantage of the spatial relationships among the different spectra in a neighbourhood, allowing more elaborate spectral-spatial models for a more accurate segmentation and classification of the image.

Hyperspectral imaging offers several benefits over multispectral imaging, including the following:

1. Hyperspectral remote sensing data have high spectral resolution.

2. Hyperspectral data are frequently collected in a distinct spectral range.

3. The bands of hyperspectral data are contiguous and often overlapping, making it possible to capture all the necessary spectral information.

4. The contiguous spectrum obtained allows atmospheric absorption features to be recognized and removed from the radiance signal, which is not possible with multispectral sensors.

5. The signal-to-noise ratio of the data can be enhanced by comparing pixel spectra, which is not possible with multispectral data because of its small number of non-contiguous bands.

6. The problem of mixed spectra can be addressed by directly deriving the relative abundance of each material within a pixel (see the unmixing sketch after this list).

7. The objects or classes in a hyperspectral image can be derived from various spaces, such as the spectral space, image space, and feature space.
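
Point 6 refers to spectral unmixing. A minimal sketch of the linear mixing model is given below: a mixed pixel is modelled as a weighted sum of known endmember spectra, and the weights (abundances) are recovered with non-negative least squares. The endmember spectra and abundances here are synthetic placeholders.

```python
# Linear spectral-unmixing sketch for point 6: estimating relative abundances
# of known endmembers within a mixed pixel via non-negative least squares.
import numpy as np
from scipy.optimize import nnls

n_bands = 224
endmembers = np.random.rand(n_bands, 3)     # columns: e.g. soil, vegetation, water
true_abundances = np.array([0.5, 0.3, 0.2])
mixed_pixel = endmembers @ true_abundances  # linear mixing model

estimated, _ = nnls(endmembers, mixed_pixel)
estimated /= estimated.sum()                # normalise abundances to sum to one
print("Estimated abundances:", estimated)   # close to [0.5, 0.3, 0.2]
```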

Figure 4: Comparison of the image stacks in multispectral imaging, in which images are taken in several different spectral bands, and hyperspectral imaging, in which images are taken in many different spectral bands.

LIMITATIONS

Computer classification of remote sensing images is the process of identifying and classifying information about the Earth’s surface and environment in remote sensing images, in order to extract the required feature information and relate it to the corresponding image information. It is, in effect, the application of automated pattern recognition to remote sensing. As hyperspectral imaging technologies advance, the data captured in hyperspectral images become more detailed and richer: spatial resolution is being enhanced, and spectral resolution is being greatly improved. The volume of information contained in hyperspectral images will grow as imaging spectrometers become increasingly advanced, and the range of applications for hyperspectral images will expand accordingly. The rising volume of data and the variety of use cases place more sophisticated demands on hyperspectral remote sensing observation technology. Hyperspectral images have more imaging bands than multispectral images and a stronger capacity to resolve objects, that is, a higher spectral resolution. However, because of the high dimensionality of hyperspectral data, the similarity between spectra, and the presence of mixed pixels, hyperspectral image classification technology still confronts a number of hurdles, the most pressing of which are listed below.

(1) Hyperspectral image data are high dimensional. Because hyperspectral images combine hundreds of bands of spectral reflectance values gathered by airborne or spaceborne imaging spectrometers, the spectral dimension of a hyperspectral image may run to hundreds of dimensions (a common remedy, dimensionality reduction, is sketched after this list).

(2) Shortage of labelled samples. In practical applications, collecting hyperspectral image data is relatively simple, but obtaining the corresponding label information is quite challenging. As a result, the classification of hyperspectral images is often hampered by a shortage of labelled samples.

(3) Spatial variability of spectral information. The spectral information of hyperspectral images varies in the spatial dimension as a result of factors such as atmospheric conditions, the sensor, the composition and distribution of ground features, and the surrounding environment, so the ground feature corresponding to each pixel is not unique.

(4) Image quality. Interference from noise and background elements during the acquisition of hyperspectral images has a significant impact on the quality of the data. The classification accuracy of hyperspectral images is directly influenced by image quality.
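
Regarding point (1), a common first step is to reduce the spectral dimensionality before classification. The sketch below does this with a principal component analysis on synthetic spectra, keeping enough components to explain 99% of the variance; the data and the variance threshold are assumptions for illustration.

```python
# Dimensionality-reduction sketch for limitation (1): PCA on pixel spectra,
# keeping the components that explain 99% of the variance. Synthetic data.
import numpy as np
from sklearn.decomposition import PCA

pixels = np.random.rand(10000, 224)        # (n_pixels, n_bands) placeholder
pca = PCA(n_components=0.99)               # keep 99% of the spectral variance
reduced = pca.fit_transform(pixels)
print("Spectral dimension reduced from 224 to", reduced.shape[1])
```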

APPLICATIONS

Figure 5: Two-dimensional projection of a hyperspectral cube

Hyperspectral remote sensing is utilised for a variety of purposes. Although hyperspectral imaging was originally developed for mining and geology (its ability to identify various minerals makes it ideal for the mining and oil industries, where it can be used to look for ore and oil), it has since spread to fields as diverse as ecology and surveillance, as well as historical manuscript research, such as the imaging of the Archimedes Palimpsest. The technology is also becoming increasingly accessible to the general public. Organisations such as NASA and the USGS have released online catalogues of numerous minerals and their spectral signatures to make them more accessible to researchers. On a smaller scale, NIR hyperspectral imaging may be used to monitor pesticide application to individual seedlings in real time for quality control of the optimum dose and uniform coverage. There are many applications that can take advantage of hyperspectral remote sensing.

1) Atmosphere: water vapor, cloud properties, aerosols

2) Ecology: chlorophyll, leaf water, cellulose, pigments, lignin

3) Geology: mineral and soil types

4) Coastal Waters: chlorophyll, phytoplankton, dissolved organic materials, suspended sediments

5) Snow/Ice: snow cover fraction, grain size, melting

6) Biomass Burning: subpixel temperatures, smoke

7) Commercial: mineral exploration, agriculture and forest production

CONCLUSION

This article covered the basics of hyperspectral remote sensing; hopefully it has made the concept clear to readers. Hyperspectral image classification and identification are crucial components of hyperspectral image processing. Several approaches to hyperspectral image classification were discussed in this article, including supervised, unsupervised, and semisupervised classification. Although each of these classification methods has its own advantages, each also has its own constraints. For example, supervised classification requires a certain number of prior conditions, and human factors will influence the classification results.

As a result, multiple approaches must be combined, based on the varied application needs and on hyperspectral images containing large amounts of information, to achieve the required classification performance. Hyperspectral image classification has become popular with the development of hyperspectral imaging technology, but existing theories and methods still have limitations for more complicated classification problems. Researching more targeted hyperspectral image classification methods will therefore be an important research direction in the future. Hyperspectral remote sensing is an extension of the spectroscopic discipline, which has been growing rapidly worldwide for many years, in addition to being a technology that can bring value to the remote sensing arena. Within a couple of sensor generations, when airborne and orbital sensors begin to produce SNR values similar to those measured in the laboratory, all spectral approaches available today will be able to use HRS data and push its applications forward. HRS technology is being more widely used in the scientific community, the number of sensors is rising, and new businesses are joining the market. The most critical stage in HRS data processing is obtaining correct reflectance or emissivity information for each pixel in the image; from there, a sophisticated analytical approach can be applied.

This means that the data must be physically reliable and stable at the sensor level, in addition to the atmospheric correction mechanism. Mixed-pixel analysis and spectral models that account for specific problems are only a few examples of what this technology can accomplish. HRS sensors in orbit are expected to propel the technology forward by providing low-cost temporal coverage of the globe and demonstrating to decision-makers that it can significantly benefit other space missions. The expanding sensor-development activity in the market will allow for a “sensor for everyone,” which will help to advance the technology. There are still numerous constraints, such as the TIR region not being fully covered, information being obtained only from a very thin surface layer, the time investment, significant data processing costs, and the great effort required to achieve a final result; nevertheless, investment in this technology is warranted. If these limitations can be overcome, and the capabilities of other sensors merged with it, HRS technology can move from a scientific demonstration technology to a practical commercial tool for remote sensing of the Earth.

CITATIONS AND REFERENCES

1) Feng X, He L, Cheng Q, Long X, Yuan Y. Hyperspectral and Multispectral Remote Sensing Image Fusion Based on Endmember Spatial Information. Remote Sensing. 2020; 12(6):1009. https://doi.org/10.3390/rs12061009

2) B. Lu, Y. He and P. D. Dao, “Comparing the Performance of Multispectral and Hyperspectral Images for Estimating Vegetation Properties,” in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 12, no. 6, pp. 1784–1797, June 2019, doi: 10.1109/JSTARS.2019.2910558.

3) Lee, Kyu-Sung, et al. “Hyperspectral versus multispectral data for estimating leaf area index in four different biomes.” Remote Sensing of Environment 91.3–4 (2004): 508–520.

4) Sluiter, R., and E. J. Pebesma. “Comparing techniques for vegetation classification using multi-and hyperspectral images and ancillary environmental data.” International Journal of Remote Sensing 31.23 (2010): 6143–6161.

5) Marshall, Michael, and Prasad Thenkabail. “Advantage of hyperspectral EO-1 Hyperion over multispectral IKONOS, GeoEye-1, WorldView-2, Landsat ETM+, and MODIS vegetation indices in crop biomass estimation.” ISPRS Journal of Photogrammetry and Remote Sensing 108 (2015): 205–218.

6) Sun, Jia, et al. “Estimating rice leaf nitrogen concentration: influence of regression algorithms based on passive and active leaf reflectance.” Remote Sensing 9.9 (2017): 951.

7) https://www.geo.university/pages/hyperspectral-remote-sensing

8) https://www.malvernpanalytical.com/en/products/measurement-type/remote-sensing

9) https://www.microimages.com/documentation/Tutorials/hyprspec.pdf

10) https://www.neonscience.org/resources/learning-hub/tutorials/introduction-hyperspectral-remote-sensing-data#toggle-0

11) https://www.umbc.edu/rssipl/people/aplaza/Papers/BookChapters/2012.EUFAR.Hyperspectral.pdf

12) https://www.hindawi.com/journals/js/2020/4817234/

13) https://rslab.disi.unitn.it/papers/R30-TGARS-SVM-HYPER.pdf
