Spectral image data form a cube consisting of two spatial dimensions and one spectral dimension. Three-color imaging (red, green, blue; RGB), commonly used in current imaging systems, captures only three spectral channels, so the rich spectral characteristics of a subject cannot be analyzed directly from the image. Hyperspectral imaging captures many more spectral channels (hundreds or even thousands) and provides far richer spectral information about the scene or object photographed, so it is widely used in environmental monitoring, military defense, cultural relic and archaeological analysis, medical diagnosis, food supervision, geological exploration, and other fields.
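As a concrete illustration, the cube structure can be sketched in a few lines of NumPy; the dimensions and values below are arbitrary placeholders, not data from any real instrument:

```python
import numpy as np

# A hyperspectral cube: two spatial dimensions (H x W) and one
# spectral dimension (B bands).  Values here are synthetic.
H, W, B = 4, 5, 31          # e.g. 31 bands from 400-700 nm at 10 nm steps
cube = np.random.rand(H, W, B)

# An RGB image keeps only 3 channels; a hyperspectral cube keeps B.
rgb_like = cube[:, :, :3]

# The full spectrum at one spatial location (a "pixel spectrum"):
spectrum = cube[2, 3, :]

print(cube.shape)       # (4, 5, 31)
print(spectrum.shape)   # (31,)
```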
In terms of the time dimension of imaging, hyperspectral imaging takes two main forms: (1) scanning-based methods (multiple exposures), which acquire static spectral images by sacrificing time to scan; and (2) computational imaging methods (hyperspectral imaging under single-exposure conditions), which obtain dynamic hyperspectral video by computationally reconstructing modulated spectral data. Because hyperspectral imaging can accurately collect, finely analyze, and precisely compute nanoscale spectral features, it is a powerful tool for scientific research and engineering applications and has become an international frontier topic in image and video processing, computer vision, and graphics research.

Principles and methods:
1. Scanning-based methods
This family of methods has gone through three stages: point scanning, line scanning, and spectral scanning. Point scanning captures the hyperspectral data of a single spatial location at a time, so acquiring a full hyperspectral data cube takes a long time. Line scanning uses a slit aperture parallel to one spatial direction while the spectrometer scans along the other, so a hyperspectral data cube can be obtained faster than with point scanning. Spectral scanning places narrowband filters of different wavebands in the optical path of the imaging device to take a series of full-spatial-resolution single-band images, which are then combined into a hyperspectral data cube. Other scanning methods exist as well, such as Fourier-transform spectral imaging, which scans a mirror of a Michelson interferometer to obtain spectral data at multiple optical path difference (OPD) values (the Fourier-domain equivalent of a tunable-filter camera). Another variant uses a diffraction grating to disperse the incident light and introduces a slit mask into the optical path to modulate the spectrum.
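The line-scanning (pushbroom) acquisition described above can be sketched as follows. The scene, dimensions, and `capture_line` helper are synthetic placeholders, assuming each exposure records one slit line across all bands:

```python
import numpy as np

# Sketch of pushbroom line scanning: each exposure captures one
# spatial line (W pixels) across all B bands; scanning over H lines
# over time stacks them into the full H x W x B cube.
H, W, B = 8, 16, 31
scene = np.random.rand(H, W, B)          # ground-truth scene (synthetic)

def capture_line(scene, row):
    """One exposure: a 2-D (W x B) slice through the slit at `row`."""
    return scene[row, :, :]

lines = [capture_line(scene, r) for r in range(H)]  # H exposures over time
cube = np.stack(lines, axis=0)                      # reassembled cube

print(cube.shape)   # (8, 16, 31)
```

The point-scan variant would need H × W exposures for the same cube, which is why line scanning is the faster of the two.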
Over the past 30 years, scanning technology has made remarkable progress. For example, Schechner and Nayar mounted a spatially varying filter on a moving camera; as the camera moves, it perceives every pixel in the scene many times, each time in a different spectral band. Unlike earlier scanning methods, acousto-optic tunable filters (AOTF) and liquid crystal tunable filters (LCTF) achieve very fast scanning speeds based on the principle of polarization. Xu et al. used an LCTF together with compressed sensing to achieve spatial-spectral coding and improve the spatial-spectral resolution of spectral video imaging. In addition to passive spectral scanning (passive illumination), Nayar et al. proposed an active illumination method that uses a multi-channel illumination scheme, stepping from one band to the next, to achieve spectral imaging. All of the above methods sacrifice temporal resolution for spectral resolution, so they can only capture hyperspectral data in static scenes. Because scanning-based methods (multiple exposures) cannot capture hyperspectral data in dynamic scenes, computational imaging methods (single exposure) were developed to capture hyperspectral video, transforming spectral imaging from "static" to "dynamic".
2. Computational imaging methods
In recent decades, to overcome the limitations of scanning-based methods, researchers have invented a variety of snapshot (single-exposure) spectral imaging methods. Between the 1930s and 1960s, snapshot spectral imaging in astronomy usually used integral field spectroscopy, based on mirror arrays (IFS-M), fiber arrays (IFS-F), or lenslet arrays (IFS-L); the name reflects the fact that each individual measurement of a 3D data-cube pixel is integrated over an area of the object. In the 1970s, some scholars proposed a multispectral beam-splitting method, which uses multiple beam splitters to divide the incident light into multiple spectral bands. A simpler approach, the multi-aperture filter camera (MAFC), uses a group of sensor arrays and places a different filter in front of each sensor to collect part of the full spectral band. In the tunable echelle imager (TEI), the output of a Fabry-Perot interferometer is cross-dispersed by a grating in one direction and by an echelle pattern in the perpendicular direction, forming on the sensor a mosaic of narrowband images of the same area in different spectral bands.
About 10 years ago, the appearance of high-resolution sensors greatly improved the spatial resolution of spectral images, making practical snapshot spectral imaging possible. Inspired by computed tomography (CT) in medical imaging, Descour and Dereniak proposed the computed tomography imaging spectrometer (CTIS), a snapshot spectral imaging method and system whose principle mirrors that of medical CT. CTIS reconstructs a three-dimensional spectral cube, with spatial and spectral dimensions, from a set of acquired two-dimensional projections that integrate spectral signals from different scene locations on the sensor. The main advantage of CTIS is its very compact system layout; its main disadvantages are the difficulty of manufacturing dispersive elements such as kinoform lenses and the missing-cone problem in CT reconstruction. Since the principle was proposed, CTIS has been continuously improved with respect to computational complexity, calibration difficulty, and the measurement process. Although hyperspectral video can be obtained with CTIS, reconstructing a single frame with a spatial-spectral dimension of 100 × 100 × 100 takes 20-30 minutes, which makes computed tomography spectroscopy unsuitable for real-time video applications.
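At its core, CTIS reconstruction is a linear inverse problem: the sensor records projections y of the unknown cube x through a system matrix A that encodes the dispersed projections. A toy-sized sketch with a random matrix (real CTIS systems use far larger operators and iterative expectation-maximization-style solvers; everything here is synthetic):

```python
import numpy as np

# CTIS reconstruction as a linear inverse problem: y = A @ x, where
# x is the flattened 3-D cube and y the 2-D sensor projections.
rng = np.random.default_rng(0)
n_vox = 30                    # voxels in a tiny "cube" (flattened)
n_meas = 60                   # sensor measurements (projections)
A = rng.random((n_meas, n_vox))
x_true = rng.random(n_vox)
y = A @ x_true                # simulated projection data

# Least-squares reconstruction; for this overdetermined, noiseless
# toy the recovery is essentially exact.
x_rec, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.max(np.abs(x_rec - x_true)))   # near zero
```

The missing-cone problem mentioned above corresponds to A being rank-deficient in a real system, which is why regularized iterative solvers are used in practice.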
Coded aperture snapshot spectral imaging (CASSI) applies the principle of compressed sensing to snapshot spectral imaging: it sparsely samples the 3D data cube and, assuming sparsity (a common attribute of natural scenes) in a multi-scale wavelet basis, reconstructs the spectral data cube from underdetermined measurements. Based on compressed sensing theory, the hyperspectral cube at each instant can be sparsely sampled in a low dimension and then reconstructed with high spectral accuracy. According to the dispersion scheme, CASSI systems are divided into dual-disperser (DD) CASSI and single-disperser (SD) CASSI. Compressed sensing spectral imaging systems are more compact, more flexible, and cheaper to deploy in various fields. However, coded-aperture imaging still has limitations: (1) because of the sparsity assumption on natural scenes, reconstruction errors are inevitable; (2) the computational complexity of reconstruction algorithms, such as TwIST, ADMM, and hyperspectral dictionary learning with sparsity constraints, is unsatisfactory, so hyperspectral data cannot be reconstructed in real time. To overcome these shortcomings, Wang et al. proposed a dual-camera imaging system based on complementary observations, together with a hyperspectral image reconstruction algorithm combining data and prior knowledge.
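The single-disperser CASSI forward model can be sketched in a few lines: the cube is masked by a binary coded aperture, each band is sheared by the disperser, and the sensor sums over bands into one 2-D snapshot. Dimensions and data below are synthetic placeholders:

```python
import numpy as np

# Sketch of the single-disperser (SD) CASSI forward model.
rng = np.random.default_rng(1)
H, W, B = 6, 6, 4
cube = rng.random((H, W, B))                      # scene (synthetic)
mask = (rng.random((H, W)) > 0.5).astype(float)   # binary coded aperture

snapshot = np.zeros((H, W + B - 1))
for b in range(B):
    coded = mask * cube[:, :, b]      # code the band with the aperture
    snapshot[:, b:b + W] += coded     # disperser shifts band b by b pixels

# Reconstruction inverts this underdetermined model under a sparsity
# prior, e.g. with TwIST or ADMM on a wavelet representation.
print(snapshot.shape)   # (6, 9)
```

Note the compression: B masked bands collapse into a single 2-D measurement, which is why a sparsity prior is needed to invert the model.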
The prism-mask modulated spectral video imaging method and prototype system (PMVIS), guided by computational imaging principles, makes full use of the dispersion of a prism and the sparse sampling of a mask to realize real-time spectral video acquisition in hardware, greatly increasing the freedom of information acquisition. Unlike CTIS and CASSI, one of PMVIS's core contributions is the idea of a parallel optical path, which breaks the inherent trade-off between high spatial and high spectral resolution: adding a second RGB camera to the system yields a parallel RGB video with high spatial resolution, while the original optical path yields a spectral video with low spatial resolution. The spectral information is then superimposed on the high-spatial-resolution RGB video, and fusion processing finally produces a video with both high spatial and high spectral resolution.
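The dual-path idea can be illustrated with a deliberately crude fusion: upsample the low-resolution spectra to the fine grid and modulate them by a high-resolution luminance map. This is a stand-in for the real PMVIS fusion algorithm, not its implementation, and all data and the luminance-scaling rule are assumptions for illustration:

```python
import numpy as np

# Toy sketch of dual-path fusion: low-spatial-resolution spectral
# cube (prism-mask path) + high-spatial-resolution luminance (RGB
# path) -> fused high-resolution cube.  All data are synthetic.
rng = np.random.default_rng(2)
H, W, B, s = 4, 4, 8, 4                  # s: spatial upsampling factor
lowres_spec = rng.random((H, W, B))      # spectral path (low spatial res)
hires_lum = rng.random((H * s, W * s))   # luminance proxy for the RGB path

# Upsample each spectrum to the fine grid, then rescale by the
# high-resolution luminance so the fused cube inherits fine detail:
up = np.repeat(np.repeat(lowres_spec, s, axis=0), s, axis=1)
scale = hires_lum / np.maximum(up.mean(axis=2), 1e-8)
fused = up * scale[:, :, None]

print(fused.shape)   # (16, 16, 8)
```

By construction the fused cube's band-averaged image matches the high-resolution luminance, while the spectral shape at each pixel comes from the spectral path.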
Since spectral resolution is a core factor in the success of many machine vision applications, some scholars have proposed the snapshot hyperspectral imaging Fourier transform spectrometer (SHIFT), the multispectral Sagnac interferometer (MSI), the image mapping spectrometer (IMS), and the light field imaging spectrometer (LFIS). All of these sacrifice spatial resolution for hyperspectral resolution. On this basis, Manakov et al. proposed a reconfigurable camera add-on that achieves all-optical imaging through physical replication: the sensor image is copied into multiple identical copies, and the required spectral information is recovered through different filters. Yang et al. developed hyperspectrally compressed ultrafast photography (HCUP), which operates in a single-exposure imaging mode, overcomes the technical limitations of active lighting, and achieves ultrafast spectral imaging.
With the development of big-data science and semiconductor device technology, single-exposure computational spectral imaging methods have emerged. Compared with traditional scanning-based methods, computational imaging has two significant advantages: (1) since it does not need to scan in the time dimension, it can obtain dynamic spectral video; (2) a spectrometer can be constructed without specially manufactured complex mechanical structures, which has great potential in applications with low cost and strict size requirements. On the other hand, computational imaging also has limitations: because of the sparsity assumption on natural scenes, spectral reconstruction errors are inevitable, and the computational complexity of spectral reconstruction algorithms is high, so real-time high-precision spectral reconstruction still faces substantial challenges. In many application fields, computational imaging is more convenient than scanning (it is used like a traditional camera, with no scanning required). Meanwhile, advances in computational imaging methods, new materials, and micro/nano optical technology pave the way for future miniature spectrometers with high performance, low cost, and the size of a smartphone camera. In general, every spectral imaging system still has some defects, which will need to be addressed by further combining optical principles, compressed sensing theory, machine learning algorithms, and semiconductor device manufacturing technology.
In the past few years, spectral cameras based on computational imaging principles have gradually begun to meet the needs of practical commercial applications. With the rapid development of semiconductor device manufacturing, new spectral imaging principles and systems have begun to take shape: micro/nano fabrication technology enables wavelength-level modulation on a chip, as in colloidal quantum dot (CQD) spectrometers and IMEC's on-chip filter technology. These technologies replace the traditional dispersive elements (such as prisms and gratings) that have been used for hundreds of years. Among them, IMEC's design provides high spatial resolution (7 Mpx) and spectral resolution (150+ bands) in a compact, lightweight, mass-manufacturable package. In addition, metasurface technology based on resonant subwavelength photonic structures realizes new methods of wavefront control and light focusing, laying a foundation for the next generation of spectral imaging technology.