Introduction to Hyperspectral Imaging

Hyperspectral imaging comes with some unique challenges when compared to traditional computer vision. This introduction describes the key concepts involved and the basic steps to get started.

Push Broom Cameras

The most common type of hyperspectral imaging camera used in industrial applications is the so-called push broom scanner. The most important difference from a regular camera is that push broom cameras only image a single line, not an entire image. The following image shows a simplified model of how a push broom camera works:

At the most basic level, a slit is used to isolate just a single line of the image, which then passes through a diffraction grating in the same orientation as the slit to diffract the various wavelengths of light in different directions. Finally, the light hits a standard optical sensor for the wavelength range in question. In practice the actual optics of a push broom camera are more complicated, and details may vary, but the basic principle remains the same.

Therefore, while a push broom camera returns a 2D image to the user, one dimension of that image is used to encode the wavelength information about each point, while the other dimension encodes the spatial dimension of the line.
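The resulting data layout can be sketched as follows. This is a toy illustration with made-up dimensions; real software would typically use numpy arrays rather than nested lists:

```python
# Toy sketch of the indexing involved: one push broom frame is spatial
# position x wavelength band; scanning stacks frames into a cube. The
# dimensions are made-up toy values, not those of any real camera.

spatial, bands = 4, 3  # pixels along the slit, spectral bands

def make_frame():
    """One camera frame: for each spatial position, one value per band."""
    return [[0.0 for _band in range(bands)] for _pos in range(spatial)]

# Acquiring frames while the object moves yields an HSI cube indexed as
# scan line x spatial position x wavelength band:
n_lines = 5
cube = [make_frame() for _line in range(n_lines)]

print(len(cube), len(cube[0]), len(cube[0][0]))  # 5 4 3
```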

Imaging Setups

In order to measure an actual image, push broom cameras require the user to create motion in the direction perpendicular to the slit and to acquire multiple frames during that motion. The frames then have to be combined into an image (or rather, an HSI data cube). It should be noted that the aspect ratio of the image generated by the motion of the imaged object relative to the camera is only correct if the speed of the motion is synchronized with the frame rate of the camera; otherwise the image may appear stretched or squished. There are various methods to generate the required motion.
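The synchronization requirement can be illustrated with a small calculation. The numbers here are invented for illustration only, not values of any specific system:

```python
# Sketch: choosing a frame rate so that pixels in the assembled image come
# out square. All numbers are illustrative assumptions.

def required_frame_rate(speed_mm_s: float, pixel_footprint_mm: float) -> float:
    """Frames per second needed so that each acquired line advances the
    scene by exactly one pixel footprint (undistorted aspect ratio)."""
    return speed_mm_s / pixel_footprint_mm

# A belt moving at 100 mm/s under a camera whose line resolves 0.5 mm per
# spatial pixel would require:
fps = required_frame_rate(100.0, 0.5)
print(fps)  # 200.0
```

If the camera runs slower than this rate, the image appears squished along the scan direction; if it runs faster, the image appears stretched.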

While there are cameras that move the entire sensor assembly of the camera behind the objective lenses in order to obtain an image, this is the exception. More commonly the camera is fixed and the object moves.

There are two types of setups: desktop systems, typically used in a lab environment, where the measurement process is a manual process, and industrial setups, where the measurement occurs automatically.

A desktop measurement setup will typically consist of an assembly that contains a camera looking down onto a linear stage that moves under the control of the software used to record the image. For example, the LuxFlux PolyScanner Inspection System, as shown in the following image,

consists of a linear stage, a light (see below), a hyperspectral camera, an industrial PC, and the fluxTrainer hyperspectral data acquisition and analysis software suite, all fitted into an enclosure. For more information please contact

In an industrial environment there are three primary methods of employing a push broom camera:

  • Above a conveyor belt that transports the objects that are to be measured. In that case the measurement is a continuous process in which the camera constantly images the line that is currently beneath it in order for the software to reconstruct an endless image. Data processing has to occur in real-time every time a new line is being recorded.

  • A chute where the objects to be imaged are falling down. This is similar to the conveyor belt, but in this case it is gravity that generates the motion of the objects. Note that objects should ideally fall at the same rate. Data processing has to occur in real-time every time a new line is being recorded. Imaging the objects in freefall is another variant of a chute.

  • As an inspection station, where an industrially controlled motion system moves a sample that is to be measured into position, a linear stage automatically passes the sample beneath the camera, and once the sample has been completely imaged, it is given back to the motion system. Data processing would typically take place once the entire sample has been imaged in order to provide an answer for the entire image.

In all of these industrial cases the hyperspectral measurement system must be integrated into an existing workflow. LuxFlux provides the fluxRuntime real-time data processing embedded application to facilitate this. Please contact for further details.


Lighting

The most underestimated part of performing hyperspectral measurements is how to provide a good lighting source. Note that as part of the PolyScanner Inspection System, LuxFlux provides a light source that fulfills all of the required criteria; for more information please contact

Broad Spectrum

Many lights emit only a set of peaks at specific wavelengths. Fluorescent lights are the worst offenders, but many types of LEDs are a close second. This is enough for the human eye or a standard RGB camera to pick up a somewhat accurate picture, but it is not good enough for a hyperspectral camera. A camera only measures the light that is reflected by an object (under the assumption that the object does not emit light by itself), and if the light used to illuminate the object does not contain a specific wavelength, the object will not reflect it, and the camera will not pick up that specific wavelength. This completely negates the advantage hyperspectral cameras have over regular cameras.

For this reason it is imperative to use a light source that emits the entire spectrum that is to be imaged, in the ideal case with equal intensity.

Halogen lights provide a broad spectrum of comparable intensity in the short-wave infrared regime (700 nm - 2500 nm). That spectrum also extends down into the visible spectral range, although halogen lights are not ideal for blue light. For the visible range there are specialized LEDs that emit broad-spectrum light between around 420 nm and 700 nm.

For this reason it is recommended to use halogen lights to illuminate samples when using a SWIR camera (900 nm - 1700 nm), and a combination of halogen lights and broad-spectrum LEDs when using a VNIR camera (400 nm - 1000 nm).

Light Intensity

A hyperspectral camera requires a higher light intensity than a regular camera for the same scene. The reason is quite simple: where light hits a single pixel on a regular camera, the same light is split up into its wavelength components on a hyperspectral camera. A pixel of a hyperspectral camera therefore only sees a fraction of the total light of the point in space that is projected onto it.

For example, a push broom camera that splits up into 300 bands will see 1/300th of the light per pixel as compared to a regular camera that images the same point in space with just a single pixel.
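This relationship is simple arithmetic, illustrated here for the 300-band example above:

```python
# Illustrative: light reaching a single pixel, regular vs. hyperspectral.

total_light = 1.0                 # light from one scene point (arbitrary units)
bands = 300                       # spectral bands of the push broom camera

regular_pixel = total_light       # all of it lands on one pixel
hsi_pixel = total_light / bands   # the same light is split across 300 pixels

print(hsi_pixel / regular_pixel)  # 1/300, roughly 0.0033
```

In practice this deficit is compensated by stronger illumination, longer exposure times, or higher gain (at the cost of more noise).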


Homogeneous Illumination

When imaging a scene, the light applied to the scene should be roughly homogeneous across the entire scene. In the case of a push broom camera, for example, the entire line that is being imaged should be illuminated with approximately the same intensity. While this is desirable in computer vision in general, not just in hyperspectral measurements, it is more difficult to achieve when performing hyperspectral measurements, due to the other criteria the light has to fulfill at the same time (broad spectrum, larger overall intensity).

Shadow Free

Finally, directional light produces shadows and highlights. These features negatively affect the spectroscopic analysis of the data. Hence, it is advisable to use a diffuse light source that reduces shadowing.

Reference Measurements

While white balancing can be important in regular computer vision applications, it is especially important in hyperspectral applications. Hyperspectral imaging unleashes its full potential when spectroscopic methods can be applied to individual pixels. For example, in the SWIR wavelength range, hyperspectral imaging may be used to distinguish various types of plastic from one another. But spectroscopic methods rely on an accurately calculated absorbance, that is, the fraction of light that is absorbed by the material. The amount of light that enters the camera is determined by both the lighting conditions and the amount of light a material absorbs. A white reference measurement is the simplest method of removing the influence of the lighting conditions themselves, so that (in an ideal case) only the material properties influence the amount of light the camera sees.

A white reference measurement should be performed in the same lighting conditions as the real measurements, with the same camera parameters (exposure, gain). The user should place a white reference of suitable material (such as optical PTFE, magnesium oxide or barium sulfate) underneath the camera at the same distance as the objects that are being imaged and perform a number of measurements that should be averaged to reduce the noise (typically one would average 10 or more measurements). The intensities of the actual measurements may then be divided by the intensities of the reference material to remove the influence of the lighting conditions, which also takes care of remaining inhomogeneities in the illumination.
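The white reference correction can be sketched as follows. This is a minimal pure-Python illustration for a single pixel with three bands and made-up intensity values; production code would operate on whole frames or cubes, typically with numpy:

```python
# Minimal sketch of white referencing; a single pixel with three bands and
# made-up intensity values (arbitrary camera counts).

def average_frames(frames):
    """Average several white reference frames band by band to reduce noise."""
    n = len(frames)
    return [sum(band) / n for band in zip(*frames)]

def to_reflectance(raw, white):
    """Divide raw intensities by the white reference to remove the influence
    of the illumination (including remaining spatial inhomogeneities)."""
    return [r / w for r, w in zip(raw, white)]

# Ten hypothetical white reference frames (here identical for simplicity):
white_frames = [[1000.0, 1200.0, 900.0] for _ in range(10)]
white = average_frames(white_frames)

raw = [500.0, 900.0, 450.0]        # one measured pixel, three bands
reflectance = to_reflectance(raw, white)
print(reflectance)                 # [0.5, 0.75, 0.5]
```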

As lighting conditions may change slightly over time, it is recommended to perform a white reference measurement as often as is feasible. For a lab environment a white reference measurement should be taken at least at the beginning of every measurement series. For the most accurate measurements the white reference measurement should be repeated in regular intervals, for example every 15 minutes.

Some light sources may require some time to reach an equilibrium state once they are switched on. During that time the spectrum of the light they emit may change slightly. For this reason it is considered best practice to wait some time after switching on the light source before performing any measurement (and especially before performing the white reference measurement). The warmup time will depend on the specific light source, but waiting for at least 5 minutes is good practice.

Note that the subtler the difference in spectra, the more important a good white reference measurement becomes. For example, when distinguishing PE from PP with a SWIR camera, the white reference does not need to be perfect, as the spectral differences are quite prominent, and one could reuse a white reference measurement made on a different day, for example. On the other hand, to accurately measure substance concentrations, or perform precise color measurements, an accurate white reference measurement is required that faithfully captures the lighting conditions.

Computer Setup Basics

In this section the prerequisites for connecting to HSI cameras from a computer are discussed. As HSI cameras use interfaces that are not used in the consumer market (they do not appear as webcams in the operating system, for example), there are some pitfalls that users may not be aware of; these are discussed here.

System Requirements

Hyperspectral data is large. For example, if a camera images 1000 spatial pixels and has 300 wavelengths, a single measurement (which is just a line in space) will have 0.3 MP. As data processing with hyperspectral data happens in floating point (because the user works in reflectances or absorbances, which are fractional quantities), assuming single precision an individual frame for that specific camera has a size of 1.2 MB. If a square spatial region is measured, 1000 frames have to be acquired, leading to a spectral cube that uses 1.2 GB of RAM -- and no actual processing has been done yet at this point.
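The arithmetic above can be reproduced in a few lines:

```python
# Reproducing the memory estimate from the text: 1000 spatial pixels,
# 300 wavelengths, single-precision floating point.

spatial_pixels = 1000
bands = 300
bytes_per_value = 4                  # 32-bit float

frame_bytes = spatial_pixels * bands * bytes_per_value
print(frame_bytes / 1e6)             # 1.2 (MB per frame, i.e. per line)

frames = 1000                        # a square spatial region
cube_bytes = frame_bytes * frames
print(cube_bytes / 1e9)              # 1.2 (GB for the cube, before processing)
```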

Live processing data line by line (for push broom cameras) is possible with smaller amounts of RAM, and even 2 GB might be sufficient, depending on the specific model. But as soon as data is being assembled into entire HSI cubes the amount of RAM required is larger. 32 GB or 64 GB are strongly recommended for training HSI data models if they contain large cubes.

In addition to a lot of RAM, a reasonably fast CPU is also recommended, for two reasons: when communicating with a camera, the CPU must be able to process the low-level interrupts generated by the incoming camera data fast enough. In addition, because the amount of data is very large, a faster CPU will also be able to process the HSI data more quickly.

Generally speaking, a CPU on the level of a mid- to high-end consumer system with at least six or eight cores should be sufficient for the vast majority of use cases.

Calibration Files

Most HSI cameras come with calibration files. There are two types of calibration information that apply to HSI cameras:

  • Pixel to wavelength mapping: since a point in real space is mapped to an entire line of a push broom camera, with different wavelengths being mapped to different pixels, it is imperative that the software processing the data knows which pixel corresponds to which wavelength. While the overall wavelength range will depend only on the camera model itself, manufacturing tolerances dictate the precise value for each pixel, which must be determined individually, typically by the manufacturer.

  • Correction information: for all cameras there will be aberrations due to the optics involved. If these corrections are to be performed in software, the required information must be provided in some manner.

Note that this correction information is not always provided to the software processing the data: some cameras will perform the required corrections on the camera itself (typically in an FPGA), while other cameras are constructed in such a manner that the aberrations are small enough not to matter in most use cases. The only information that software processing HSI data from a camera must always have is the mapping between pixels and wavelengths.
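As an illustration of a pixel-to-wavelength mapping, many calibration formats express the mapping as a low-order polynomial in the spectral pixel index. The coefficients below are invented for illustration and do not describe any real camera:

```python
# Hypothetical pixel-to-wavelength mapping as a polynomial in the spectral
# pixel index. The coefficients are made-up illustrative values.

def pixel_to_wavelength(pixel, coeffs=(900.0, 2.7)):
    """Evaluate wavelength [nm] = c0 + c1*pixel + c2*pixel**2 + ..."""
    return sum(c * pixel**i for i, c in enumerate(coeffs))

# With these assumed coefficients a 300-pixel spectral axis spans roughly
# 900 nm to 1700 nm, similar to a typical SWIR camera:
print(pixel_to_wavelength(0))              # 900.0
print(round(pixel_to_wavelength(296), 1))  # 1699.2
```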

In addition to calibration files, the calibration information may sometimes be stored on the camera itself, and can be obtained by the software during the connection process. The following configurations have been observed in actual cameras available on the market:

  • Pixel to wavelength mapping stored on the camera, part of the correction information stored on the camera, but other correction information stored in a calibration file. The corrections are performed purely in software.

  • All required correction information stored on the camera, and correction done in an FPGA on the camera itself, but the pixel to wavelength information stored in a calibration file.

  • No calibration information stored on the camera, but only pixel to wavelength mapping information provided by the manufacturer; the aberration effects being small enough to not be relevant for most use cases.

  • All calibration information (pixel to wavelength mapping, aberration corrections) stored on the camera, corrections performed on the camera, no calibration file required.

Except in the last case, this means that the calibration information must be provided to the software in the form of a calibration file; otherwise it is not possible to properly use the camera. Often manufacturers will provide the required calibration file on a USB stick within the packaging of the camera, or via email upon purchase.

Note that there is currently no standardized format for calibration files for HSI cameras; each manufacturer has their own format.

USB3 Cameras

Some HSI cameras connect via a USB 3.x port. As a first step it is important to note that, in contrast to USB 2.0, USB 3.x uses additional data lines to achieve higher transfer speeds. It is therefore important to use a port on the computer that supports USB 3.x (in the case of USB Type A, USB 3.x receptacles typically use light blue plastic, while older USB versions typically use black or white plastic). The maximum distance for reliable data transfer over USB 3.x is 3 meters (approx. 10 feet); for longer distances an active repeater or a hub should be used. If the camera is connected to a hub, the camera should be the only device connected to that hub, and the hub should support USB 3.x and be rated for the required transfer speeds.

If the camera has a USB Type Micro B connector, care should be taken when connecting and disconnecting the cable from the camera, as that connector, especially in its USB 3.x variant, is not quite as robust as other connectors and can be damaged more easily.


Windows

On Windows systems it is important to install the Windows kernel drivers so that the device is recognized by Windows. The manufacturer of the camera electronics will provide an installer that installs this driver. The camera will typically appear in the device manager as a "USB device" after the kernel driver has been installed successfully.


Linux

On Linux systems no kernel driver has to be installed, but the permissions for the camera must be adjusted so that the user can access it. In that case the user should create a file /etc/udev/rules.d/99-usb3-camera.rules with the following contents:

SUBSYSTEM=="usb", ACTION=="add", ATTRS{idVendor}=="0000", ATTRS{idProduct}=="0000", GROUP="plugdev"

The 0000 values should be replaced by the actual vendor and product IDs of the device in question. After the change, the rules can be reloaded using sudo udevadm control -R. The device must then be physically reconnected for the rule to take effect. Additionally, any user that is to be able to use the camera must be a member of the plugdev group.

Most camera manufacturers will provide such a rules file themselves.

Gigabit Ethernet Cameras

Many HSI cameras connect via a Gigabit Ethernet port to the computer – for example, but not exclusively, following the GigE Vision standard. While it is in principle possible to use a camera in a local network, it is not recommended. Instead, a dedicated link should exist between the camera and a free ethernet port on the computer.

Network configuration will happen over the operating system's standard configuration interface. The most important setting is to have the computer use an IP address in the same subnet as the camera.

When powered on, the vast majority of cameras will try to obtain an IP address via DHCP. If that fails (because there is no DHCP server available), they will automatically select an IP address in the link-local range 169.254.0.0/16 (between 169.254.1.0 and 169.254.254.255), as defined by RFC 3927. It is important to note that cameras will typically try DHCP only once when powered on, and will not retry until a power cycle. For this reason we do not recommend using DHCP here: if the DHCP server is offline when the camera is powered on, the camera will not get an IP address from the range the user expects.

Hence the simplest solution to connecting the camera is to connect it directly to the computer via a dedicated link and have the computer pick an address from the same link-local address range.
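Whether a given address belongs to the RFC 3927 link-local range can be checked with Python's standard ipaddress module, for example:

```python
# Check whether an address lies in the RFC 3927 link-local range
# (169.254.0.0/16) using only the Python standard library. The addresses
# below are hypothetical examples.

import ipaddress

camera_addr = ipaddress.ip_address("169.254.10.20")  # hypothetical camera IP
lan_addr = ipaddress.ip_address("192.168.1.5")       # ordinary private LAN IP

print(camera_addr.is_link_local)  # True
print(lan_addr.is_link_local)     # False
```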

Important note: if there is more than one interface that uses the link-local range then connecting to the camera will likely not work properly. This is due to limitations in how the IPv4 network stack works. In that case it is recommended to use a fixed IP address on both the camera and the computer connected to the camera. This is beyond the scope of this introduction though.


Windows

To ensure that automatic addressing is active on Windows systems, please do the following:

  • Go to "System Settings", "Network", "Status"

  • Look for the network interface the camera is connected to

  • Click on "Change connection settings" for that interface

  • Ensure that the "IP assignment" setting is set to "Automatic (DHCP)" (note that Windows falls back to link-local addressing as soon as DHCP times out with this setting)


macOS

To ensure that automatic addressing is enabled on macOS systems, please do the following:

  • Go to "System Preferences", "Network"

  • Select the network interface the camera is connected to

  • Ensure that the "Configure IPv4" is set to "Using DHCP" (note that macOS falls back to link-local addressing as soon as DHCP times out with this setting)


Linux

On Linux systems the steps to ensure that link-local addressing is enabled depend on the desktop environment used. Note that Linux does not fall back to link-local addressing by default when using DHCP; it must be explicitly switched over.

On Ubuntu with GNOME, the following steps will reconfigure the interface:

  • Click on the network icon at the top right of the screen

  • Select the network adapter the camera is connected to

  • Click on "Wired Settings" for that adapter

  • Click on the Settings icon of the network adapter

  • Go to the "IPv4" tab and select the "IPv4 Method" to be "Link-Local only"

On Linux systems with KDE, the following steps will reconfigure the interface:

  • Click on the network icon at the bottom right of the screen

  • Select the icon "Configure network settings..."

  • Select the adapter to which the camera is connected

  • Click on the "IPv4" tab

  • Ensure that "Method" is set to "Link-local"

As there are many ways to configure the network on Linux systems, the description here covers two of the most commonly used desktop environments. A full overview of all possible setups is beyond the scope of this introduction.