Radiometric Correction
Objectives
• To understand Empirical Line Calibration.
• To perform ELC using the ENVI system.
Part I - Empirical Line Calibration
________________________________________
The following description of Empirical Line Calibration (ELC) is directly from the text by
Dr. Jensen (2015). If you want more specific knowledge of ELC, please refer to Jensen (2015):
pp 220-223.
Atmospheric correction can be performed using Empirical Line Calibration (ELC). ELC forces the
remote sensing data to match in situ spectral reflectance measurements, which are
obtained at approximately the same time as the remote sensing platform overflight. ELC is
based on the equation:
BVk = ρk Ak + Bk,
where BVk is the digital brightness value for a pixel of band k; ρk equals the in situ surface
reflectance of the materials within the remote sensor IFOV at a specific wavelength; Ak is a
multiplicative term (gain) affecting the BVk, and Bk is an additive term (offset). The
multiplicative term is associated primarily with atmospheric transmittance and instrumental
factors, and the additive term deals primarily with atmospheric path radiance and instrumental
offset (i.e., dark current).
To use ELC, the analyst usually selects two or more areas in the scene with different albedos
(e.g., one bright target such as a sand pile and one dark target such as a deep, nonturbid water
body). The areas should be as homogeneous as possible. In situ spectroradiometer
measurements of these targets are made on the ground. The in situ and remote sensing–derived
spectra are regressed and gain and offset values computed. The gain and offset values are then
applied to the remote sensor data on a band by band basis, removing atmospheric attenuation.
Note that the correction is applied band by band and not pixel by pixel.
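The band-by-band gain/offset regression described above can be sketched in a few lines of Python. This is a minimal illustration, not ENVI's implementation; the brightness values and reflectances below are hypothetical stand-ins for one band's bright (sand) and dark (water) pairs.

```python
import numpy as np

# Hypothetical paired measurements for one band: image brightness values (BV)
# over two bright and three dark targets, and matching in situ reflectance.
bv = np.array([862.0, 845.0, 121.0, 118.0, 115.0])   # image digital numbers
rho = np.array([0.42, 0.40, 0.02, 0.02, 0.01])       # field reflectance

# Fit BVk = Ak * rho_k + Bk by least squares: slope = gain Ak, intercept = offset Bk
A, B = np.polyfit(rho, bv, 1)

# Invert the fitted model to convert the whole band from BV to reflectance
def bv_to_reflectance(bv_band, gain, offset):
    return (np.asarray(bv_band) - offset) / gain

refl = bv_to_reflectance(bv, A, B)
```

Applying `bv_to_reflectance` to every pixel of the band performs the same correction ENVI applies after it computes the factors; note the fit is per band, never per pixel.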
Most multispectral remote sensing datasets can be calibrated using empirical line
calibration. The difficulty arises when trying to locate homogeneous bright and dark targets in
the study area, collecting representative in situ spectroradiometer measurements, and extracting
uncontaminated pixels of the calibration targets from the imagery. If the analyst does not have
access to in situ spectra obtained at the time of the remote sensing overflight, it might be possible
to use spectra of such fundamental materials as clear water and sand (quartz) that are stored in
spectral libraries (e.g., from NASA’s Jet Propulsion Laboratory (JPL), USGS, Johns Hopkins
University spectral library). Hopefully, some of these materials exist in the scene and the analyst
can locate the appropriate pixel and pair the image brightness values with the library in situ
spectroradiometer data.
You will use several spectral pairs to perform the Empirical Line Calibration. Those pairs
are used to plot a regression line, which is used to modify the input image. Since the purpose of
the ELC is to define the regression line, you should use at least one pair of spectra from both
bright and dark areas.
Part II - Perform Empirical Line Calibration Using the ENVI ELC Tools
________________________________________
WorldView-2 Imagery
File name: msr4c1p001.img
Location: Tampa Bay, FL, USA
Sensor: WorldView-2
Bands: Coastal Blue (425 nm), Blue (480 nm),
Green (545 nm), Yellow (605 nm), Red (660
nm), Red-Edge (725 nm), NIR1 (833 nm),
NIR2 (950 nm).
Spatial resolution: 2 m × 2 m
Acquisition date: May 1, 2011
In situ spectral measurements
In this exercise, we will use in situ spectral measurements taken from the beach and bay waters
as ground-based spectra. The in situ spectra, including two points from the beach (sand1.txt and
sand2.txt) and three points from bay/sea waters (water1.txt, water2.txt, and water3.txt), were
measured on May 3, 2011. See the in situ spectral measurements in the Lab#2 folder.
Open msr4c1p001.img for Empirical Line Calibration
Now open msr4c1p001.img: from the File pull-down menu select Open (or click the Open
icon directly), navigate to your flash drive (or to wherever you saved the image file), and select
msr4c1p001.img. The image bands are then listed in the Data Manager panel. From the panel,
select Band 7 (NIR1), Band 5 (Red), and Band 3 (Green) for the R, G, and B color guns to
display a (standard) false color composite image.
Save Dark and Bright Image/pixel Spectra (spectral profiles) in Spectral Library
For this exercise, we need two bright (sand beach) and three dark (sea water) image/pixel
spectra (BV) and the corresponding sand/water in situ spectral measurements to fit the
regression lines for ELC. To do so, first locate and export the five image/pixel spectra
(two corresponding to the in situ sand spectra and three corresponding to the in situ seawater
spectra) to a spectral library. From the Image Window, select Display/Profiles/Spectral (or
click its icon), then type in sample 325, line 2268 to locate the 1st sand/beach pixel and display
its Spectral Profile (Figure 1a). Export/save the sand (or water) image/pixel spectrum as a
spectral library file (.sli) by pressing Export/Spectral Library… from the Spectral Profile panel,
then click OK to save the 1st sand/beach pixel spectral profile in the folder with the lab data
(Figure 2). Repeat the same steps to locate and export the spectral profiles for the 2nd sand
pixel and the three water pixels, one by one: sample 306, line 2325 for the 2nd sand/beach
pixel; sample 522, line 2086 for the 1st water pixel; sample 398, line 1760 for the 2nd water
pixel; and sample 234, line 1480 for the 3rd water pixel. Their corresponding spectral profiles
are shown in Figures 1b, 1c, 1d, and 1e.
[Figures 1a–1e: spectral profiles of the two sand/beach pixels and the three water pixels]
[Figure 2: exporting/saving a spectral profile to a spectral library (.sli)]
Perform the Radiometric Correction (ELC)
After you have exported/saved the five image (pixel) spectral profiles (.sli) and copied the five
in situ spectra (.txt) to your own folder/drive, you are ready to perform the radiometric
correction that converts the WorldView-2 (WV2) image from BV to reflectance through ELC.
To do so, go to the Toolbox, open Radiometric Correction, then select Empirical Line Compute
Factors and Correct (if you already have existing ELC factors, select Empirical Line Correct
Using Existing Factors instead). Check msr4c1p001.img in the Empirical Line Input File panel,
perform Spatial and Spectral Subsets if necessary, then click OK to open the Empirical Line
Spectra panel (Figure 3). Fill the panel by importing the five image/pixel spectra (five .sli files)
via Data Spectra: Import Spectra and the corresponding five in situ spectra (five .txt files) via
Field Spectra: Import Spectra; these pairs are used to fit the regression/calibration model that
converts the WV2 BV image data to reflectance format.
To fill the panel, first import the five Data Spectra (image/pixel spectra) and then the five
Field Spectra, each by clicking the corresponding Import Spectra button. When you click
Import Spectra under Data Spectra, a Data Spectral Collection panel pops up for selecting the
spectra to import. Press Import/from Spectral Library file… to select the five image/pixel
spectra (.sli) from the Spectral Library Input File panel one by one. When you click a .sli file
(e.g., sand1.sli) in the panel, an Input Spectral Library panel pops up (Figure 4); select the
spectrum and click OK to load it into the Data Spectral Collection panel. If the image/pixel
spectra are not listed in the Spectral Library Input File panel, press Open/Spectral Library…
(at the bottom) to find each saved image spectrum (.sli) and click it to load it into the Spectral
Library Input File panel (if you saved the image spectra and have not exited ENVI, they should
already be listed). The Data Spectral Collection panel should now look like Figure 5 (with five
image/pixel spectra). Press Select All and then Apply to fill the five image spectra into the
Empirical Line Spectra panel. Close the Data Spectral Collection panel, return to the Empirical
Line Spectra panel, and click Import Spectra under Field Spectra. Follow the same procedure
as for the image/pixel spectra to import the five in situ spectra (sand1.txt, sand2.txt, water1.txt,
water2.txt, and water3.txt). Note that this time you need to press Import/from ASCII file…
(Figure 6), because the five in situ/field spectra are saved as ASCII text (you can open and
check them with Notepad).
After you import the five pairs of image/field spectra, pair each image spectrum with its in situ
spectrum by selecting the matching Data Spectrum and Field Spectrum and pressing Enter
Pairs, similar to Figure 7. From the filled panel, click OK; in the small Empirical Line
Calibration Parameters window that pops up, enter your output filename (for the calibrated
image file, e.g., msr4c1p001_elc.img) and output calibration filename (.cff) (see Figure 8),
then click OK to perform the ELC.
[Figure 3: Empirical Line Spectra panel. Figure 4: Input Spectral Library panel. Figure 5: Data
Spectral Collection panel. Figure 6: importing field spectra from ASCII files. Figure 7: paired
data and field spectra. Figure 8: Empirical Line Calibration Parameters window.]
You will now find that an ELC-calibrated image file (in reflectance) has been created in the
Data Manager panel and automatically displayed in the View. You can display the ELC image
(i.e., the calibrated WV2 imagery) in whichever color composite you prefer by clicking Change
RGB Bands… in the Layer Manager at the left of the View. You may notice that the spectral
profile of the calibrated image still labels its vertical axis as Digital Number (0−2047), although
the actual values are reflectance [0−1]. Therefore, we need to edit the calibrated header file
(.hdr). To do so, close the calibrated image and the original WV2 image, then go to the folder
where the calibrated header file (msr4c1p001_elc.hdr) was saved and open the .hdr file, where
you will find two lines:
z plot range = {0.00, 2047.00}
z plot titles = {Wavelength, Digital Number}.
Change them to
z plot range = {0.00, 1.00}
z plot titles = {Wavelength, Reflectance} and save the .hdr file.
Then you can open msr4c1p001_elc.img and check that the spectral profile's vertical axis is
Reflectance with values from 0 to 1 (right-click and press Auto Scale Y-Axis in the Spectral
Profile to rescale the vertical axis).
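The two-line header edit described above can also be automated. Below is a convenience sketch of the same Notepad edit in Python; the filename and the exact "z plot" strings follow the text above, but run it only on a copy of your own header file.

```python
from pathlib import Path

def fix_envi_header(hdr_path):
    """Rewrite the two 'z plot' lines in an ENVI .hdr file so the
    spectral-profile axis is labeled Reflectance [0-1] instead of
    Digital Number [0-2047] (the edit described in the lab text)."""
    text = Path(hdr_path).read_text()
    text = text.replace("z plot range = {0.00, 2047.00}",
                        "z plot range = {0.00, 1.00}")
    text = text.replace("z plot titles = {Wavelength, Digital Number}",
                        "z plot titles = {Wavelength, Reflectance}")
    Path(hdr_path).write_text(text)
```

For example, `fix_envi_header("msr4c1p001_elc.hdr")` performs the same change you would make by hand in Notepad.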
Part III - Compare the Calibrated Image with the Original One
Now compare the Empirical Line Calibrated image with the original WV2 data. Please close
all image windows. Open the two images (original WV2 image: msr4c1p001.img and Empirical
Line Calibrated image: msr4c1p001_elc.img) in two new views (either two vertical views or
two horizontal views). After displaying the two composite images, press Views/Link Views to
link them (either Geo Link or Pixel Link) by clicking View 1 and View 2, then click OK in the
Link Views panel to close it. Click the Spectral Profile icon after clicking View 1 and again
after clicking View 2 to pop up the two Spectral Profile plots, then type the coordinate
(2355, 873) into each and press Enter. You will now see the two healthy-vegetation curves, one
in each plot. Use the Crosshair cursor to locate a sea-water pixel you consider representative
(note that there are no data for the large lower-right area of the image), and follow the same
procedure to create spectral profiles for an urban pixel in both images. You can see the profiles
more clearly by right-clicking and pressing Auto Scale Y-Axis in the Spectral Profile to rescale
the Y axis.
Questions:
1. Display and compare differences of the spectral profiles between the ELC image and the
original image for (Hint: You need to display, describe the differences, copy/paste spectral
profiles of ELC image and original image):
- Healthy Vegetation (sample 2290, line 620)
- Urban (sample 110, line 151)
- Sea Water (sample 1246, line 455)
2. Do you think the ELC method corrected the atmospheric effect very well? What errors might
be included in your ELC method (Hint: Spatial and temporal errors, not perfectly matching
between image spectra and in situ/field spectra; white point(s) is not whitest and black
point(s) is not blackest, leading to negative or >1 ELC pixel values)? Evaluate your ELC
result (Hint: Check ELC image profiles patterns and percentages of negative and > 1 ELC
pixels).
3. Which features (targets, land cover types) have a possible negative reflectance or the
reflectance larger than 1 per band (Hint: you might want to use Display/Profiles)? Why do
you think your result has the reflectance less than 0 or larger than 1 (Hint, Not so whitest and
blackest for image spectra for simulating calibration models)? How can you fix the problem
(Hint: Either re-do ELC or assign all unreasonable ELC pixel values within 0 – 1)?
________________________________________
References
Jensen, John R., 2015, 4th edition, Introductory Digital Image Processing: A Remote Sensing
Perspective, Prentice Hall: Upper Saddle River, NJ 07458, 623 pages.
Lecture #3
Electromagnetic Radiation Principle
and Radiometric Correction
Will cover relevant contents of
Chapter 6 in the text book:
Introductory Digital Image Processing
Also referred to
http://www.cnr.berkeley.edu/~gong/textbook: Chapter 5
Outline
1. Electromagnetic radiation principle
2. EMR interaction with atmosphere/terrain
3. Atmospheric transfer
4. Correcting remote sensing system detector
error
5. Remote sensing atmospheric correction
6. Correcting for slope and aspect effects
7. Summary
8. Lecture #3, complementary
Electromagnetic Radiation Principles
and Radiometric Correction
• Remote sensing systems do not function perfectly.
• The Earth’s atmosphere, land, and water are complex and do not lend themselves well to
being recorded by remote sensing devices.
• Errors in the data acquisition process can degrade the quality of the remote sensor data
collected.
• The two most common types of error encountered in remotely sensed data are radiometric
and geometric.
• Geometric correction is concerned with placing the reflected, emitted, or back-scattered
measurements or derivative products in their proper planimetric (map) location so they can
be associated with other spatial information in a geographic information system (GIS) or
spatial decision support system (SDSS).
• Radiometric correction attempts to improve the accuracy of spectral reflectance, emittance,
or back-scattered measurements obtained using a remote sensing system.
Electromagnetic Radiation Principles
and Radiometric Correction
• Radiometric and geometric correction of remotely sensed data are normally referred to as
preprocessing operations because they are performed prior to information extraction.
• Image preprocessing hopefully produces a corrected image that is as close as possible, both
radiometrically and geometrically, to the true radiant energy and spatial characteristics of
the study area at the time of data collection.
• There are two types of errors, internal and external (radiometric and geometric), that must
be identified to correct the remotely sensed data.
Electromagnetic Radiation Principles and
Radiometric Correction
• Internal errors are introduced by the remote sensing system. They are generally systematic
(predictable) and may be identified and then corrected based on prelaunch or in-flight
calibration measurements. For example, n-line striping in the imagery may be caused by a
single detector that has become uncalibrated. In many instances, radiometric correction can
adjust for detector miscalibration.
• External errors are introduced by phenomena that vary in nature through space and time.
External variables that can cause remote sensor data to exhibit radiometric and geometric
errors include the atmosphere, terrain elevation, slope, and aspect. Some external errors may
be corrected by relating empirical ground observations (i.e., radiometric and geometric
ground control points) to sensor measurements.
Radiometric Correction of
Remote Sensor Data
• Radiometric correction requires knowledge about electromagnetic radiation principles and
what interactions take place during the remote sensing data collection process.
• To be exact, it also involves knowledge about the terrain slope and aspect and the
bi-directional reflectance characteristics of the scene.
• Therefore, this lecture will review fundamental electromagnetic radiation principles. It then
discusses how these principles and relationships are used to correct radiometric distortion in
remotely sensed data caused primarily by the atmosphere and terrain.
Electromagnetic Energy Interactions
Energy recorded by remote sensing systems undergoes
fundamental interactions that should be understood to
properly preprocess and interpret remotely sensed data.
The energy:
• is radiated by the Sun,
• travels through the vacuum of space at the speed of light,
• interacts with the Earth’s atmosphere,
• interacts with the Earth’s surface,
• interacts with the Earth’s atmosphere once again, and
• finally reaches the remote sensor, where it interacts with
various optics, filters, film emulsions, or detectors.
1. Electromagnetic Radiation
Principle
Electromagnetic Radiation Models
To understand:
• how electromagnetic radiation is created,
• how it propagates through space, and
• how it interacts with other matter,
it is useful to describe the processes using two different models:
- the wave model
- the particle model.
Wave Model of EM Energy
An electromagnetic wave is composed of electric and
magnetic vectors that are orthogonal to one another and
travel from the source at the speed of light (3 × 10^8 m s^-1).
The Wave Model of
Electromagnetic Energy
Frequency: the number of wavelengths that pass a
point per unit time
Wavelength: the mean distance between maximums
(or minimums)
Common units: micrometers (µm) or nanometers
(nm).
One cycle per second is termed one hertz (1Hz)
Wave Model of Electromagnetic Energy
The relationship between the wavelength, λ, and frequency, ν, of electromagnetic radiation is
based on the following formula, where c is the speed of light:

c = λ ν,  so  ν = c / λ  and  λ = c / ν

Note that frequency, ν, is inversely proportional to wavelength, λ: the longer the wavelength,
the lower the frequency, and vice versa.
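The relation c = λν is easy to check numerically. A quick Python sketch, using an assumed green-light wavelength of 0.55 µm (not a value from the slide):

```python
# Numeric check of c = lambda * nu for green light at an assumed 0.55 um.
c = 3.0e8                    # speed of light (m/s)
wavelength = 0.55e-6         # 0.55 um expressed in metres
frequency = c / wavelength   # nu = c / lambda, about 5.45e14 Hz
```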
Wave Model of Electromagnetic Energy
Sources of Electromagnetic Energy
The Sun yields a continuous spectrum of EM energy. This process produces a large amount of
short-wavelength energy (e.g., from 0.4 - 0.7 µm: blue, green, and red light), which interacts
with the atmosphere and surface materials (reflection, absorption).
Absorption: materials absorb the short-wavelength energy and then re-emit it at a longer
wavelength.
Stefan-Boltzmann Law
The total emitted radiation (M) from a blackbody is proportional to the fourth power of its
absolute temperature. This is known as the Stefan-Boltzmann law and is expressed as:

M = σ T^4

where σ is the Stefan-Boltzmann constant, 5.6697 × 10^-8 W m^-2 K^-4, and T is the absolute
temperature (in kelvin).
The greater the T, the greater the amount of radiant energy exiting the object.
The temperature 0 °C (on the common Celsius scale) corresponds to 273 K.
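The law can be checked with the blackbody temperatures used elsewhere in this lecture (Sun ≈ 6,000 K, Earth ≈ 300 K). A minimal Python sketch:

```python
# Numeric check of the Stefan-Boltzmann law M = sigma * T^4.
sigma = 5.6697e-8             # Stefan-Boltzmann constant (W m^-2 K^-4)

def total_exitance(T):
    return sigma * T ** 4     # radiant exitance, W m^-2

M_sun = total_exitance(6000.0)    # ~7.35e7 W m^-2
M_earth = total_exitance(300.0)   # ~459 W m^-2
```

The Sun, at 20 times the Earth's absolute temperature, emits 20^4 = 160,000 times more energy per unit area.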
Wien’s Displacement Law
Wien’s Displacement Law is used for computing the dominant wavelength (λmax) as:

λmax = k / T

where k is a constant equaling 2898 µm K, and T is the temperature in kelvin.
The Sun approximates a 6,000 K blackbody; therefore its dominant wavelength is:

0.483 µm = 2898 µm K / 6000 K

T determines the dominant wavelength; therefore, from the λmax information, T can be
calculated.
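The slide's worked example (and the Earth's case, using the 300 K figure from the next slide) can be reproduced directly:

```python
# Numeric check of Wien's displacement law: lambda_max = k / T.
k = 2898.0                   # um K

def dominant_wavelength(T):
    return k / T             # micrometres

lam_sun = dominant_wavelength(6000.0)    # 0.483 um (visible)
lam_earth = dominant_wavelength(300.0)   # 9.66 um (thermal infrared)
```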
Blackbody Radiation Curves
Blackbody radiation curves for the Sun (temperature approximately 6,000 K) and for the
Earth (300 K).
As the temperature of the object increases, its dominant wavelength shifts toward the
short-wavelength portion of the spectrum.
Particle Model of EM Energy
Quantum theory of electromagnetic radiation: energy
is transferred in discrete packets called quanta or
photons.
The relationship between the frequency of radiation and the quantum is:

Q = h ν

where Q is the energy of a quantum measured in joules (J), h is the Planck constant
(6.626 × 10^-34 J s), and ν is the frequency of the radiation.
Particle Model of EM Energy
From the wave model of electromagnetic energy, λ = c/ν and ν = c/λ.
Substituting ν = c/λ into Q = h ν, the wavelength is associated with a quantum of energy as:

λ = h c / Q,  or  Q = h c / λ

Thus, the energy of a quantum is inversely proportional to its wavelength, i.e., the longer the
wavelength involved, the lower its energy content. (This links to Wien's Displacement Law.)
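The inverse relation Q = hc/λ can be checked with two wavelengths used later in this lecture (0.4 µm blue, 0.8 µm near-infrared):

```python
# Numeric check of Q = h * c / lambda: a blue photon (0.4 um) carries
# exactly twice the energy of a near-infrared photon (0.8 um).
h = 6.626e-34    # Planck constant (J s)
c = 3.0e8        # speed of light (m/s)

def photon_energy(wavelength_m):
    return h * c / wavelength_m    # joules

q_blue = photon_energy(0.4e-6)     # ~4.97e-19 J
q_nir = photon_energy(0.8e-6)      # ~2.48e-19 J
```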
2. EMR Interaction with
Atmosphere/terrain
Atmospheric Interactions
• Scattering
• Absorption
Atmospheric Scattering
Electromagnetic radiation is propagated through the Earth's atmosphere almost at the speed of
light in a vacuum. Unlike a vacuum, in which nothing happens, however, the atmosphere may
affect the radiation's:
• speed,
• wavelength,
• intensity,
• spectral distribution, and
• direction.
Atmospheric Scattering
The type of scattering is a function of:
• the wavelength of the incident radiant energy, and
• the size of the gas molecule, dust particle, or water vapor droplet encountered.
Atmospheric Scattering
Reflection: the direction is predictable.
Scattering: the direction is unpredictable.
Based on the wavelength of the incident radiant energy and the size of the gas molecule, dust
particle, or water vapor droplet, there are essentially three types of scattering:
• Rayleigh,
• Mie, and
• non-selective scattering.
Rayleigh Scattering
Rayleigh scattering occurs when the diameter of the matter (usually air molecules) is many
times smaller than the wavelength of the incident electromagnetic radiation. It is named after
the English physicist Lord Rayleigh. All scattering proceeds through an absorption and
re-emission procedure.
Rayleigh Scattering
The amount of scattering is inversely related to the fourth power of the radiation's wavelength
(λ^-4). For example, blue light (0.4 µm) is scattered 16 times more than near-infrared light
(0.8 µm).
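The λ^-4 dependence and the factor of 16 quoted above are easy to verify:

```python
# Numeric check of the lambda^-4 dependence of Rayleigh scattering:
# blue light (0.4 um) vs near-infrared light (0.8 um).
def rayleigh_relative_amount(wavelength_um):
    return wavelength_um ** -4

ratio = rayleigh_relative_amount(0.4) / rayleigh_relative_amount(0.8)
# ratio = (0.8 / 0.4)^4 = 16
```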
Mie Scattering
Mie scattering occurs when essentially spherical particles are present in the atmosphere with
diameters approximately equal to the wavelength of the radiation.
For visible light, water vapor, dust, and other particles
ranging from a few tenths of a micrometer to several
micrometers in diameter are the main scattering agents.
The amount of scatter is greater than Rayleigh scatter and
the wavelengths scattered are longer.
Pollution also contributes to beautiful sunsets and
sunrises, caused by Mie scattering. The greater the
amount of smoke and dust particles in the atmospheric
column, the more violet and blue light will be scattered
away and only the longer orange and red wavelength light
will reach our eyes.
Non-selective Scattering
• Non-selective scattering occurs when the particles in the atmosphere are several times
(>10×) greater than the wavelength of the radiation.
• All wavelengths of light are scattered, not just blue, green, or red. Thus, water droplets
scatter all wavelengths of visible light equally well, causing clouds to appear white (a
mixture of all colors of light in approximately equal quantities produces white).
• Scattering can severely reduce the information content of remotely sensed data, to the point
that the imagery loses contrast and it is difficult to differentiate one object from another.
Color of the sky
• Two questions:
• Why is the sky blue?
• Why does the sunset appear yellow when the air is clear?
Color theory
Additive color describes the resultant color when light is mixed; the primary colors red, green,
and blue combine to form the complementary colors cyan, magenta, and yellow. Light mixing
(e.g., a TV screen or computer monitor's visual display) follows additive mixing:
White = Blue + Green + Red.
Subtractive color describes the resultant color when dye or pigment is mixed, based on the
complementary colors cyan, magenta, and yellow; it is used for paints and filters, and color
photography is based on subtractive mixing of the complementary colors:
Black = Yellow + Magenta + Cyan, and white corresponds to no dye at all.
Color of the sky
• Why is the sky blue?
A clear, cloudless daytime sky is blue because molecules in the air scatter blue light from the
sun more than they scatter red light.
• Why does the sunset appear yellow when the air is clear?
When we look towards the sun at sunset, we see red and orange colors because the blue light
has been scattered out and away from the line of sight.
http://math.ucr.edu/home/baez/physics/General/BlueSky/blue_sky.html
Atmospheric Absorption
• Absorption is the process by which radiant energy is
absorbed and converted into other forms of energy.
An absorption band is a range of wavelengths (or
frequencies) in the electromagnetic spectrum within which
radiant energy is absorbed by substances such as water
(H2O), carbon dioxide (CO2), oxygen (O2), ozone (O3), and
nitrous oxide (N2O).
• The cumulative effect of the absorption by the various
constituents can cause the atmosphere to close down in
certain regions of the spectrum. This is bad for remote
sensing because no energy is available to be sensed.
Absorption
In certain parts of the spectrum such as the visible region
(0.4 - 0.7 µm), the atmosphere does not absorb all of the
incident energy but transmits it effectively.
Parts of the spectrum that transmit energy effectively are
called “atmospheric windows”.
• The atmosphere essentially “closes down” in certain portions of the spectrum, while
“atmospheric windows” exist in other regions that transmit incident energy effectively to the
ground. It is within these windows that remote sensing systems must function.
• The combined effects of atmospheric absorption, scattering, and reflectance reduce the
amount of solar irradiance reaching the Earth’s surface at sea level.
Absorption
Absorption occurs when the energy is transformed into heat motion and re-radiated at a longer
wavelength.
Transmission is inversely related to the
thickness of the layer (of atmosphere).
Certain wavelengths of radiation are affected far
more by absorption than by scattering.
This is particularly true of infrared and
wavelengths longer than visible light.
Absorption of the Sun's Incident Electromagnetic Energy in the
Region from 0.1 to 30 µm by Various Atmospheric Gases
An atmospheric window
Terrain Energy-Matter Interactions
We begin with the simple radiation budget equation, which states that the total amount of
radiant flux in specific wavelengths (λ) incident to the terrain (Φiλ) must be accounted for by
evaluating:
• the amount of radiant flux reflected from the surface (Φreflected λ),
• the amount of radiant flux absorbed by the surface (Φabsorbed λ), and
• the amount of radiant flux transmitted through the surface (Φtransmitted λ):

Φiλ = Φreflected λ + Φabsorbed λ + Φtransmitted λ
Hemispherical Reflectance,
Absorption, and Transmittance
Hemispherical reflectance (ρλ) is defined as the dimensionless ratio of the radiant flux
reflected from a surface to the radiant flux incident to it:

ρλ = Φreflected λ / Φiλ

Hemispherical transmittance (τλ) is defined as the dimensionless ratio of the radiant flux
transmitted through a surface to the radiant flux incident to it:

τλ = Φtransmitted λ / Φiλ

Hemispherical absorptance (αλ) is defined by the dimensionless relationship:

αλ = Φabsorbed λ / Φiλ
Hemispherical Reflectance,
Absorption, and Transmittance
These radiometric quantities are useful for producing general statements about the spectral
reflectance, absorptance, and transmittance characteristics of terrain features. In fact, if we take
the simple hemispherical reflectance equation and multiply it by 100, we obtain an expression
for percent reflectance (ρλ%):

ρλ% = (Φreflected λ / Φiλ) × 100

This quantity is used in remote sensing research to describe the general spectral reflectance
characteristics of various phenomena.
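Because the three hemispherical ratios partition the same incident flux, they must sum to 1 at each wavelength. A small Python sketch with hypothetical flux values (not measurements from the lab data):

```python
# Radiation budget sketch with hypothetical fluxes for one wavelength:
# rho + alpha + tau must equal 1 because the three fluxes partition
# the incident flux.
phi_incident = 100.0                                   # incident flux (hypothetical)
phi_reflected, phi_absorbed, phi_transmitted = 45.0, 40.0, 15.0

rho = phi_reflected / phi_incident        # hemispherical reflectance
alpha = phi_absorbed / phi_incident       # hemispherical absorptance
tau = phi_transmitted / phi_incident      # hemispherical transmittance

budget_sum = rho + alpha + tau            # should equal 1
percent_reflectance = rho * 100.0         # percent reflectance, 45%
```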
Reflectance
Reflectance is the process whereby radiation
“bounces off” an object.
various types of reflecting surfaces:
• Specular reflection
• Diffuse reflection
• Lambertian surface
Reflectance
• Specular reflection (a): smooth surfaces (i.e., the average surface profile is several times
smaller than the wavelength of radiation).
• Diffuse reflection (b): rough surfaces; the reflected rays go in many directions.
• Lambertian surface (d): the radiant flux leaving the surface is constant for any angle of
reflectance to the surface normal.
3. Atmospheric Transfer
Radiance
The concept of radiance (Lλ): the radiant flux leaving a specific projected source area (A) on
the ground, in a specific direction (θ), and within a specific solid angle (Ω). Lλ is measured in
watts per meter squared per steradian (W m^-2 sr^-1 µm^-1). We are only interested in the
radiant flux in certain wavelengths (Φλ) leaving the projected source area (A) within a certain
direction (θ) and solid angle (Ω).
Atmospheric transfer
• Radiance (LT) from paths 1, 3, and 5 contains intrinsic valuable spectral information about
the target of interest.
• The path radiance (Lp) from paths 2 and 4 includes diffuse sky irradiance or radiance from
neighboring areas on the ground.
• This path radiance generally introduces unwanted radiometric noise in the remotely sensed
data and complicates the image interpretation process.
Atmospheric transfer
Path 1 contains spectral solar irradiance (Eoλ) that was attenuated very little before
illuminating the terrain within the IFOV. Notice in this case that we are interested in the solar
irradiance from a specific solar zenith angle (θo) and that the amount of irradiance reaching
the terrain is a function of the atmospheric transmittance at this angle (Tθo).
If all of the irradiance makes it to the ground, then the atmospheric transmittance (Tθo) equals
one. If none of the irradiance makes it to the ground, then the atmospheric transmittance is
zero.
Atmospheric transfer
Path 2 contains spectral diffuse sky irradiance (Edλ) that never even reaches the Earth's
surface (the target study area) because of scattering in the atmosphere. Unfortunately, such
energy is often scattered directly into the IFOV of the sensor system.
It contains much unwanted diffuse sky irradiance that was inadvertently scattered into the
IFOV of the sensor system. Therefore, if possible, we want to minimize its effects.
Atmospheric transfer
Path 3 contains energy
from the Sun that has
undergone some
Rayleigh, Mie, and/or
nonselective scattering
and perhaps some
absorption and reemission
before illuminating the
study area.
Thus, its spectral
composition and
polarization may be
somewhat different from
the energy that reaches
the ground from path 1.
Path 4 contains radiation that was reflected or scattered by nearby terrain (ρλn) covered by
snow, concrete, soil, water, and/or vegetation into the IFOV of the sensor system. The energy
does not actually illuminate the study area of interest. Therefore, if possible, we would like to
minimize its effects.
Path 2 and Path 4 combine to produce what is commonly referred to as path radiance, Lp.
Path 5 is energy that was also reflected from nearby terrain into the atmosphere, but then scattered or reflected onto the study area.
Atmospheric transfer
The total radiance reaching the sensor is:

LS = (1/π) ρ Tθv (Eo Tθo cos θo Δλ + Ed) + Lp

This may be summarized as:

LS = LT + Lp
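As a sanity check on the terms, the total-radiance equation can be evaluated numerically. The sketch below is illustrative only; every input value is a hypothetical assumption, not measured data, and the function name is ours:

```python
import math

def total_radiance(rho, T_v, E_o, T_o, theta_o_deg, delta_lam, E_d, L_p):
    """LS = (1/pi) * rho * T_v * (E_o * T_o * cos(theta_o) * dlam + E_d) + L_p."""
    theta_o = math.radians(theta_o_deg)
    # Target-leaving term LT: reflected, atmosphere-attenuated solar + diffuse irradiance
    L_T = (rho * T_v / math.pi) * (E_o * T_o * math.cos(theta_o) * delta_lam + E_d)
    # Path radiance Lp is simply added on top of LT
    return L_T + L_p

# Hypothetical inputs: 30% target reflectance, clear-sky transmittances,
# 30-degree solar zenith angle, 0.1-um bandwidth, small diffuse and path terms.
L_S = total_radiance(rho=0.30, T_v=0.90, E_o=1850.0, T_o=0.85,
                     theta_o_deg=30.0, delta_lam=0.1, E_d=30.0, L_p=12.0)
```

Note that when the target reflectance ρ is zero, LS reduces to the path radiance Lp alone, which is exactly the LS = LT + Lp decomposition above.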
Outline
1. Electromagnetic radiation principle
2. EMR interaction with atmosphere/terrain
3. Atmospheric transfer
4. Correcting remote sensing system detector
error
5. Remote sensing atmospheric correction
6. Correcting for slope and aspect effects
7. Summary
8. Lecture #3, complementary
4. Correcting Remote Sensing System Detector Error
• Ideally, the radiance recorded by a remote sensing system in various bands is an accurate representation of the radiance actually leaving the feature of interest (e.g., soil, vegetation, water, or urban land cover) on the Earth’s surface.
• Unfortunately, noise (error) can enter the data-collection system at several points. For example, radiometric error in remotely sensed data may be introduced by the sensor system itself when the individual detectors do not function properly or are improperly calibrated.
• Several of the more common remote sensing system–induced radiometric errors are:
  • random bad pixels (shot noise),
  • line-start/stop problems,
  • line or column drop-outs,
  • partial line or column drop-outs, and
  • line or column striping.
Random Bad Pixels (Shot Noise)
Sometimes an individual detector does not record spectral data for an
individual pixel. When this occurs randomly, it is called a bad pixel.
When there are numerous random bad pixels found within the scene, it
is called shot noise because it appears that the image was shot by a
shotgun.
Shot noise is identified and repaired using the following methodology. It
is first necessary to locate each bad pixel in the band k dataset. A simple
thresholding algorithm makes a pass through the dataset and flags any
pixel (BVi,j,k) having a brightness value of zero (assuming values of 0
represent shot noise and not a real land cover such as water).
Once identified, it is then possible to evaluate the eight pixels surrounding the flagged pixel.
The mean of the eight surrounding pixels is computed using the
equation and the value substituted for BVi,j,k in the corrected image:
BVi,j,k = int[ (Σ BVn) / 8 ], where the sum is over the eight surrounding pixels (n = 1, …, 8)
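A minimal sketch of this two-step repair, flagging zero-valued pixels and substituting the integer mean of the eight neighbors, might look like the following. The function name and the zero-as-noise assumption are ours, and edge pixels are simply skipped; this is not ENVI's implementation:

```python
import numpy as np

def remove_shot_noise(band, noise_value=0):
    """Replace flagged (zero-valued) pixels with the integer mean of
    their eight surrounding pixels. Edge pixels are left untouched."""
    out = band.copy()
    rows, cols = band.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            if band[i, j] == noise_value:
                # 3x3 window around the bad pixel; subtract the center
                # so only the eight neighbors contribute to the sum
                window = band[i - 1:i + 2, j - 1:j + 2].astype(float)
                neighbor_sum = window.sum() - float(band[i, j])
                out[i, j] = int(neighbor_sum / 8)
    return out
```

In practice the threshold test would also confirm that a zero really is shot noise and not a legitimate dark target such as deep water.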
Shot Noise Removal
a) Landsat Thematic Mapper
band 7 (2.08 – 2.35 µm) image
of the Santee Delta in South
Carolina. One of the 16
detectors exhibits serious
striping and the absence of
brightness values at pixel
locations along a scan line.
b) An enlarged view of the bad
pixels with the brightness
values of the eight surrounding
pixels annotated.
c) The brightness values of the
bad pixels after shot noise
removal. This image was not
destriped.
Line or Column Drop-outs
• An entire line containing no spectral information may be produced if an individual detector in a scanning system (e.g., Landsat MSS or Landsat 7 ETM+) fails to function properly.
• If a detector in a linear array (e.g., SPOT XS, IRS-1C, QuickBird) fails to function, this can result in an entire column of data with no spectral information. The bad line or column is commonly called a line or column drop-out and contains brightness values equal to zero.
• For example, if one of the 16 detectors in the Landsat Thematic Mapper sensor system fails to function during scanning, this can result in a brightness value of zero for every pixel, j, in a particular line, i. This line drop-out would appear as a completely black line in the band, k, of imagery.
• This is a serious condition because there is no way to restore data that were never acquired. However, it is possible to improve the visual interpretability of the data by introducing estimated brightness values for each bad scan line.
It is first necessary to locate each bad line in the dataset. A simple
thresholding algorithm makes a pass through the dataset and flags any
scan line having a mean brightness value at or near zero. Once
identified, it is then possible to evaluate the output for a pixel in the
preceding line (BVi – 1,j,k) and succeeding line (BVi + 1,j,k) and assign the
output pixel (BVi,j,k) in the drop-out line the average of these two
brightness values:
BVi,j,k = int[ (BVi−1,j,k + BVi+1,j,k) / 2 ]
This is performed for every pixel in a bad scan line. The result is an
image consisting of interpolated data every nth line that is more visually
interpretable than one with horizontal black lines running systematically
throughout the entire image.
This same cosmetic digital image processing procedure can be applied
to column drop-outs produced by a linear array remote sensing system.
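The flag-then-average procedure described above can be sketched as follows. The function name and the near-zero threshold are illustrative assumptions; the first and last lines are skipped because they lack a neighbor on one side:

```python
import numpy as np

def repair_line_dropouts(band, threshold=0.5):
    """Flag scan lines whose mean brightness value is at or near zero,
    then assign each pixel in a flagged line the integer average of the
    pixels in the preceding and succeeding lines."""
    out = band.copy()
    line_means = band.mean(axis=1)
    for i in range(1, band.shape[0] - 1):
        if line_means[i] <= threshold:
            # Integer average of the line above and the line below
            out[i, :] = (band[i - 1, :].astype(int) + band[i + 1, :].astype(int)) // 2
    return out
```

The same sketch applies to column drop-outs by transposing the array (or averaging along axis 0 instead).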
N-line Striping
A miscalibrated detector can produce noticeable lines that are brighter than adjacent lines, often by a nearly uniform amount (e.g., almost uniformly 20 brightness values greater than the other detectors for the same band). This is referred to as n-line striping.
To repair systematic n-line striping, it is first necessary to identify the
miscalibrated scan lines in the scene. This is usually accomplished by
computing a histogram of the values for each of the n detectors that collected
data over the entire scene (ideally, this would take place over a homogeneous
area, such as a body of water). If one detector’s mean or median is significantly
different from the others, it is probable that this detector is out of adjustment.
Consequently, every line and pixel in the scene recorded by the maladjusted
detector may require a bias (additive or subtractive) correction or a more severe
gain (multiplicative) correction.
This type of n-line striping correction
a) adjusts all the bad scan lines so that they have approximately the
same radiometric scale as the correctly collected data and
b) improves the visual interpretability of the data. It looks better.
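A simple additive (bias) version of this correction can be sketched as below: group scan lines by the detector that recorded them (every n-th line), then shift each detector's lines so its mean matches the scene-wide mean. The function name and the assumption that detector d recorded lines d, d+n, d+2n, … are ours; a real correction might instead use a homogeneous area and may also need a multiplicative gain term:

```python
import numpy as np

def destripe_bias(band, n_detectors=16):
    """Estimate each detector's additive bias as the difference between
    its per-detector mean and the whole-scene mean, then subtract it."""
    out = band.astype(float)          # astype copies, so the input is untouched
    scene_mean = band.mean()
    for d in range(n_detectors):
        lines = out[d::n_detectors, :]  # view of every n-th scan line
        if lines.size:
            lines -= lines.mean() - scene_mean  # in-place shift via the view
    return out
```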
N-line Striping
a) Original band 10 radiance
(W m-2 sr-1) data from a GER
DAIS 3715 hyperspectral
dataset of the Mixed Waste
Management Facility on the
Savannah River Site near
Aiken, SC. The subset is
focused on a clay-capped
hazardous waste site covered
with Bahia grass and
Centipede grass. The 35-band
dataset was obtained at 2 × 2
m spatial resolution. The
radiance values along the
horizontal (X) and vertical (Y)
profiles are summarized in the
next figure.
b) Enlargement of band 10
data.
c) Band 10 data after
destriping.
d) An enlargement of the
destriped data
N-line Striping
a). The radiance values
along the horizontal (X)
profile of the original band
10 radiance values in the
previous figure.
b). The radiance values
along the vertical (Y)
profile of the original band
10 radiance values in the
previous figure.
c). The radiance values
along the vertical (Y)
profile of the destriped
band 10 radiance values.
Note the reduction of the
saw-toothed pattern in
the destriped data.
5. Remote Sensing Atmospheric Correction
Why Atmospheric Correction?
Image radiometry can be affected by factors such as system noise, sensor malfunction, and atmospheric interference.
The purpose of atmospheric correction is to remove or
reduce the effect of atmospheric interference on RS
images.
Atmospheric molecules and aerosols scatter solar radiance and ground-reflected radiance (mostly affecting shorter wavelengths).

Evaluate your ELC result (Hint: check the ELC image profile patterns and the percentages of negative and > 1 ELC pixel values).
3. Which features (targets, land cover types) have a possible negative reflectance or a reflectance larger than 1 per band (Hint: you might want to use Display/Profiles)? Why do you think your result has reflectance less than 0 or larger than 1 (Hint: the image spectra used for the calibration models may not be the whitest and blackest)? How can you fix the problem (Hint: either re-do ELC or assign all unreasonable ELC pixel values to within 0 – 1)?
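One way to implement the second fix in the hint, forcing unreasonable ELC pixel values into the physically meaningful 0 – 1 range, is a simple clip. The function name is ours, and clipping is only a cosmetic patch; it does not replace re-doing the ELC with better bright and dark targets:

```python
import numpy as np

def clamp_elc(reflectance):
    """Report the fraction of physically impossible pixels (< 0 or > 1),
    then clip the ELC reflectance image into the 0-1 range."""
    bad_fraction = ((reflectance < 0) | (reflectance > 1)).mean()
    return np.clip(reflectance, 0.0, 1.0), bad_fraction
```

Reporting the bad-pixel fraction first helps answer the evaluation question above: a large fraction suggests the calibration targets were poorly chosen.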
Healthy vegetation:
Sea water: