ABOUT THE IMAGES

An Introduction to Satellite Image Technology

The collection of satellite images to study the surface of the Earth is called Remote Sensing. Remote sensing means detecting the qualities of something from a distance, without coming into contact with it – essentially what our eyes do, or what a camera does. But unlike the images formed on our retinas, satellite images are digitized, meaning they're made up of pixels, each of which contains a number representing a level of brightness. And unlike photographs, satellite images are georeferenced, meaning each pixel's location is precisely defined with reference to the Earth. This is what makes them so useful for studying processes on the Earth, from deforestation to melting glaciers to urban growth. It's also what makes "before and after" comparisons so precise.
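
For readers curious about what "digitized and georeferenced" looks like under the hood, here is a minimal sketch in Python. The brightness numbers and map coordinates are invented for illustration; real images carry this information with them.

    import numpy as np

    # A band is just a grid of brightness numbers; a simple transform
    # ties each pixel to a position on the Earth's surface.
    band = np.array([[52, 61, 58],
                     [49, 77, 80],
                     [45, 74, 83]], dtype=np.uint8)  # 8-bit brightness values

    # Hypothetical georeferencing: the map coordinate of the upper-left
    # corner and the pixel size (30 meters for Landsat).
    origin_x, origin_y = 500_000.0, 4_200_000.0      # e.g., UTM easting/northing
    pixel_size = 30.0

    def pixel_to_map(row, col):
        """Return the map coordinates of a pixel's upper-left corner."""
        x = origin_x + col * pixel_size
        y = origin_y - row * pixel_size              # rows count downward
        return x, y

    print(band[1, 2], pixel_to_map(1, 2))            # 80 at (500060.0, 4199970.0)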

Satellites have been gathering data about the Earth since the early 1970s. Once the Cold War took to space, satellites became a major source of information about what was happening behind the Iron Curtain. More recently they have been widely used to gather data regarding the Earth itself, to better understand its processes and address its problems with more rigorous information.

There are many different data-gathering satellites in our skies, collecting various types of information. This site uses mainly Landsat, a series of satellites that have been in operation since 1972. Currently there are two Landsat satellites in orbit, numbers 5 and 7. Landsat is a good medium-resolution sensor with a 30-meter pixel, useful for a broad array of purposes. If you were to zoom into a scene using Google Earth, for instance, the sensors and pixel sizes would continually change as you moved in, trading coverage for detail. Landsat would fall somewhere in the middle of that process.

NASA Goddard Space Flight Center and US Geological Survey

The red boxes in the first two images of California's Bay Area show the extent of the next image. MODIS offers a 250-meter
pixel and therefore a much larger scene area; Landsat has a 30-meter pixel; and IKONOS offers 4-meter and 1-meter pixels.

"Fun Facts" About Landsat:

  • Landsat orbits the Earth at an altitude of over 400 miles.
  • It travels at nearly 5 miles per second (18,000 mph), completing each orbit in approximately 99 minutes.
  • It covers the entire Earth every 16 days. This means that if you wanted to see images of current events such as the eruption of a volcano or the spread of an oil spill, depending on the timing, you might be able to get the imagery immediately, or you might have to wait a couple of weeks.

The Electromagnetic (EM) Spectrum

Energy from the sun comes in many different wavelengths, which taken together are called the Electromagnetic (EM) Spectrum. The shorter the wavelength, the greater the energy.
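
That rule of thumb comes from Planck's relation, E = hc/λ: a photon's energy is inversely proportional to its wavelength. A quick calculation shows the pattern (the wavelengths below are typical values, chosen for illustration):

    # Planck's relation: shorter wavelength means more energy per photon.
    h = 6.626e-34   # Planck's constant, joule-seconds
    c = 3.0e8       # speed of light, meters per second

    for name, nm in [("blue", 450), ("red", 650), ("near infrared", 850)]:
        energy = h * c / (nm * 1e-9)
        print(f"{name:14s} {nm} nm -> {energy:.2e} joules per photon")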

NASA Goddard Space Flight Center and US Geological Survey

More of the sun's energy is given off in the visible light portion of the spectrum than in any other. Given the survival advantage that sight confers, it's not surprising this is the portion the eye gradually evolved to detect.

Stan Aronoff

The "signatures" of various surfaces, showing the proportion of energy reflected
(the scale on the left) in the various wavelengths (the scale on the bottom).

The various surfaces that this energy comes into contact with, such as seawater, huckleberry bushes, or asbestos shingles, will respond differently to different wavelengths, reflecting or absorbing more of some and less of others. Therefore each surface has its own "signature" made up of the proportions of energy it reflects in each wavelength. All of this can be measured by a sensor designed to capture that information. This is what makes it possible for a satellite to identify what's on the ground.
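
The idea can be sketched in a few lines of Python. The reflectance numbers below are invented for illustration, not measured signatures, but they show how a sensor reading can be matched against a library of known signatures:

    import numpy as np

    # Toy "signatures": the fraction of energy each surface reflects in
    # four wavelengths (illustrative values only).
    signatures = {
        "water":      np.array([0.08, 0.06, 0.04, 0.02]),  # dark everywhere
        "vegetation": np.array([0.05, 0.10, 0.06, 0.50]),  # bright in near infrared
        "bare soil":  np.array([0.15, 0.20, 0.25, 0.30]),  # rises steadily
    }

    def identify(pixel):
        """Match a pixel's reflectances to the closest known signature."""
        return min(signatures, key=lambda s: np.linalg.norm(signatures[s] - pixel))

    print(identify(np.array([0.06, 0.11, 0.07, 0.45])))    # -> vegetation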

Each sensor is designed to collect information from particular wavelengths, depending on its mission. For Landsat, these include three segments of the visible light portion of the spectrum (blue, green, and red) and four from the infrared portion (one near infrared, two shortwave infrared, and one thermal or longwave infrared). Each of these segments of the spectrum is called a band.

NASA Goddard Space Flight Center and US Geological Survey

Landsat's seven bands, each of which collects energy
from one narrow segment of the EM spectrum.

From each band an image is produced in which the brightest pixels are the ones that reflect the most energy. Since each surface responds differently to the various wavelengths, the brightest pixels will not be the same ones in every case; compare, for instance, bands 3 and 4.

Band 1 – Blue

Band 2 – Green

Band 3 – Red

Band 4 – NIR

Band 5 – SWIR

Band 6 – Thermal

Band 7 – SWIR

There are 256 shades of gray available to distinguish among different levels of brightness, thanks to Landsat's 8-bit technology. In a binary system, each bit doubles the number of choices, so one bit gives you two options (black or white), two bits give you four (black, two shades of gray, and white), three bits give you eight, and so on. This enables the detection of minute differences among surfaces, so that, for instance, lodgepole pine can be distinguished from yellow pine, or concrete from cement – at least in ideal circumstances.
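
The doubling is easy to verify:

    # Each added bit doubles the number of brightness levels a pixel can hold.
    for bits in (1, 2, 3, 8):
        print(f"{bits}-bit pixel: {2 ** bits} shades of gray")
    # 1-bit: 2, 2-bit: 4, 3-bit: 8 ... 8-bit: 256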

NASA Goddard Space Flight Center and US Geological Survey

These images are displayed using 1, 2, and 8 bits, or 2, 4, and 256 shades of gray.

The analyst then chooses which bands to work with, depending on the subject of interest. Our eyes are designed to see three primary colors (blue, green, and red) and their combinations. So when viewing the images with image processing software, the analyst will often view three bands at a time, and display them in those three colors so the content of each can be distinguished from the others. If the three visible light bands are selected, and displayed in their matching colors, the result looks much as the scene would to our eyes if we were viewing it from space. (See Image 1, a portion of the coast of Greece, below.) This is called a "true color" image.

Image 1

Image 2

Image 3

If the analyst were studying vegetation and were most interested in the near infrared band (see below), they would generally assign the red display to that band, and the green and blue displays to bands 3 and 2 (see Image 2). This is called a "false color" image.

Any band can, of course, be displayed in any color. In Image 3 above, band 7 is displayed in red, band 4 in green, and band 1 in blue. Since vegetation is most reflective in band 4, the green display dominates; but since it is also quite reflective in band 7, the composite of a lot of green and some red produces a shade of green with a lot of yellow in it (yellow falls between green and red on the color spectrum).
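
In software, building any of these composites is simply a matter of stacking three band arrays into the red, green, and blue display channels. A minimal sketch (the band arrays here are empty placeholders standing in for real Landsat data):

    import numpy as np

    def composite(red_band, green_band, blue_band):
        """Stack three single-band images into one RGB display image."""
        return np.dstack([red_band, green_band, blue_band])

    # Placeholder arrays standing in for real Landsat bands.
    b1, b2, b3, b4, b7 = (np.zeros((100, 100), dtype=np.uint8) for _ in range(5))

    true_color  = composite(b3, b2, b1)   # each visible band in its own color (Image 1)
    false_color = composite(b4, b3, b2)   # near infrared displayed as red (Image 2)
    image_3     = composite(b7, b4, b1)   # band 7 red, band 4 green, band 1 blue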

Canadian Centre for Remote Sensing

The "Invisible" Bands

Band 4, the Near Infrared (NIR) band, is the band most often used to study vegetation health. When plants absorb sunlight for photosynthesis, they use only certain wavelengths, primarily red and blue visible light. This is why they usually appear green to us: much of the green light is reflected. Far more of the near infrared energy is reflected, however, making band 4 by far the most reflective band for healthy vegetation; healthy leaves reflect strongly throughout the near infrared.
Band 6, the Thermal Infrared band, has a larger pixel size, 60 meters instead of 30, which is why it looks fuzzier in the band images above, and why it is generally used on its own rather than combined with other bands. Band 6 can be used to create thermal maps showing temperature differences, as on the Rachel Carson page, where sea surface temperatures in the Florida Keys have risen in recent years.
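
For the curious, here is roughly how a thermal-band pixel value becomes a temperature: the raw number is first converted to radiance, and the radiance to a "brightness temperature." The gain, offset, and K1/K2 constants below are approximately the published calibration values for Landsat 5's band 6, but treat them, and the whole sketch, as illustrative:

    import numpy as np

    GAIN, OFFSET = 0.055376, 1.18   # digital number -> radiance (approximate)
    K1, K2 = 607.76, 1260.56        # thermal calibration constants (approximate)

    def brightness_temperature(dn):
        """Convert a band 6 pixel value to degrees Celsius."""
        radiance = GAIN * dn + OFFSET
        kelvin = K2 / np.log(K1 / radiance + 1.0)
        return kelvin - 273.15

    print(f"{brightness_temperature(130):.1f} degrees C")   # about 20 C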

The upper portion of the Florida Keys in June 1985 and July 2005, demonstrating long-term warming of the sea surface.

The three reflected infrared bands (4, 5, and 7) are useful for studying a variety of surfaces, from vegetation to rocks and minerals. And while not quite in the thermal range, they are close enough to it to detect heat better than the visible bands can, as you can see in these two images of Mt. Etna in Sicily: the first shows the three visible light bands, the second the three infrared bands. The information that can be derived from each is quite different. The visible light image shows the plume of steam drifting from the crater, while the infrared image shows the heat of the magma below the surface.

NASA Goddard Space Flight Center and US Geological Survey

More Ways of Viewing Data

Sometimes the content of an image can be better understood by applying a color scheme to identify the various land cover elements that make it up. This is called a Land Cover Classification. While the eye can see only a composite color for each pixel, the image contains far more data. The software uses this data to identify materials that look similar but are not, and assigns them to different color classes. This enables the quick identification of similar land cover in different places, as well as dissimilar land cover in the same place, as in these images of drought-affected farmland in Nevada.
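
One common approach is "unsupervised" classification: the software groups pixels purely by the similarity of their values across all the bands, and the analyst then names the groups. A minimal sketch, using random placeholder data in place of a real six-band image:

    import numpy as np
    from sklearn.cluster import KMeans

    # Placeholder data: six 100x100 bands of random values.
    rng = np.random.default_rng(0)
    bands = rng.integers(0, 256, size=(6, 100, 100))

    pixels = bands.reshape(6, -1).T              # one row per pixel, one column per band
    labels = KMeans(n_clusters=5, n_init=10).fit_predict(pixels)
    class_map = labels.reshape(100, 100)         # each pixel now holds a class number
    # Assigning a color to each class number produces the classified image.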

In the True Color images, on the left, from August 1989 and August 2009, it can be difficult to identify what is
happening to this farmland near the upper reaches of Nevada's Lake Mead. In the classified images, where the
productive farmland is green and fallow land is dark brown, it is easy to see how much land has gone out of
production as irrigation water declines.

The data gathered by the sensor can be not only displayed visually but also analyzed mathematically and statistically in many ways, using a variety of formulas. Some of these formulas are called Vegetation Indexes; they measure vegetation health and track changes in it over time, as on the Thoreau page, which explores the effect of acid rain on the Maine Woods. Vegetation indexes compare the red and near infrared bands to learn how robustly photosynthesis is occurring. The last two of these images, showing an area near Mt. Katahdin, demonstrate an increase in vegetation stress in some areas.
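
NDVI, the index used below, is a single formula: (NIR − Red) / (NIR + Red). Because healthy leaves reflect far more near infrared than red, values near 1 indicate vigorous growth, while values near 0 or below indicate stress, bare ground, or water. A minimal version in Python (the dated band arrays in the comment are assumptions, not loaded here):

    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
        nir = nir.astype(float)
        red = red.astype(float)
        return (nir - red) / (nir + red + 1e-10)   # tiny term avoids divide-by-zero

    # Comparing two dates pixel by pixel shows where stress has increased, e.g.:
    # change = ndvi(band4_2008, band3_2008) - ndvi(band4_1987, band3_1987)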

True color and NDVI (a commonly used Vegetation Index) images from June 1987, and an NDVI
image from June 2008. The darkening of some areas from the second to the third image
(see the red boxes) shows increased vegetation stress. The white/black area is bare rock.

Another type of analysis is called Tasseled Cap, so named because when the data is plotted on a graph, the shape it forms looks like a knitted cap with a tassel. This formula reconfigures the data from the six non-thermal bands into six new bands, three of which display particular features of the landscape: Brightness, Greenness, and Wetness. An example of this is on the Mark Twain page, where an area in the Sierra foothills – jumping frog country – is losing its wetlands. The presence, absence, and change in wetlands are much more evident in a Tasseled Cap image than in a True Color image.
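
In practice the transform is a set of weighted sums: each new band multiplies the six non-thermal bands by fixed coefficients and adds the results. The sketch below computes just the three named components, using one widely published coefficient set for Landsat TM; treat the numbers as illustrative:

    import numpy as np

    COEFFS = {
        "brightness": [ 0.3037,  0.2793,  0.4743, 0.5585,  0.5082,  0.1863],
        "greenness":  [-0.2848, -0.2435, -0.5436, 0.7243,  0.0840, -0.1800],
        "wetness":    [ 0.1509,  0.1973,  0.3279, 0.3406, -0.7112, -0.4572],
    }

    def tasseled_cap(bands):
        """bands: array of shape (6, rows, cols) holding bands 1-5 and 7."""
        flat = bands.reshape(6, -1).astype(float)
        return {name: (np.array(w) @ flat).reshape(bands.shape[1:])
                for name, w in COEFFS.items()}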

True color images from July 1989 and July 2009, and Tasseled Cap versions. The substantial loss of wetlands
in the region is far more evident in the latter. For a more detailed explanation see the Twain page.

While there are many additional ways of using satellite images to study the Earth, these are the methods put to most use on this website. They help make visible the environmental changes that are damaging and disrupting our literary landscapes, and they help the people who care about those places understand what might be done to restore and protect them.