Uncertainty in spectral decomposition


Over the last five years or so, spectral decomposition has become a mainstream seismic interpretation process. Most often applied qualitatively, for seismic geomorphologic analysis (e.g. Marfurt & Kirlin 2001[1]), it is increasingly being applied quantitatively, to compute stratal thickness (Partyka et al. 1999[2]). It has also been applied to direct hydrocarbon detection (Castagna et al. 2003[3]).

To date, most of this work has been done with the windowed (‘short-time’) Fourier transform, but other ways of computing spectra are joining our collective toolbox: the S transform, wavelet transforms, and matching pursuit decomposition (e.g. Castagna & Sun 2006[4]), to name a few. There is frequent mention of the 'resolution' of these various techniques. Perhaps surprisingly, Heisenberg’s Uncertainty Principle is sometimes cited as a basis for one technique having better resolution than another. In this article, I look at what exactly we mean by 'resolution' in this context, what the limits of resolution are, and what on earth quantum theory has to do with it.

A property of nature

Heisenberg’s Uncertainty Principle is one of the cornerstones of quantum theory. Here’s how Werner himself explained it:

At the instant of time when the position is determined, that is, at the instant when the photon is scattered by the electron, the electron undergoes a discontinuous change in momentum. This change is the greater the smaller the wavelength of the light employed, i.e., the more exact the determination of the position. At the instant at which the position of the electron is known, its momentum therefore can be known only up to magnitudes which correspond to that discontinuous change; thus, the more precisely the position is determined, the less precisely the momentum is known, and conversely. Heisenberg (1927), p 174–5[5]

The most important thing to understand about the Uncertainty Principle is that, while it was originally expressed in terms of observation and measurement, it is not a consequence of any limitations of our measuring equipment or the mathematics we use to describe our results. The Uncertainty Principle does not limit what we can know, it describes the way things actually are: an electron does not possess arbitrarily precise position and momentum simultaneously. This troubling insight is the heart of the so-called Copenhagen Interpretation of quantum theory, which Einstein was so famously upset by (and wrong about).

Gabor (1946)[6] was probably the first to realize that the Uncertainty Principle translates into information theory and signal processing. Thanks to wave-particle duality, signals turn out to be exactly analogous to quantum systems. As a result, the exact time and frequency of a signal can never be known: a signal cannot plot as a point on the time-frequency plane. Just to be clear: this uncertainty is a property of signals, not a limitation of mathematics.

Heisenberg’s uncertainty principle is usually written in terms of the standard deviation of position σx, the standard deviation of momentum σp, and Planck’s constant h:

    σx σp ≥ h/4π

In other words, the product of the uncertainties of position and momentum is no smaller than some non-zero constant. For signals, we do not need Planck’s constant to scale the relationship to quantum dimensions, but the form is the same. If the standard deviations of the time and frequency estimates are σt and σf respectively, then we can write Gabor’s uncertainty principle thus:

    σt σf ≥ 1/4π

So the product of the standard deviations of time, in milliseconds, and frequency, in Hertz, must be at least 80 (the units are ms.Hz, or millicycles). I will return to this uncertainty later.

If we can quantify uncertainty with such exactitude, what’s the problem? I think part of it is that there are at least two sides to what is usually called ‘resolution’. These are sampling and localization. These related concepts each apply to both the time and frequency domains.

Sampling

Figure 1. Resolution and localization in digital photography. (a) Sharp image at full resolution, 512 × 512 pixels, shown here at about 340 dpi. Very small-scale features are resolved. (b) The same image with a 25 × 25 pixel Gaussian smoothing filter applied. (c) The same image downsampled by a 10 × 10 pixel filter, to 51 × 51 pixels, or 34 dpi at this scale. (d) The Gaussian-filtered image in (b) after downsampling every 10th pixel. Sampling the smoothed image is not as detrimental as sampling the sharp image.

As scientists in the digital age, we are very familiar with the concept of sampling. It is analogous to resolution in digital photography, where it is usually measured in dots per inch (see Figure 1). It tells us how detailed our knowledge of something is.

In the same way that digital images are sampled in space, and digital signals are sampled in time, digital spectra are sampled in frequency. Intuitively, one might expect frequency sampling Δf to depend on time sampling Δt. In fact, time sampling determines maximum bandwidth (the Nyquist frequency is half the sampling frequency), but it does not affect sample spacing in the frequency domain. For the Fourier transform, the only control on frequency sampling is the length of the analysis window (see Figure 2). Specifically, sample spacing in frequency Δf is given by 1/T, where T is the window length in seconds. A 100 ms window means we get frequency information every 1/0.100 = 10 Hz; reduce the window to 40 ms and the chunks are much bigger: 1/0.040 = 25 Hz.
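This reciprocal relationship is easy to verify numerically. Here is a minimal sketch using NumPy; the 1000 Hz sample rate is an assumed value for illustration, and changing it leaves the frequency spacing untouched, exactly as described above:

```python
import numpy as np

fs = 1000.0  # assumed sampling frequency, Hz
spacings = {}
for T in (0.100, 0.040):                     # analysis window lengths, seconds
    n = int(round(T * fs))                   # samples in the window
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)   # frequencies of the FFT bins
    spacings[T] = round(freqs[1] - freqs[0], 6)  # sample spacing df = 1/T

print(spacings)  # -> {0.1: 10.0, 0.04: 25.0}
```

The 100 ms window yields 10 Hz bins and the 40 ms window 25 Hz bins, as in the text; only the window length T enters into the spacing.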

Figure 2. Sampling and localization. (a) A 128 ms, 32 Hz signal, padded to 256 ms. (b) Its spectrum, computed by Fast Fourier Transform (FFT); the short window means that the spectrum is sparsely sampled, since ∆f = 1/0.256 = 3.9 Hz. (c) The same 128 ms, 32 Hz signal, padded to 1024 ms. (d) Its spectrum; the longer window has improved the sampling in frequency (∆f = 1/1.024 = 0.977 Hz), but the uncertainty in the signal’s frequency is the same, indicated by the relatively wide central peak in the spectrum. (e) A 768 ms, 32 Hz signal, padded to 1024 ms. (f) Its spectrum, which has the same sampling as (d), but much better frequency localization. Adapted from Stockwell (1999[7]).
Figure 3. Measuring a single frequency. (a) Three 64 ms signals sampled at 1000 Hz with a frequency sample spacing of 1/0.064 = 15.6 Hz. The signals are 43 Hz (blue), 47 Hz (red), and 51 Hz (green) sine waves, with Hann windows applied. (b) The spectra are very similar, and would be unlikely to be discriminated, especially in the presence of noise.

What is the practical consequence of, say, poor frequency sampling? For one thing, it can limit our ability to accurately estimate the frequency of a signal. The true frequency of a given signal is most likely between two samples (see Figure 3). The discrepancy is on the order of plus or minus one quarter of the sample spacing, or 1/4T, and could therefore be large if the time window is short. For example, in Figure 3 the uncertainty is around ±4 Hz.
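This discrepancy can be checked directly: window a tone, pick the largest FFT bin, and compare with the true frequency. A sketch, using the 43 Hz tone and 64 ms Hann window of Figure 3 (the 1000 Hz sample rate follows the figure; the nearest-bin estimator is the simplest possible choice):

```python
import numpy as np

fs = 1000.0                     # sample rate, Hz
n = 64                          # 64 ms window -> df = 15.625 Hz
t = np.arange(n) / fs
true_f = 43.0                   # true frequency, which falls between two bins
x = np.sin(2 * np.pi * true_f * t) * np.hanning(n)  # Hann-tapered tone

freqs = np.fft.rfftfreq(n, d=1.0 / fs)
estimate = freqs[np.argmax(np.abs(np.fft.rfft(x)))]  # nearest-bin estimate
error = estimate - true_f       # a few Hz, on the order of a quarter of df
```

The estimate lands on the 46.875 Hz bin, an error of about 3.9 Hz, in line with the ±4 Hz quoted for Figure 3.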

As well as hindering our estimation of the frequency of a single signal, sampling impacts our ability to determine whether two or more distinct frequencies are present in the signal. Following the analogy with the time domain, or with digital images, this type of discrimination is normally what we mean by 'resolution'. Figure 4 shows that, in order to be able to accurately resolve the frequencies of two signals, they must be about twice the sample spacing apart or more.

Intuitively, we see that we need local maxima to be separated by a minimum. If they are closer than this, they may be ambiguous in the spectrum (the exact result depends on the window function and the actual frequencies). For the 64 ms Hann window shown in Figure 4, resulting in a 15.6 Hz sampling, we cannot use the Fourier transform to recognize the presence of two signals less than about 32 Hz apart. Any closer than this and they look like one broad spectral peak.
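The rule of thumb above, that two tones must be roughly twice the frequency sample spacing apart, reduces to simple arithmetic: the minimum resolvable separation is about 2/T. A sketch (the window lengths are illustrative choices):

```python
def min_separation_hz(window_ms):
    """Approximate minimum resolvable tone separation for a Fourier
    window of length window_ms: twice the sample spacing 1/T."""
    return 2.0 / (window_ms / 1000.0)

for w in (256, 128, 64, 32):
    print(f"{w} ms window -> ~{min_separation_hz(w):.1f} Hz")  # 64 ms -> ~31.2 Hz
```

The 64 ms window gives about 31 Hz, matching the "about 32 Hz apart" figure in the text.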

Figure 4. Resolving two signals. (a) A 64 ms signal sampled at 1000 Hz with a frequency sample spacing of 1/0.064 = 15.6 Hz. The signal consists of a 32 Hz and a 63 Hz sine wave, with a Hann window applied. (b) The spectrum does not resolve the two frequencies. (c) A 32 Hz with a 65 Hz signal, also with a Hann window. (d) The signals are more than twice the frequency sample spacing apart and are resolved.

What about non-Fourier techniques? There is an increasingly wide acceptance of other spectral decomposition algorithms, especially the S transform and various wavelet transforms (Castagna & Sun 2006). Unfortunately, the jargon and mathematics behind these techniques sometimes get in the way. I strongly recommend Hubbard (1998[8]) for those who are interested but perplexed; her book is very well written and accessible for any scientist.

One important aspect of these algorithms, from the point of view of understanding resolution, is that they are extensions of Fourier analysis. The difference is in the analyzing function. In Fourier analysis and in the S transform, it is a Gaussian function; in wavelet analysis it is a family of wavelets at different scales. It is noteworthy that one of the original families of wavelets (the often-used Morlet wavelets) uses the Gaussian function as the wavelet envelope. It turns out that the difference with Fourier analysis is not really in the mathematics, but in the way this function—a window—is applied to the signal. You may have heard it said that wavelet transforms do not use a window; this is not strictly true.

Figure 5. Heisenberg boxes convey the concept of varying time and frequency localization by visualizing an idealized time-frequency plane. (a) In the so-called Dirac basis the Heisenberg boxes become infinite vertical lines. (b) On the other hand, in the Fourier basis the spectrum is unlocalized in time. (c) The Fourier transform with a short time window gives poor localization in frequency. (d) A long window gives better frequency localization at the expense of time localization. (e) Wavelet transforms, and the S transform, have frequency-dependent localization: low frequencies are well localized in frequency but poorly localized in time, while the reverse is true for high frequencies.

Frequency domain sampling is less easily understood for these other transforms. What the S transform, wavelet transforms, and matching pursuit have in common is frequency-dependent resolution. That is, the sample spacing depends on the frequency. The easiest to grasp is perhaps the S transform (Stockwell et al. 1996[9]). It is essentially a Fourier transform for which the window length varies with the frequency content of the signal, being short for high frequencies and long for low frequencies. This results in poor sampling for high frequencies, and good sampling for low frequencies (see Figure 5). Wavelet transforms take a more sophisticated (and computationally expensive) approach, but the principle is the same. These schemes are often collectively called 'adaptive' or, more descriptively, 'multiresolution'.

In a typical wavelet transform implementation, we end up with a sample at the Nyquist frequency fN, another at fN/2, then at fN/4, fN/8, and so on. (It is not quite as simple as this because wavelets do not have precise frequencies but 'scales' instead). Thus, for a seismic signal sampled at 500 Hz (2 ms sample rate), Nyquist is 250 Hz and the samples are at about 250, 125, 63, 32, 16, 8, 4, 2, and 1 Hz. The highest possible frequencies in this case would be around 0.8 × Nyquist = 200 Hz, and the lowest frequencies are usually close to 10 Hz. This means there would be perhaps five useful coefficients (incidentally, this explains why wavelet transforms are very useful for data compression: the signal can be faithfully reconstructed from relatively few numbers).
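The dyadic picture above is easy to sketch. This is an idealization, since, as noted, wavelets have scales rather than single frequencies; the 10–200 Hz usable band is the rough figure from the text:

```python
fs = 500.0                      # 2 ms seismic sample rate -> 500 Hz
nyquist = fs / 2.0              # 250 Hz

# Dyadic sampling: one coefficient per octave down from Nyquist.
scales_hz = []
f = nyquist
while f >= 1.0:
    scales_hz.append(f)         # 250, 125, 62.5, 31.25, ... Hz
    f /= 2.0

# Keep only the scales inside a typical usable band of ~10-200 Hz.
useful = [s for s in scales_hz if 10.0 <= s <= 200.0]
print(useful)                   # -> [125.0, 62.5, 31.25, 15.625]
```

Only four or five octave-spaced samples fall inside the seismic band, which is why so few coefficients carry most of the signal, and why wavelet compression works so well.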

Localization

Although arguably less tangible than sampling, the concept of localization is also familiar to geophysicists. Broadly speaking, it is analogous to focus in digital photography: a measure of how smeared out something is. Note that this is quite different from sampling resolution, since something can be smeared out, but highly sampled (as in Figures 1b and 2d).

Localization in the time domain is fairly intuitive: it depends on bandwidth, tuning effects, and how stable the data are in phase. It limits how accurately and consistently we can interpret a given reflectivity spike in the seismic.

Figure 6. Leakage. (a) A relatively long-lived signal and (b) its spectrum. (c) The same signal, windowed with a boxcar function (shown) and (d) its spectrum, showing sidelobes and increased peak width. (e) The same signal, windowed with a Hann function (shown) and (f) its spectrum, showing no sidelobes but an even wider peak.

Localization in the frequency domain depends on leakage. If we compute the Fourier transform of any finite signal, its spectrum displays the phenomenon known as 'leakage'. For example, it is well-known that the Fourier transform of a boxcar—an untapered, rectangular window—is a sinc function of infinite length (see Figure 6). The infinite sidelobes of the sinc function, and the fact that the peak is not a spike but is spread out a bit, constitute leakage. Windowing, the process of selecting a region of data to analyse, causes leakage; it is not an artifact of Fourier analysis per se.

So there are two distinct aspects to localization in the frequency domain: spectral sidelobes and peak width. I will look at each of these in turn.

Tapering the window reduces sidelobes. While the spectrum of the boxcar function, and of any signal windowed with a boxcar, has infinite sidelobes, they are much reduced for a tapered function (see Figure 6). This is known as apodization, literally 'de-footing'. The most commonly used apodizing functions are the Gaussian, Hann, and Hamming functions, but there are many others. Like everything else in signal processing, there is a price to pay for suppressing the sidelobes: the spectral peak is spread out, and reduced in magnitude. The degree of spreading and the reduction in magnitude depend on the specific function selected.
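The sidelobe suppression can be measured. A NumPy sketch: window a tone placed on an exact FFT bin, zero-pad heavily to trace out the lobes, and report the largest magnitude outside the main lobe (the main-lobe half-widths of 1/T for the boxcar and 2/T for the Hann window are the standard null-to-null values; the sample rate and tone frequency are illustrative assumptions):

```python
import numpy as np

fs, n, pad = 1000.0, 64, 4096         # 64 ms window, heavy zero-padding
t = np.arange(n) / fs
tone = np.sin(2 * np.pi * 125.0 * t)  # 125 Hz sits exactly on a window bin

def max_sidelobe_db(sig, mainlobe_hz):
    """Largest spectral magnitude outside the main lobe, relative to the peak."""
    spec = np.abs(np.fft.rfft(sig, pad))
    freqs = np.fft.rfftfreq(pad, d=1.0 / fs)
    outside = spec[np.abs(freqs - 125.0) > mainlobe_hz]
    return 20.0 * np.log10(outside.max() / spec.max())

box_db = max_sidelobe_db(tone, 1000.0 / 64)                   # boxcar: ~ -13 dB
hann_db = max_sidelobe_db(tone * np.hanning(n), 2000.0 / 64)  # Hann:   ~ -31 dB
print(box_db, hann_db)
```

The boxcar's worst sidelobe sits only about 13 dB below the peak; the Hann taper pushes it down to roughly 31 dB, at the cost of a main lobe twice as wide.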

Spectral peak width is the other key component of localization. It is affected by the taper function but primarily controlled by the signal itself. It is approximately equal to 2/S, where S is the length of the non-zero part—the 'support'—of the signal, not the length of the window. In the case of seismic reflection data, we are dealing with an overlapping train of signals representing reflections from subsurface strata. The length S of each individual signal—the ones we are trying to characterize—is equal to the length of the seismic wavelet. How long is that? In theory it is infinite, but in practice the energy falls below the noise level at some point. In my experience, wavelets are effectively on the order of 64–128 ms long. So for a typical seismic dataset, the width of the spectral peak is about 16–31 Hz, with the larger uncertainty for the shorter wavelet. Of course, if the windowing function is shorter than the seismic wavelet, then the peak is even wider because a maximum cannot be less than two samples wide. We see that there is no benefit to making the window shorter than the seismic wavelet.

It is instructive to examine the effect of increasing window length on the spectrum of a short-lived signal, and to compare it with the effect of the same window on a longer-lived signal (Figure 2). We find that signals with short support are poorly localized in frequency. This frequency smearing has nothing to do with windowing: it is a property of nature.

Representing uncertainty

The relationship between time and frequency resolution is illustrated in Figure 5 with 'Heisenberg boxes'. Gabor called these boxes 'logons', which I prefer since we are dealing with Gabor's uncertainty principle. These idealized cartoons show how the time-frequency plane can be tiled with rectangles of size σt × σf, where σt is the standard deviation of the time estimate and σf is the standard deviation of the frequency estimate. As we saw earlier, this product must be at least 1/4π. In practice, this lower bound is usually not reached by time-frequency transforms, though some get closer to it than others. It would be interesting to see researchers publish the areas of the Gabor logons with which their algorithm tiles the time-frequency plane.

It is instructive to consider a specific example of Gabor uncertainty. If you know the time localization to within ±5 ms, say, then the frequency localization can be no better than about ±1/(0.005 × 4π) ≈ ±16 Hz. It is not possible to know the frequency of a signal more accurately than this without compromising the timing accuracy. Uncertainties of this magnitude are important to us as seismic interpreters, especially if we are using quantitative spectral methods.
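The arithmetic behind this example, and the 80 ms.Hz figure quoted earlier, is worth doing explicitly:

```python
import math

sigma_t = 0.005                            # time localization: +/- 5 ms, in seconds
sigma_f = 1.0 / (4.0 * math.pi * sigma_t)  # best-case frequency localization, Hz
product = (sigma_t * 1000.0) * sigma_f     # the bound in ms.Hz (millicycles)
print(sigma_f, product)                    # approx. 15.9 Hz and 79.6 ms.Hz
```

Tightening the timing to ±2 ms pushes the frequency uncertainty to about ±40 Hz; the product is pinned at 1000/4π ≈ 80 ms.Hz no matter how the budget is split.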

The effect of sampling is also measurable. Imagine you are trying to use spectral decomposition for a quantitative analysis of thin beds in your seismic. As discussed above, the length of the window you select determines the sample spacing in the frequency domain. If you select a 50 ms window, for example, then you will have information about the 20 Hz response, the 40 Hz response, the 60 Hz response, and so on. Any other information (that is, other slices in the so-called ‘tuning cube’), is unknowable with this window length and is therefore essentially interpolated data. A 20 ms window, which offers improved temporal resolution, provides you with spectral information at 50 Hz, 100 Hz, 150 Hz, etc. In striving to probe those thin beds, you may have ended up with only three or four data points within the bandwidth of your data. Any thickness estimates cannot hope to do better than ‘thin’, ‘thick’, and ‘thicker’. Chances are, you can interpret the ‘thick’ and ‘thicker’ beds with conventional interpretation of reflectors.
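The trade-off in this paragraph can be tabulated. In this sketch, `tuning_slices` is a hypothetical helper, and the 10–80 Hz band is an assumed seismic bandwidth chosen for illustration:

```python
def tuning_slices(window_ms, f_low=10.0, f_high=80.0):
    """Frequencies actually measured by a spectral-decomposition window
    of length window_ms, within an assumed f_low-f_high Hz bandwidth."""
    df = 1000.0 / window_ms      # frequency sample spacing, Hz
    out = []
    k = 1
    while k * df <= f_high:
        if k * df >= f_low:
            out.append(k * df)   # a real, measured tuning-cube slice
        k += 1
    return out

print(tuning_slices(50))  # -> [20.0, 40.0, 60.0, 80.0]
print(tuning_slices(20))  # -> [50.0]
```

The 50 ms window yields four genuine slices in the band; the 20 ms window, for all its temporal resolution, yields just one. Everything in between is interpolation.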

The effect of noise

Figure 7. The effect of adding noise. (a) A 32 Hz signal, with 128 ms Hann taper applied. The total signal length is 512 ms, to give good sampling in frequency. (b) The spectrum shows a good peak at 31.25 Hz (the nearest sample to 32 Hz), but with some leakage. (c) The same signal with 30% Gaussian noise added. (d) Its spectrum, showing considerable localization uncertainty, with a peak anywhere in the range 25 to 37 Hz.

The effect of noise is, predictably, to increase the uncertainty. Of course, the sampling in frequency does not change, but the localization does, as shown in Figure 7. This example also shows that the power of the frequency peak does not change significantly, but the background level does, so the net result is a decrease in the detectability of anomalies in the time-frequency plane.

Some transforms are affected more than others by noise. For example, wavelet transforms resolve high frequencies poorly, so are relatively immune to noise. On the other hand, so-called ‘atomic’ decomposition methods like matching pursuit are strongly affected by the presence of noise. These techniques attempt to preserve time localization by matching small pieces of the signal with time-frequency ‘atoms’ from an extensive ‘dictionary’ (surely it should be a periodic table?).


The spectral analysis of signals is governed by Gabor’s uncertainty principle, itself derived from the Cauchy–Schwarz inequality and analogous to Heisenberg’s uncertainty principle. Rather than limiting what we can measure, it describes a fundamental aspect of reality: signals do not have arbitrarily precise time and frequency localization. It doesn’t matter how you compute a spectrum: if you want time information, you must pay for it with frequency information. Specifically, the product of time uncertainty and frequency uncertainty must be at least 1/4π.

For good localization in time, use a short window T for Fourier methods (for wavelet transforms, analyse with small-scale—high frequency—wavelets). When you do this, you increase the frequency sampling interval, so you lose frequency resolution. Because of this sampling, you cannot tell exactly what frequency a signal is (±1/2T), and you cannot discriminate similar frequencies (closer than 2/T).

For good localization in frequency, use a long window (or analyse with large-scale wavelets). When you do this, you decrease the frequency sampling interval and get better frequency resolution, but the long window results in poor time localization.

Regardless of Fourier windows or wavelet scales, short-lived signals have wide spectral peaks and thus poor frequency localization. Only long-lived signals have narrow spectral peaks and correspondingly good frequency localization. This is the essence of Gabor’s uncertainty principle.

References

  1. Marfurt, K. and Kirlin, R. [2001]. Narrow-band spectral analysis and thin-bed tuning. Geophysics 66, 1274–1283.
  2. Partyka, G., Gridley, J. and Lopez, J. [1999]. Interpretational applications of spectral decomposition in reservoir characterization. The Leading Edge 18, 353–360.
  3. Castagna, J., Sun, S. and Siegfried, R. [2003]. Instantaneous spectral analysis: Detection of low-frequency shadows associated with hydrocarbons. The Leading Edge 22, 127–129.
  4. Castagna, J. and Sun, S. [2006]. Comparison of spectral decomposition methods. First Break 24 (3), 75–79.
  5. Heisenberg, W. [1927]. Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik, Zeitschrift für Physik, 43, 172–198. English translation: Quantum Theory and Measurement, J. Wheeler and H. Zurek [1983]. Princeton University Press, Princeton.
  6. Gabor, D. [1946]. Theory of communication. Journal of the Institution of Electrical Engineers, 93, 429–457.
  7. Stockwell, R. [1999]. S-transform analysis of gravity wave activity from a small scale network of airglow imagers. PhD thesis, University of Western Ontario, London, Canada.
  8. Hubbard, B. [1998]. The world according to wavelets: the story of a mathematical technique in the making. A.K. Peters, Natick.
  9. Stockwell, R., Mansinha, L. and Lowe, R. [1996]. Localization of the complex spectrum: the S transform. Institute of Electrical and Electronic Engineers Transactions on Signal Processing 44, 998–1001.