Gabor uncertainty

From SubSurfWiki

Gabor uncertainty is a limit to the accuracy of time-frequency analysis, analogous to Heisenberg's uncertainty principle.

Limitation on time-frequency resolution

Dennis Gabor (1946[1]) was the first to realize that the uncertainty principle applies to information theory and signal processing. Thanks to wave-particle duality, signals turn out to be exactly analogous to quantum systems. As a result, the exact time and frequency of a signal can never be known simultaneously: a signal cannot plot as a point on the time-frequency plane. This uncertainty is a property of signals, not a limitation of mathematics.

Heisenberg’s uncertainty principle is usually written in terms of the standard deviation σ of position x, the standard deviation of momentum p, and Planck’s constant h[2]:

\sigma_x \sigma_p \geq \frac{h}{4\pi} \approx 5.3 \times 10^{-35}\ \mathrm{m}^2\,\mathrm{kg}\,\mathrm{s}^{-1}

In other words, the product of the uncertainties of position and momentum is no smaller than some non-zero constant. For signals, we do not need Planck’s constant to scale the relationship to quantum dimensions, but the form is the same. If the standard deviations of the time and frequency estimates are σt and σf respectively, then we can write Gabor’s uncertainty principle thus:

\sigma_t \sigma_f \geq \frac{1}{4\pi} \approx 0.08\ \mathrm{cycles}

So the product of the standard deviations of time, in milliseconds, and frequency, in hertz, must be at least about 80 (the units are ms·Hz, or millicycles).
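As a rough numerical check, the sketch below (a minimal Python/NumPy example, assuming an arbitrary Gaussian pulse, for which the bound is met with equality) estimates σt and σf from the energy densities of a signal and its spectrum, and compares their product with 1/4π:

 import numpy as np

 # Gaussian pulse; the sample interval and width are arbitrary choices.
 dt = 1e-4                              # sample interval, s
 t = np.arange(-1.0, 1.0, dt)           # time axis, s
 sigma = 0.02                           # Gaussian width, s
 s = np.exp(-t**2 / (2 * sigma**2))

 # Spectrum and its frequency axis.
 S = np.fft.fft(s)
 f = np.fft.fftfreq(len(s), d=dt)

 def spread(x, density):
     """Standard deviation of x, weighted by an energy density."""
     w = density / density.sum()
     mean = np.sum(x * w)
     return np.sqrt(np.sum((x - mean)**2 * w))

 sigma_t = spread(t, np.abs(s)**2)      # time spread, s
 sigma_f = spread(f, np.abs(S)**2)      # frequency spread, Hz

 # For a Gaussian the product equals the lower bound, 1/4pi, about 0.0796.
 print(sigma_t * sigma_f, 1 / (4 * np.pi))

Swapping in any other pulse shape should give a product strictly greater than 1/4π.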

References

  1. Gabor, D. (1946). Theory of communication. Journal of the Institution of Electrical Engineers, 93, 429–457.
  2. Hall, M. (2006). Resolution and uncertainty in spectral decomposition. First Break, 24 (12), December 2006. DOI 10.3997/1365-2397.2006027.

Reading

Sorry, this is just a link dump for now...

Questions

I am still struggling with the question of whether Gabor uncertainty is a feature of the Fourier transform, of windowing, or of signals themselves. I am not sure exactly what the last option would mean, but perhaps something like 'a property of all resonators', or 'a consequence of the definitions of time and frequency'.

People talk about the STFT as having 'inaccurate' time-frequency resolution, but it simply computes what you need for perfect signal reconstruction, so it is no less 'true' than any other decomposition. Other transforms may yield 'sparser' decompositions, so what are our criteria for a 'good' t-f decomposition? High contrast? Few coefficients? (Are those the same thing?) If we don't know the t-f character of the input, how can we tell which decomposition is best? Does it just come back to the question of whether seismic data is 'sparse'?
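One way to make the windowing trade-off concrete is the sketch below (Python, assuming a toy signal of two tone bursts 4 Hz apart and arbitrary window lengths of 25 ms and 250 ms; scipy.signal.stft does the decomposition). The short window steps finely in time but its frequency bins are too coarse to separate the bursts; the long window separates them at the cost of time resolution.

 import numpy as np
 from scipy.signal import stft

 fs = 1000                                    # sample rate, Hz
 t = np.arange(0, 2, 1 / fs)
 # Two short tone bursts, 4 Hz apart (30 Hz and 34 Hz), as a toy signal.
 x = np.sin(2 * np.pi * 30 * t) * (t < 0.5)
 x = x + np.sin(2 * np.pi * 34 * t) * ((t > 1.0) & (t < 1.5))

 for nperseg in (25, 250):                    # window length, samples
     f, tau, Z = stft(x, fs=fs, nperseg=nperseg)   # Z: complex STFT coefficients
     df = f[1] - f[0]                         # frequency bin spacing, Hz
     dtau = tau[1] - tau[0]                   # time step between frames, s
     print(f"{nperseg / fs * 1000:.0f} ms window: "
           f"{df:.0f} Hz bins, {dtau * 1000:.0f} ms steps")

Neither result is wrong; each window length simply commits to a different point on the time-frequency trade-off.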

An experiment I'd like to do: analog leakage. Can you hold a tuning fork near a slightly de-tuned instrument and see it resonate more as shorter and shorter notes are played? If so, then I feel like it's a property of signals.

External links