=== Continuous-time white noise ===
In order to define the notion of "white noise" in the theory of [[continuous-time]] signals, one must replace the concept of a "random vector" by a continuous-time random signal; that is, a random process that generates a function w of a real-valued parameter t.

Such a process is said to be '''white noise''' in the strongest sense if the value w(t) for any time t is a random variable that is statistically independent of its entire history before t. A weaker definition requires independence only between the values w(t_1) and w(t_2) at every pair of distinct times t_1 and t_2. An even weaker definition requires only that such pairs w(t_1) and w(t_2) be uncorrelated.<ref>[http://economics.about.com/od/economicsglossary/g/whitenoise.htm ''White noise process'']. By Econterms via About.com. Accessed on 2013-02-12.</ref> As in the discrete case, some authors adopt the weaker definition for "white noise", and use the qualifier '''independent''' to refer to either of the stronger definitions. Others use '''weakly white''' and '''strongly white''' to distinguish between them.

However, a precise definition of these concepts is not trivial, because some quantities that are finite sums in the finite discrete case must be replaced by integrals that may not converge. Indeed, the set of all possible instances of a signal w is no longer a finite-dimensional space \mathbb{R}^n, but an infinite-dimensional [[function space]]. Moreover, by any definition a white noise signal w would have to be essentially discontinuous at every point; therefore even the simplest operations on w, such as integration over a finite interval, require advanced mathematical machinery.

Some authors require each value w(t) to be a real-valued random variable with some finite variance \sigma^2. Then the covariance \mathrm{E}(w(t_1)\cdot w(t_2)) between the values at two times t_1 and t_2 is well defined: it is zero if the times are distinct, and \sigma^2 if they are equal.
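The covariance structure described above can be illustrated in the discrete-time analogue, where white noise is simply a sequence of independent, identically distributed zero-mean samples. The following is a minimal sketch (using NumPy, with an arbitrary seed and sigma chosen for illustration) that checks empirically that the covariance is approximately \sigma^2 at equal times and approximately zero at distinct times; the continuous-time object itself cannot be sampled pointwise, for the reasons the text explains.

```python
import numpy as np

# Discrete-time analogue of white noise: i.i.d. zero-mean Gaussian
# samples with variance sigma^2 (illustrative values, not from the text).
rng = np.random.default_rng(0)
sigma = 2.0
n = 100_000
w = rng.normal(0.0, sigma, size=n)

# Empirical covariance E(w(t1) * w(t2)):
var_hat = np.mean(w * w)           # equal times (lag 0) -> about sigma^2
cov_hat = np.mean(w[:-1] * w[1:])  # distinct times (lag 1) -> about 0

print(abs(var_hat - sigma**2) < 0.1)  # True: variance matches sigma^2
print(abs(cov_hat) < 0.1)             # True: distinct times uncorrelated
```

With 100,000 samples the estimation error of both quantities is well below the 0.1 tolerance, so the two checks reflect the zero/\sigma^2 covariance dichotomy stated above.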
However, by this definition, the integral
: W_{[a,b]} = \int_a^b w(t)\, dt
over any interval with positive width b-a would be zero with probability one. This property would render the concept inadequate as a model of physical signals. Therefore, most authors define the signal w indirectly, by specifying non-zero values for the integrals of w(t) and |w(t)|^2 over each interval [a,b], as a function of its width b-a.

In this approach, however, the value of w(t) at an isolated time cannot be defined as a real-valued random variable. Also, the covariance \mathrm{E}(w(t_1)\cdot w(t_2)) becomes infinite when t_1 = t_2, and the [[autocorrelation]] function R(t_1,t_2) must be defined as N \delta(t_1 - t_2), where N is some real constant and \delta is [[Dirac delta function|Dirac's "function"]].

In this approach, one usually specifies that the integral W_I of w(t) over an interval I = [a,b] is a real random variable with normal distribution, zero mean, and variance (b-a)\sigma^2; and also that the covariance \mathrm{E}(W_I\cdot W_J) of the integrals W_I, W_J is r\sigma^2, where r is the width of the intersection I \cap J of the two intervals I, J. This model is called a '''Gaussian white noise signal''' (or '''process''').

==References==
{{reflist}}