Talk:Convolution

Derivative of convolution

The Convolution#Differentiation rule section was recently updated from:

\mathcal{D}(f * g) = \mathcal{D}f * g = f * \mathcal{D}g

to

\mathcal{D}(f * g) = \mathcal{D}f * g + f * \mathcal{D}g

I'm pretty sure it was correct the first time.

We know (using Laplace transform#Proof of the Laplace transform of a function's derivative) that:

 \mathcal{L}\{\mathcal{D}f\} = s\mathcal{L}\{f\}
 \mathcal{L}\{\mathcal{D}g\} = s\mathcal{L}\{g\}

and that:


\begin{align}
  \mathcal{L}\{\mathcal{D}\{f * g\}\} &= s\mathcal{L}\{f * g\}  \\
                                      &= s\mathcal{L}\{f\}\mathcal{L}\{g\}  \\
                                      &= \mathcal{L}\{\mathcal{D}f\}\mathcal{L}\{g\} \\
                                      &= \mathcal{L}\{f\}\mathcal{L}\{\mathcal{D}g\}
\end{align}
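
Completing the argument: by the convolution theorem, each product of transforms is itself the transform of a convolution, so

\begin{align}
  \mathcal{L}\{\mathcal{D}\{f * g\}\} &= \mathcal{L}\{\mathcal{D}f\}\mathcal{L}\{g\} = \mathcal{L}\{\mathcal{D}f * g\} \\
                                      &= \mathcal{L}\{f\}\mathcal{L}\{\mathcal{D}g\} = \mathcal{L}\{f * \mathcal{D}g\}
\end{align}

and uniqueness of the Laplace transform then gives \mathcal{D}(f * g) = \mathcal{D}f * g = f * \mathcal{D}g.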

Therefore, I've changed it back for now. Oli Filth 15:36, 29 August 2007 (UTC)

Sorry, my mistake, thank you for correcting me :) Crisófilax 16:16, 30 August 2007 (UTC)

MathWorld lists it as the sum of the two terms: http://mathworld.wolfram.com/Convolution.html Can someone look it up in a textbook or verify numerically in Matlab? I'm changing it back to a sum. AhmedFasih (talk) 18:45, 25 February 2008 (UTC)

Update: I tried a simple test in Matlab (convolving a triangle with a sinusoid, then differentiating); the MathWorld version, D(f*g) = Df*g + f*Dg, is numerically equivalent to the sum of the two expressions previously given here. I am inclined to believe the MathWorld version. AhmedFasih (talk) 18:56, 25 February 2008 (UTC)
A few points:
  • I'm aware that MathWorld differs, but I'd stake my life on the fact that it's incorrect on this one.
  • See the derivation above for why I think MathWorld is wrong.
  • Numerical evaluations of discrete-time convolution can't prove anything about the continuous-time convolution. (The most they can do is indicate what may be the case.)
  • However, it would seem you've messed up your experiment; try the code below:
% Triangle (g) and sinusoid (f) on a common grid
t = -4*pi : 0.01 : 4*pi;
f = sin(t);
g = zeros(size(t));
g(length(t)/2 - 1 - (0:200)) = linspace(1, 0, 201);   % left half of the triangle
g(length(t)/2 + (0:200)) = linspace(1, 0, 201);       % right half of the triangle

% First differences approximate the derivatives
Df = f(2:end) - f(1:end-1);
Dg = g(2:end) - g(1:end-1);

Df_g = conv(Df, g);
f_Dg = conv(f, Dg);
fg = conv(f, g);
Dfg = fg(2:end) - fg(1:end-1);   % derivative of the convolution

% Dfg, f_Dg and Df_g all coincide; Df_g + f_Dg is twice as large
figure
cla, hold on, plot(Dfg, 'b'), plot(f_Dg, 'r'), plot(Df_g, 'k'), plot(Df_g + f_Dg, 'm')
Obviously, if D(f*g) = D(f)*g = f*D(g), then D(f)*g + f*D(g) = 2·(D(f)*g), which is exactly what the example above shows.
  • Either way, you and I playing around with Matlab is original research; this can't be the basis of anything in the article.
Based on all of this, I'm going to remove the statement of the "convolution rule" until we can get this straightened out. Oli Filth(talk) 20:04, 25 February 2008 (UTC)
Actually, I'm not. See the ref that Michael Slone cited below, or [1], or p. 582 of Digital Image Processing, Gonzalez & Woods, 2nd ed. I think we can safely assume that MathWorld is wrong on this one. Oli Filth(talk) 20:14, 25 February 2008 (UTC)
Yes, MathWorld just flubbed it. The derivative is just another impulse-response convolution, and these commute. There's no addition involved; whoever edited that page probably got confused, thinking the * was a multiply. Dicklyon (talk) 03:56, 26 February 2008 (UTC)
FWIW, MathWorld is now corrected. Oli Filth(talk) 22:05, 2 April 2008 (UTC)

In the discrete case (if one sums over all of Z), one can directly compute that D(f * g) = (Df * g). Theorem 9.3 in Wheeden and Zygmund asserts (omitting some details) that if f is in L^p and K is a sufficiently smooth function with compact support, then D(f*K) = f*(DK). The proof appears on pp. 146–147. I am no analyst, but this appears to support the claim that convolution does not respect the Leibniz rule. Michael Slone (talk) 20:03, 25 February 2008 (UTC)
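
Spelling out the discrete computation (with (\mathcal{D}f)(n) = f(n+1) - f(n) and all sums over Z, so there are no boundary terms):

\begin{align}
  \mathcal{D}(f * g)(n) &= \sum_{k} f(k)\,g(n+1-k) - \sum_{k} f(k)\,g(n-k) \\
                        &= \sum_{k} f(k)\,[g(n+1-k) - g(n-k)] \\
                        &= \sum_{k} f(k)\,(\mathcal{D}g)(n-k) \\
                        &= (f * \mathcal{D}g)(n)
\end{align}

and by commutativity the same computation gives (\mathcal{D}f * g)(n).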

I feel you; I just looked it up in my Kamen/Heck Fundamentals of Signals and Systems, 2nd ed., p. 125, and you are 100% right. Whew, a research problem just got a little bit easier, thanks much. AhmedFasih (talk) 13:11, 26 February 2008 (UTC)

The visualization figure

...in the article is good. But it would be even better if the resulting convolved function were also shown. It can be a little hard to picture what the integral of the product of the two shown functions looks like as they slide over each other. I am not new to convolution; I just have not used it for seven years or so and came here to quickly recap what it is all about. For such a use case, a good figure is very powerful. -- Slaunger 14:15, 24 October 2007 (UTC)

I agree that the visualization figure is very powerful, but I am reverting to the previous version because I believe the Visual explanation of convolution figure is now cluttering the article; it is essentially a helper to the text. It was cleaner before. This is my opinion; if anyone else disagrees we could discuss. --D1ma5ad (talk) 22:28, 2 March 2008 (UTC)

Why the time inversion?

The article doesn't explain why g is reversed. What is the point of time-inverting it? Egriffin 17:46, 28 October 2007 (UTC)

What do you mean by "the point"? Convolution is defined with time inversion, and as such happens to have many useful applications. If you don't perform the time inversion, you have cross-correlation instead, which also has lots of useful applications. Oli Filth(talk) 17:56, 28 October 2007 (UTC)
Or why g instead of f? If you look at it, it makes no difference, since the variable of integration could just as well run the other way, and gets integrated out. In the result, you'll find that if either f or g is shifted to later, then their convolution shifts to later. For this to work, the integral needs to measure how they align against each other in opposite order. But think of the variable of integration as some "sideways" dimension, not time, and there's no "time reversal" to bother you. Or think in terms of the PDF of the sum of two independent random variables: their PDFs convolve, as you can work out, but there is no time involved and no reversal except relative inside the integral. Dicklyon (talk) 16:27, 26 February 2008 (UTC)
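
A quick numerical illustration of that last point (a sketch, not a proof; the uniform random variables are just an arbitrary choice):

N = 1e6;                         % two independent Uniform(0,1) samples
x = rand(N, 1);
y = rand(N, 1);
histogram(x + y, 'Normalization', 'pdf'), hold on

du = 0.001;                      % grid for the two rectangular PDFs
u = 0 : du : 1;
p = conv(ones(size(u)), ones(size(u))) * du;   % their convolution: a triangle on [0, 2]
plot(linspace(0, 2, length(p)), p, 'r', 'LineWidth', 2)

The empirical PDF of the sum lands on the triangle, i.e. the convolution of the two PDFs, with no notion of time anywhere.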


It helps to think of \int_{-\infty}^{\infty} f(\tau) g(t - \tau)\, d\tau as a weighted average of the function g(τ):
    • up to the moment "t", if f(τ) happens to be zero for all negative values of τ, or
    • centered around the moment "t", if f(τ) happens to be symmetrical around τ = 0.
The weighting coefficient, f(τ), for a positive value of τ, is the weight applied to the value of function g that occurred τ units (e.g. "seconds") prior to the moment "t". You may either infer that from the formula, or you may define  f(τ) that way and infer (i.e. derive) the formula from that definition.
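For example, a causal, normalized weight in Matlab (a minimal sketch; the exponential weight is just an arbitrary choice):

dt = 0.01;
tau = 0 : dt : 5;                % weight support: tau >= 0 only (causal)
f = exp(-tau);                   % the weighting coefficients f(tau)
f = f / (sum(f) * dt);           % normalize so the weights integrate to 1

t = 0 : dt : 20;
g = sin(t);                      % the signal being averaged
fg = conv(g, f) * dt;            % approximates the convolution integral
fg = fg(1 : length(t));          % keep the samples aligned with t
plot(t, g, t, fg)                % fg is a smoothed, lagged average of g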
Maybe your point is that something like this needs to be stated in the article, not here.
--Bob K (talk) 12:03, 1 May 2008 (UTC)

Note on associativity

(H * \delta') * 1 = (H' * \delta) * 1 = (\delta * \delta) * 1 = \delta * 1 = 1

H * (\delta' * 1) = H * (\delta * 1') = H * (\delta * 0) = H * 0 = 0

where H is the Heaviside step function, whose derivative is the Dirac delta function (the identities used are spelled out below).

    • Sorry for the form in which I am presenting this; I am not very familiar with entering math equations.
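
For anyone checking the steps, the identities used are:

\delta' * u = u', \qquad \delta * u = u, \qquad H' = \delta, \qquad 1' = 0

The two groupings disagree because associativity of convolution is only guaranteed under integrability hypotheses (for example, all three factors in L^1), and neither H nor the constant function 1 is integrable.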

Intro section

The statement "Such a convolution is the summation of the impulse responses weighted by the input amplitude" is true when the input comprises only impulses. But that is not the context of the first section.
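
In the discrete case, for example, the statement is exact when the input really is a string of impulses; a quick Matlab sketch (the particular h and x are arbitrary):

h = [1 0.5 0.25];                % impulse response (arbitrary example)
x = [2 0 -1];                    % input consisting only of impulses
y1 = conv(x, h);

% Superposition: shifted copies of h, each weighted by the input amplitude
y2 = 2*[h 0 0] + 0*[0 h 0] + (-1)*[0 0 h];

isequal(y1, y2)                  % returns logical 1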

--Bob K (talk) 15:20, 2 May 2008 (UTC)