
Summability, Tauberian theorems, and Fourier series

Consider a periodic, integrable function f. Its Fourier series \sum_{n=-\infty}^{\infty} \hat{f}(n) e^{inx} always exists formally, but whether this series converges back to f can be rather subtle. In general, f may differ from its Fourier series: each Fourier coefficient is given by an integral, so f can be changed at a single point without affecting the value of any Fourier coefficient. It may be natural to believe that convergence holds when f is continuous, but an example constructed by du Bois-Reymond showed that continuity is not sufficient. A landmark result in this area is Carleson’s theorem, which shows that L^2-functions have Fourier series converging almost everywhere.

Here we will prove that the Fourier series of f converges to f in the sense of Cesàro (or Abel), and we will use a Tauberian-type theorem to upgrade this to ordinary convergence when f is “smooth”. In particular, the main theorem we will show is that for a periodic, integrable function, its Fourier series converges at every point of continuity provided its Fourier coefficients decay faster than 1/n.

If f:[-\pi, \pi] \rightarrow \mathbb{C} is integrable, \abs{\hat{f}(n)}=o(1/n), and f is continuous at x, then \sum_{|n| \leq N} \hat{f}(n)e^{inx} \rightarrow f(x) as N\rightarrow \infty.
The sufficient condition o(1/n) in the above theorem can be relaxed to O(1/n) (a theorem of Hardy), but we won’t prove the stronger statement here.

What types of functions are characterized by decaying Fourier coefficients?
A simple calculation with integration by parts shows that if f is continuously differentiable, then \abs{n\hat{f}(n)} = \abs{\hat{f'}(n)}. Since \abs{\hat{f'}(n)}=o(1) by the Riemann-Lebesgue lemma, we have the following:

If f:[-\pi, \pi] \rightarrow \mathbb{C} is continuously differentiable, then its Fourier series converges pointwise to f.
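For completeness, the integration by parts behind this runs as follows, assuming f is continuously differentiable and periodic (so the boundary term vanishes):

    \begin{eqnarray*} \hat{f'}(n) & = & \frac{1}{2\pi} \int_{-\pi}^{\pi} f'(x) e^{-inx} dx \\ & = & \frac{1}{2\pi} \Big[ f(x) e^{-inx} \Big]_{-\pi}^{\pi} + \frac{in}{2\pi} \int_{-\pi}^{\pi} f(x) e^{-inx} dx \\ & = & in \, \hat{f}(n), \end{eqnarray*}

so \abs{n \hat{f}(n)} = \abs{\hat{f'}(n)} as claimed.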

In general, pointwise convergence also holds under weaker smoothness conditions, such as Hölder continuity.

Summability

We first take a detour (away from the Fourier setting) to investigate different notions of summability for a sequence. The standard notion of summability captures the intuition that as one adds more terms of the sequence, the running sum gets closer to its limit. More precisely,

A sequence \{a_n\}, a_n \in \mathbb{C} is summable to s if its sequence of partial sums \{s_n\} where s_n = \sum_{i=1}^n a_i converges to s.

Under this definition, the partial sums of Grandi’s series \{1, -1, 1, -1, \ldots \} oscillate between 1 and 0, and thus the series diverges. However, one could argue that this series “hovers” around 1/2: indeed, the averages of its partial sums \{1, 0, 1, 0, \ldots \} converge to 1/2. This motivates the following alternate definition of summability:

A sequence \{a_n\}, a_n \in \mathbb{C} is Cesàro-summable to s if its sequence of Cesàro means \{\sigma_N\} converges to s, where \sigma_N = \frac{1}{N} \sum_{n=1}^N s_n and s_n is the n-th partial sum \sum_{i=1}^n a_i.

It is instructive to rewrite the Cesàro mean as

    \begin{eqnarray*} \sigma_N & = & \frac{1}{N} \sum_{n=1}^{N} s_n \\  & = & \frac{1}{N} \sum_{n=1}^{N} \sum_{i=1}^{n} a_i \\  & = & \sum_{n=1}^{N}  \left(\frac{N - n + 1}{N}\right) a_n. \end{eqnarray*}

We can see that the Cesàro mean is a weighted average of the a_n's, where the later terms are discounted by a linear factor. With this insight, it is intuitive that if a sequence is summable, then it is also Cesàro-summable: a summable sequence must have a vanishing “tail”, so averaging its partial sums does not change the limit; a formal proof is given below.
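As a quick numerical sanity check, here is a minimal Python sketch (the helper cesaro_means is ad hoc, not from any library) confirming that the Cesàro means of Grandi's series settle down to 1/2:

    # Cesàro means of Grandi's series 1, -1, 1, -1, ...
    from itertools import accumulate

    def cesaro_means(a):
        """Return the Cesàro means sigma_1, ..., sigma_N of a finite sequence a."""
        partial_sums = list(accumulate(a))                      # s_1, ..., s_N
        running = list(accumulate(partial_sums))                # s_1 + ... + s_n
        return [t / n for n, t in enumerate(running, start=1)]  # sigma_n

    grandi = [(-1) ** n for n in range(1000)]   # 1, -1, 1, -1, ...
    print(cesaro_means(grandi)[-1])             # prints 0.5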

We can consider another definition of summability where the discount factor is exponential:

A sequence \{a_n\}, a_n \in \mathbb{C} is Abel summable to s if for every r \in [0, 1), A(r):=\sum_{n=1}^{\infty} r^n a_n converges and \lim_{r \rightarrow 1} A(r) = s.
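For example, for Grandi's series (a_1 = 1, a_2 = -1, and so on), the Abel sums can be computed in closed form:

    \[ A(r) = \sum_{n=1}^{\infty} (-1)^{n+1} r^n = \frac{r}{1+r} \longrightarrow \frac{1}{2} \quad \text{as } r \rightarrow 1, \]

so Grandi's series is Abel-summable to 1/2, matching its Cesàro sum.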

Intuitively, Abel summability ought to capture a larger class of sequences. In fact, one can summarize this entire section in the following:

If a sequence is summable to s, then it is Cesàro-summable to s. If a sequence is Cesàro-summable to s, then it is also Abel-summable to s.
Suppose first that \{a_n\} is summable to s. For every \epsilon > 0, there exists N_0 such that for every n \geq N_0, |s_n - s| \leq \epsilon / 2. Let D:= \sum_{n=1}^{N_0} |s_n - s| and N_1 := \max(N_0, 2D/\epsilon). Then for every N \geq N_1, we have

    \begin{eqnarray*} \abs{\sigma_N - s} & = & \abs{\frac{1}{N} \sum_{n=1}^N (s_n - s)} \\  & \leq & \frac{1}{N} \sum_{n=1}^N \abs{s_n - s} \\  & = & \frac{1}{N} \sum_{n=1}^{N_0} \abs{s_n - s} + \frac{1}{N} \sum_{n > N_0}^N \abs{s_n - s} \\  & \leq & \frac{D}{N} + \frac{\epsilon}{2} \leq \epsilon, \end{eqnarray*}

since N \geq 2D/\epsilon. This proves that \sigma_N \rightarrow s as N \rightarrow \infty.

Now suppose that \{\sigma_n\} converges to s. Define s_0 = 0, so that a_n = s_n - s_{n-1}. Note that s_n = n\sigma_n - (n-1)\sigma_{n-1} = O(n) since \{\sigma_n\} is bounded, so the series below converge absolutely for every r \in [0,1) and the rearrangements are justified. Then we can write

    \begin{eqnarray*} A(r) & = & \sum_{n=1}^{\infty} a_n r^n \\  & = & \sum_{n=1}^{\infty} (s_n - s_{n-1}) r^n \\  & = & \sum_{n=1}^{\infty} s_n r^n - r \sum_{n=1}^{\infty} s_{n-1} r^{n-1} \\  & = & (1 - r) \sum_{n=1}^{\infty} s_n r^n. \end{eqnarray*}

Since s_n = n \sigma_n - (n-1) \sigma_{n-1}, using a similar calculation as above, we can write

    \[ A(r) = (1-r)^2 \sum_{n=1}^{\infty} n \sigma_n r^n. \]
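In detail, since A(r) = (1-r) \sum_{n=1}^{\infty} s_n r^n from the previous display, the same telescoping trick applied to s_n = n \sigma_n - (n-1) \sigma_{n-1} (with \sigma_0 := 0) gives

    \begin{eqnarray*} \sum_{n=1}^{\infty} s_n r^n & = & \sum_{n=1}^{\infty} n \sigma_n r^n - r \sum_{n=1}^{\infty} (n-1) \sigma_{n-1} r^{n-1} \\ & = & (1 - r) \sum_{n=1}^{\infty} n \sigma_n r^n, \end{eqnarray*}

and multiplying by (1-r) yields the formula above.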

Since \{\sigma_n\} converges (and is thus bounded), \sum_{n=1}^{\infty} n \sigma_n r^n converges for every r \in [0, 1) and so does A(r).

Now it remains to show that \lim_{r\rightarrow 1} A(r) = s. We may assume the \sigma_n and s are real, by treating real and imaginary parts separately. Let \epsilon > 0; then there exists N such that for each n \geq N, |\sigma_n - s| \leq \epsilon. We can split A(r) into two sums:

    \[ (1-r)^2 \sum_{n=1}^{N} n \sigma_n r^n + (1-r)^2 \sum_{n>N}^{\infty} n \sigma_n r^n. \]

Since one can exchange limit with finite sums, \lim_{r\rightarrow 1}(1-r)^2 \sum_{n=1}^{N} n \sigma_n r^n is 0. So

    \begin{eqnarray*} \limsup_{r\rightarrow 1} A(r) & \leq & (s + \epsilon) \lim_{r\rightarrow 1} (1-r)^2 \sum_{n>N}^{\infty} n r^n \\  & = & (s + \epsilon) \lim_{r\rightarrow 1} (1-r)^2 r \sum_{n>N}^{\infty} n r^{n-1} \\  & = & (s + \epsilon) \lim_{r\rightarrow 1} (1-r)^2 r \, \frac{d}{dr} \left( \sum_{n>N}^{\infty} r^n \right) \\  & = & (s + \epsilon) \lim_{r\rightarrow 1} (1-r)^2 r \, \frac{d}{dr} \left( \frac{r^{N+1}}{1-r} \right) \\  & = & (s + \epsilon) \lim_{r\rightarrow 1} r \left( (N+1) r^N (1-r) + r^{N+1} \right) \\  & = & (s + \epsilon) \cdot 1. \end{eqnarray*}

A similar argument shows that \liminf_{r\rightarrow 1} A(r) \geq s - \epsilon. Since \epsilon was arbitrary, \lim_{r\rightarrow 1} A(r) = s, and hence \{a_n\} is Abel-summable to s.

A Tauberian-type theorem

In the previous section, we showed that the class of summable sequences is a subset of the Cesàro-summable sequences, which in turn is a subset of the Abel-summable sequences. By imposing certain conditions, these containments can be reversed, and statements of this type are called “Tauberian-type theorems”; below we prove a simple version where the terms of the sequence decay faster than 1/n. First, though, note that the containments are strict, as the following standard examples show.
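Grandi's series is Cesàro-summable but not summable, and the classical example of a sequence that is Abel-summable but not Cesàro-summable is a_n = (-1)^{n+1} n: its Cesàro means oscillate (roughly between 0 and 1/2), yet

    \[ A(r) = \sum_{n=1}^{\infty} (-1)^{n+1} n r^n = \frac{r}{(1+r)^2} \longrightarrow \frac{1}{4} \quad \text{as } r \rightarrow 1. \]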

(Tauberian) Suppose the sequence \{a_n\} satisfies \abs{a_n} = o(1/n).
(a) If \{a_n\} is Cesàro-summable to s, then \{a_n\} is also summable to s.
(b) If \{a_n\} is Abel-summable to s, then \{a_n\} is also summable to s.
(a) Fix \eps > 0. Let \sigma_N and s_N be the Cesàro mean and partial sum of \{a_n\}, respectively. By assumption, \abs{\sigma_N -s} \leq \eps / 2 when N is sufficiently large, and by the triangle inequality, \abs{s_N - s} is at most \abs{s_N - \sigma_N} + \eps / 2, so it suffices to prove that \abs{s_N - \sigma_N} \leq \eps / 2 for all sufficiently large N.

Recall that in the previous section, we proved that

    \[ \sigma_N = \sum_{n=1}^{N}  \left(\frac{N - n + 1}{N}\right) a_n. \]

Then we have

    \begin{eqnarray*} \abs{s_N - \sigma_N} & = & \abs{\sum_{n=1}^N a_n - \sum_{n=1}^N \frac{N - n + 1}{N} a_n} \\ & \leq & \sum_{n=1}^N \frac{n - 1}{N} \abs{a_n}. \end{eqnarray*}

By assumption, there is some N_0 such that for every n \geq N_0, (n-1)\abs{a_n} \leq \eps/4, implying \sum_{n=N_0}^N \frac{n - 1}{N} \abs{a_n} \leq \eps / 4. Then when N is sufficiently large, \sum_{n<N_0} \frac{n - 1}{N} \abs{a_n} \leq \eps / 4, proving that \sum_{n=1}^N \frac{n - 1}{N} \abs{a_n} \leq \eps / 2 as desired.

(b) Recall that A(r) = \sum_{n=1}^{\infty} r^n a_n, and define r_N := 1- 1/N. Since r_N \rightarrow 1, by assumption \abs{A(r_N) - s} \leq \eps/4 when N is sufficiently large, and by the triangle inequality, \abs{s_N - s} is at most \abs{s_N - A(r_N)} + \eps/4, so it suffices to prove that \abs{s_N - A(r_N)} \leq 3\eps/4. By the triangle inequality again, we have

    \begin{eqnarray*} \abs{s_N - A(r_N)} & = & \abs{\sum_{n=1}^N \paren{1 - r_N^n} a_n - \sum_{n>N}^{\infty} r_N^n a_n} \\ & \leq & \sum_{n=1}^N \paren{1 - r_N^n} \abs{a_n} + \sum_{n>N}^{\infty} r_N^n \abs{a_n}. \end{eqnarray*}

By assumption, there is some N_0 such that for every n \geq N_0, n\abs{a_n} \leq \eps/4. We break the first of the two sums above into the pieces n < N_0 and n \geq N_0, and for the later terms, we have

    \begin{eqnarray*} \sum_{n=N_0}^N \paren{1 - r_N^n} \abs{a_n} & = & \sum_{n=N_0}^N \paren{1-r_N}\paren{1 + r_N + \ldots + r_N^{n-1}} \abs{a_n} \\ & \leq & \sum_{n=N_0}^N \frac{n|a_n|}{N} \\ & \leq & \eps/4. \end{eqnarray*}

For the initial terms, we have

    \begin{eqnarray*} \sum_{n<N_0} \paren{1 - r_N^n} \abs{a_n} & \leq & \paren{1-r_N^{N_0}} \sum_{n<N_0}\abs{a_n} \\ & = & \paren{1-\paren{1 - \frac{1}{N}}^{N_0}} \sum_{n<N_0}\abs{a_n} \\ & \leq & \frac{N_0}{N} \sum_{n<N_0}\abs{a_n}, \end{eqnarray*}

where the last inequality is due to Bernoulli’s inequality; this is bounded above by \eps/4 when N is sufficiently large. Finally, for the tail, since n\abs{a_n} \leq \eps/4 for every n > N \geq N_0, we have \sum_{n>N}^{\infty} r_N^n \abs{a_n} \leq \frac{\eps}{4} \sum_{n>N}^{\infty} \frac{r_N^n}{n} \leq \frac{\eps}{4N} \cdot \frac{r_N^{N+1}}{1-r_N} = \frac{\eps}{4} r_N^{N+1} \leq \eps/4. Together we have \abs{s_N - A(r_N)} \leq 3\eps/4 as desired.
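As a numerical sanity check of part (a), here is a minimal Python sketch (the example sequence is ad hoc, chosen so that \abs{a_n} = o(1/n)) showing that the gap between the partial sums and the Cesàro means shrinks as N grows:

    # If n*a_n -> 0, then |s_N - sigma_N| -> 0, so Cesàro convergence
    # upgrades to ordinary convergence (part (a) of the theorem).
    import math
    from itertools import accumulate

    N = 100_000
    a = [(-1) ** n / ((n + 1) * math.log(n + 2)) for n in range(N)]  # |a_n| = o(1/n)
    s = list(accumulate(a))                                          # partial sums s_n
    sigma_N = sum(s) / N                                             # Cesàro mean sigma_N
    print(abs(s[-1] - sigma_N))                                      # small, shrinks with N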

Fourier series

We now come back to the Fourier setting. First recall that

For a periodic, integrable function f, we define its N-th partial Fourier sum to be S_N(f)(x):= \sum_{|n| \leq N -1} \hat{f}(n) e_n(x), with e_n(x):=e^{inx} and \hat{f}(n)=\frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)e^{-inx} dx.
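As a concrete illustration of the decay rates discussed earlier, for the sawtooth function f(x) = x (extended periodically) a short integration by parts gives

    \[ \hat{f}(n) = \frac{1}{2\pi} \int_{-\pi}^{\pi} x e^{-inx} dx = \frac{i(-1)^n}{n} \quad (n \neq 0), \qquad \hat{f}(0) = 0, \]

so its coefficients decay like 1/\abs{n} but no faster, while the continuous function f(x) = \abs{x} has coefficients of size O(1/n^2) = o(1/n).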

We now introduce the Dirichlet and Fejér kernels which, when convolved with f, produce exactly the partial sums and the Cesàro means of its Fourier series, letting us invoke the Tauberian theorem. Here convolution is defined by (g * f)(x) := \frac{1}{2\pi} \int_{-\pi}^{\pi} g(y) f(x-y) dy.

D_N(x):= \sum_{|n| \leq N-1} e_n(x) is the N-th Dirichlet kernel, and F_N(x):= \frac{1}{N} \sum_{n=1}^{N} D_n(x) is the N-th Fejér kernel.
(Tauberian theorem restated) Let f be a periodic function with \abs{\hat{f}(n)}=o(1/n). Then for any x \in [-\pi, \pi], if (F_N*f)(x)\rightarrow s for some s as N\rightarrow \infty, then S_N(f)(x) \rightarrow s as N\rightarrow \infty.
Fix x. Let a_0 := \hat{f}(0), and for each n \geq 1 let a_n := \hat{f}(n) e_n(x) + \hat{f}(-n) e_{-n}(x). Note that by assumption \abs{a_n} = o(1/n). By the Convolution Theorem, the N-th partial sum s_N := a_0 + a_1 + \cdots + a_{N-1} of \{a_n\} is simply

    \[ S_N(f)(x) = (D_N * f)(x), \]

and similarly, the N-th Cesàro mean of \{a_n\} is

    \begin{eqnarray*} \frac{1}{N} \sum_{n=1}^{N} \sum_{i=0}^{n-1} a_i & = & \frac{1}{N} \sum_{n=1}^{N} (D_n * f)(x) \\ & = & (F_N * f)(x), \end{eqnarray*}

where the second equality follows from the definition of F_N and the linearity of convolution. Thus, by the Tauberian theorem, if (F_N*f)(x) converges, then S_N(f)(x) converges to the same value as well.
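For completeness, the first identity can also be checked directly from the definitions (after the change of variables y \mapsto x - y in the convolution):

    \begin{eqnarray*} (D_N * f)(x) & = & \frac{1}{2\pi} \int_{-\pi}^{\pi} f(y) D_N(x - y) dy \\ & = & \sum_{|n| \leq N-1} e^{inx} \cdot \frac{1}{2\pi} \int_{-\pi}^{\pi} f(y) e^{-iny} dy \\ & = & \sum_{|n| \leq N-1} \hat{f}(n) e_n(x) = S_N(f)(x). \end{eqnarray*}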

To prove the main theorem, it now suffices to show that (F_N*f)(x) approaches f(x). We make one last definition:

Let \{K_N\}_{N=0}^{\infty} be a sequence of periodic, integrable functions. We say that \{K_N\}_{N=0}^{\infty} is a family of good kernels if the following three conditions all hold:
(a) for each N\geq 0, \frac{1}{2\pi} \int_{-\pi}^{\pi} K_N = 1,
(b) for each N\geq 0, \int_{-\pi}^{\pi} |K_N| = O(1), and
(c) for any \delta > 0, \int_{\delta \leq |x| \leq \pi} |K_N| \rightarrow 0 as N \rightarrow \infty.

The idea is that a family of good kernels essentially approximates the Dirac delta function, which concentrates a unit mass at 0 and vanishes everywhere else.

The Fejér kernels \{F_N\}_{N=1}^{\infty} form a family of good kernels.

We will skip the proof of this fact, as it is a straightforward calculation once one establishes the trigonometric identity F_N(x) = \frac{\sin^2 (Nx/2)}{N \sin^2 (x/2)}. (The Dirichlet kernels, however, do not form a family of good kernels; their L^1-norms grow like \log N. If they did, the Fourier series of every integrable function would converge at each point of continuity, which is false.) Intuitively, since (F_N*f)(x) is a weighted average of f with the heaviest weights near x (by the defining properties of a family of good kernels), we can expect this value to be close to f(x), as long as f does not change its value too much near x. Now we formalize and prove this intuition, which will finish the proof of our main theorem.

Suppose \{K_N\}_{N=0}^{\infty} is a family of good kernels. Let f be a periodic, integrable function that is continuous at x. Then (K_N * f)(x) \rightarrow f(x) as N\rightarrow \infty.
We first write

    \begin{eqnarray*} \abs{(K_N*f)(x) - f(x)} & = & \abs{\frac{1}{2\pi} \int_{-\pi}^{\pi} K_N(y) f(x-y)dy - f(x)} \\ & = & \frac{1}{2\pi} \abs{\int_{-\pi}^{\pi} K_N(y) f(x-y)dy - \int_{-\pi}^{\pi} K_N(y) f(x)dy} \\  & \leq & \frac{1}{2\pi} \int_{-\pi}^{\pi} \abs{K_N(y)} \abs{f(x-y) - f(x)} dy, \end{eqnarray*}

where the three steps follow, respectively, from the definition of convolution, Condition (a) of a family of good kernels, and the triangle inequality.

The idea is that, for large N, the total mass of \abs{K_N(y)} away from 0 is small, while \abs{f(x-y) - f(x)} is small when y is close to 0. Formally, let \eps > 0. Since f is continuous at x, there is some \delta > 0 such that \abs{f(x-y) - f(x)} \leq \eps whenever |y| \leq \delta. Hence

    \[ \int_{|y| \leq \delta} \abs{K_N(y)} \abs{f(x-y) - f(x)} dy \leq \eps \int_{-\pi}^{\pi} \abs{K_N(y)} dy \leq \eps M, \]

for some constant M independent of N, by Condition (b) of a family of good kernels.

Now for the other values of y, note that since f is integrable (in the Riemann sense), it is bounded by some B. Then

    \[ \int_{\delta \leq |y| \leq \pi} \abs{K_N(y)} \abs{f(x-y) - f(x)} dy \leq  2B \int_{\delta \leq |y| \leq \pi} \abs{K_N(y)} dy, \]

which tends to 0 as N \rightarrow \infty by Condition (c). Combining the two estimates gives \limsup_{N \rightarrow \infty} \abs{(K_N*f)(x) - f(x)} \leq \eps M / (2\pi), and since \eps was arbitrary, (K_N*f)(x) \rightarrow f(x), finishing the proof.
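To see the whole chain in action numerically, here is a minimal Python sketch (the test function and all names are ad hoc, and the Fourier coefficients are approximated by a crude Riemann sum) for f(x) = \abs{x}, whose coefficients decay like 1/n^2 = o(1/n): both the Fejér means and the ordinary partial sums approach f(x) at a point of continuity.

    # Fejér (Cesàro) means vs. partial sums of the Fourier series of f(x) = |x| on [-pi, pi]
    import numpy as np

    xs = np.linspace(-np.pi, np.pi, 4001)
    dx = xs[1] - xs[0]
    f = np.abs(xs)

    def fhat(n):
        # (1/2pi) * integral of f(x) e^{-inx} dx, approximated by a Riemann sum
        return np.sum(f * np.exp(-1j * n * xs)) * dx / (2 * np.pi)

    coeffs = {n: fhat(n) for n in range(-60, 61)}

    def partial_sum(N, x):
        # S_N(f)(x): sum over |n| <= N - 1, following the convention above
        return sum(coeffs[n] * np.exp(1j * n * x) for n in range(-(N - 1), N)).real

    def fejer_mean(N, x):
        # (F_N * f)(x): the average of the partial sums S_1, ..., S_N
        return sum(partial_sum(n, x) for n in range(1, N + 1)) / N

    x0 = 1.0   # a point of continuity of f
    print(partial_sum(50, x0), fejer_mean(50, x0), abs(x0))   # both are close to 1.0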

In the previous section, we showed that the Abel means can play a role similar to the Cesàro means. Analogously, one can define the Poisson kernels P_r(\theta) = \sum_{n=-\infty}^{\infty} r^{|n|} e^{in\theta}, show that they form a family of good kernels as r \rightarrow 1, and prove the main theorem through P_r instead.