Category Archives: Fourier analysis

Dirichlet’s Theorem for arithmetic progressions with difference 4

From antiquity, Euclid’s Theorem asserts that the number of primes is infinite. Since all primes except 2 are odd, a rephrasing of Euclid’s theorem is that the set of odd integers \{2k+1\}_{k\in\mathbb{Z}} contains an infinite number of primes. A natural generalization, proved by Dirichlet, is that every arithmetic progression of the form \{qk+\ell\}_{k\in\mathbb{Z}} contains an infinite number of primes, provided that q and \ell are relatively prime.

(Dirichlet’s Theorem) If (q,\ell)=1, then the arithmetic progression \{qk+\ell\}_{k\in\mathbb{Z}} contains infinitely many primes, where (q, \ell) denotes the greatest common divisor of q and \ell.

The proof is one of the first applications of Fourier analysis on finite abelian groups. In this post, we prove Dirichlet’s theorem when q=4, i.e., both of the arithmetic progressions \{4k+1\}_{k\in\mathbb{Z}} and \{4k+3\}_{k\in\mathbb{Z}} contain an infinite number of primes. The restriction to q=4 highlights the main structure of Dirichlet’s proof without requiring as much background from number theory.

Before jumping directly into the proof, let us first discuss some number-theoretic background and the high-level strategy. First recall Euclid’s argument. For the sake of contradiction, suppose there are exactly N primes p_{1},\ldots,p_{N} of the form 2k+1. Then the number M:=2\prod_{i=1}^{N}p_{i}+1 is odd, larger than every p_{i}, and hence cannot be prime. On the other hand, by the Fundamental Theorem of Arithmetic, M is a product of primes, and since M is odd, it has an odd prime factor, which must be some p_{i}. So p_{i} divides both M and 2\prod_{i=1}^{N}p_{i}, implying that p_{i} divides 1=M-2\prod_{i=1}^{N}p_{i}, a contradiction. One can easily verify that this argument can be adapted to \{4k+3\}_{k\in\mathbb{Z}} by taking M=4\prod_{i=1}^{N}p_{i}+3. However, the argument breaks down when applied to the arithmetic progression \{4k+1\}_{k\in\mathbb{Z}}. The reason is that the number 4\prod_{i=1}^{N}p_{i}+1 is no longer guaranteed to have a prime divisor of the form 4k+1.

Instead, Euler’s analytic proof of Euclid’s Theorem provides the proper framework for extending the infinitude of primes to arithmetic progressions: the zeta function \zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^{s}} can be written as a product of geometric series over the primes:

(Euler-Dirichlet Product Formula) For every s>1,

    \[ \sum_{n=1}^{\infty}\frac{1}{n^{s}}=\prod_{p}\frac{1}{1-\frac{1}{p^{s}}}. \]

Furthermore, if f:\mathbb{Z}\rightarrow \mathbb{R} is bounded and completely multiplicative, i.e., f(xy)=f(x)f(y) for all x and y, then

    \[ \sum_{n=1}^{\infty}\frac{f(n)}{n^{s}}=\prod_{p}\frac{1}{1-\frac{f(p)}{p^{s}}}. \]

(A note on notation: p always denotes a prime, and \sum_{p} and \prod_{p} range over all primes.) The identity can be viewed as an analytic version of the Fundamental Theorem of Arithmetic: every positive integer is a product of prime powers, and the expression 1/(1-p^{-s}) is the sum of a geometric series. Since \zeta(1), the harmonic series, diverges, the Product Formula also implies that the number of primes is infinite. With an additional calculation, Euler proved the stronger form of Euclid’s Theorem:

    \[ \sum_{p}\frac{1}{p^{s}}\rightarrow\infty \text{ as } s\rightarrow1^{+}. \]
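The Euler-Dirichlet Product Formula can be checked numerically; here is a small Python sketch (our own illustration, with arbitrary cutoffs) comparing the partial zeta sum with the partial Euler product at s=2:

```python
# A numerical sanity check of the Euler product (our own illustration, not part
# of the original argument): both sides should approximate zeta(2) = pi^2/6.
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def zeta_partial(s, n_max):
    """Partial sum of the zeta series."""
    return sum(1.0 / n ** s for n in range(1, n_max + 1))

def euler_product(s, p_max):
    """Partial Euler product over primes up to p_max."""
    prod = 1.0
    for p in primes_up_to(p_max):
        prod *= 1.0 / (1.0 - p ** (-s))
    return prod

print(zeta_partial(2, 10**6))   # ~ 1.64493...
print(euler_product(2, 10**6))  # ~ 1.64493...
```

Both quantities approach \zeta(2)=\pi^{2}/6, as the product formula predicts.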

Now to prove Dirichlet’s Theorem, we will show that the sum \sum_{p}\frac{D_{\ell}(p)}{p^{s}}\rightarrow\infty as s\rightarrow1^{+}, where D_{\ell} is the indicator function on \mathbb{Z} of being congruent to \ell \Mod q. By Fourier analysis, \sum_{p}\frac{D_{\ell}(p)}{p^{s}} can be decomposed into two sums. Roughly, the first corresponds to the zeroth Fourier coefficient of D_{\ell} and evaluates to \sum_{p}\frac{1}{p^{s}}, which diverges as s\rightarrow1^{+}, and the second sum corresponds to the non-zero Fourier coefficients and is always bounded. In our writeup, the advantage of restricting to q=4 is that the estimate of the second sum becomes significantly simpler, without further tools from number theory.

(Proof of Dirichlet’s Theorem restricted to q=4.) Let G:=\mathbb{Z}_{4}^{*} be the multiplicative group of integers modulo 4, and \delta_{\ell}:G\rightarrow\{0,1\} be the indicator function such that \delta_{\ell}(x)=1 if x\equiv\ell\Mod 4 and 0 otherwise. Then by Fourier expansion,
for every n\in G,

    \begin{eqnarray*} \delta_{\ell}(n) & = & \sum_{e\in\hat{G}}\hat{\delta_{\ell}}(e)e(n)\\  & = & \frac{1}{|\hat{G}|}\sum_{e\in\hat{G}}\overline{e(\ell)}e(n), \end{eqnarray*}

where e ranges over all characters in the dual group \hat{G} of G. Since G is a finite abelian group, it is isomorphic to its dual, and with |G|=2, we can conclude that \hat{G} contains only two characters: the trivial character e_{0} that is identically 1, and the character e_{1} where e_{1}(x)=1 if
x\equiv1\Mod 4 and -1 if x\equiv3\Mod 4. Thus, we can simplify and rewrite \delta_{\ell} as

    \begin{eqnarray*} \delta_{\ell}(n) & = & \frac{1}{2}\left(\overline{e_{0}(\ell)}e_{0}(n)+\overline{e_{1}(\ell)}e_{1}(n)\right)\\ & = & \frac{1}{2}\left(e_{0}(n)+e_{1}(\ell)e_{1}(n)\right). \end{eqnarray*}

(As a sanity check, one can easily verify via direct computation, without Fourier analysis, that the identities \delta_{1}(n)=\frac{1+e_{1}(n)}{2} and \delta_{3}(n)=\frac{1-e_{1}(n)}{2} hold.) For the rest of this proof, we will rely on the fact that the characters e_{0} and e_{1} are real-valued.

To prove the theorem, we have to sum over all integers instead of just the group \mathbb{Z}_{4}^{*}. To this end, for every character e\in\hat{G}, we define its Dirichlet character \chi to be the extension of e to all of \mathbb{Z} as follows: \chi(n)=e(n\Mod 4) if (n,4)=1 and 0 otherwise. Let D_{\ell}:\mathbb{Z}\rightarrow\{0,1\} be the indicator function of being congruent to \ell \Mod 4. Then it is easy to see that

    \[ D_{\ell}(n)=\frac{1}{2}\left(\chi_{0}(n)+\chi_{1}(\ell)\chi_{1}(n)\right), \]

where \chi_{0},\chi_{1} are the Dirichlet characters of e_{0},e_{1}, respectively. Since we can express the indicator function D_{\ell} in terms of Dirichlet characters, we have

    \begin{eqnarray*} \sum_{p\equiv\ell\Mod 4}\frac{1}{p^{s}} & = & \sum_{p}\frac{D_{\ell}(p)}{p^{s}}\\ & = & \frac{1}{2}\sum_{p}\frac{\chi_{0}(p)}{p^{s}}+\frac{\chi_{1}(\ell)}{2}\sum_{p}\frac{\chi_{1}(p)}{p^{s}}. \end{eqnarray*}
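The character decomposition above is easy to check by brute force; here is a short Python sketch (the function names are ours) verifying D_{\ell}(n)=\frac{1}{2}(\chi_{0}(n)+\chi_{1}(\ell)\chi_{1}(n)) for small n:

```python
# A quick check (ours, not part of the proof) of the decomposition
# D_l(n) = (chi0(n) + chi1(l) * chi1(n)) / 2 for small n.
def chi0(n):
    return 1 if n % 2 == 1 else 0  # trivial Dirichlet character mod 4

def chi1(n):
    if n % 4 == 1:
        return 1
    if n % 4 == 3:
        return -1
    return 0  # zero when (n, 4) > 1

def D(l, n):
    return 1 if n % 4 == l else 0  # indicator of n = l (mod 4)

for l in (1, 3):
    for n in range(1, 200):
        assert D(l, n) == (chi0(n) + chi1(l) * chi1(n)) / 2
print("decomposition verified for n < 200")
```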

It suffices to prove that \sum_{p\equiv\ell\Mod 4}\frac{1}{p^{s}} \rightarrow \infty as s\rightarrow1^{+}. In particular, we will show that

  • \sum_{p}\frac{\chi_{0}(p)}{p^{s}} \rightarrow \infty as s\rightarrow1^{+}, which coincides with Euler’s argument since \chi_{0}(p)=1 for all p > 2, and
  • \sum_{p}\frac{\chi_{1}(p)}{p^{s}}=O(1).

To estimate these sums, we prove the following:

Let \chi be a Dirichlet character for G. Then \sum_{p}\frac{\chi(p)}{p^{s}}=\log(L(s,\chi))+O(1), where L(s,\chi)=\sum_{n=1}^{\infty}\frac{\chi(n)}{n^{s}}.
(of Proposition) Since \chi is multiplicative, from the Euler-Dirichlet Product Formula,

    \[ L(s,\chi)=\prod_{p}\frac{1}{1-\frac{\chi(p)}{p^{s}}}. \]

Since \chi is real-valued, taking log of both sides, we have

    \begin{eqnarray*} \log(L(s,\chi)) & = & \sum_{p}-\log\left(1-\frac{\chi(p)}{p^{s}}\right)\\ & = & \sum_{p}\frac{\chi(p)}{p^{s}}+O\left(\frac{\chi(p)^{2}}{2p^{2s}}\right)\\ & = & \sum_{p}\frac{\chi(p)}{p^{s}}+O(1), \end{eqnarray*}

where the second line follows from the power series expansion -\log(1-x)=\sum_{n=1}^{\infty}x^{n}/n=x+O(x^{2}) for |x|<1, and the third from the fact that \sum_{n}\frac{1}{n^{2}} converges.

Now we estimate the two sums using the above proposition. For the trivial Dirichlet character \chi_{0}, we have

    \begin{eqnarray*} \sum_{p}\frac{\chi_{0}(p)}{p^{s}} & = & \log\left(\sum_{n=1}^{\infty}\frac{\chi_{0}(n)}{n^{s}}\right)+O(1)\\ & = & \log\left((1-\frac{1}{2^{s}})\sum_{n=1}^{\infty}\frac{1}{n^{s}}\right)+O(1)\\ & = & \log\left(\sum_{n=1}^{\infty}\frac{1}{n^{s}}\right)+O(1), \end{eqnarray*}

which diverges as s\rightarrow1^{+} since \sum_{n}\frac{1}{n^{s}} diverges as s\rightarrow1^{+}.

For the non-trivial character \chi_{1}, L(s,\chi_{1})=\sum_{n}\frac{\chi_{1}(n)}{n^{s}} is the alternating series 1-\frac{1}{3^{s}}+\frac{1}{5^{s}}-\frac{1}{7^{s}}+\dots, whose terms decrease in absolute value (at s=1 it converges to \pi/4 by Leibniz’s formula). By the alternating series estimate, 1-\frac{1}{3^{s}}\leq L(s,\chi_{1})\leq1, and hence \frac{2}{3}\leq L(s,\chi_{1})\leq1 for every s>1. Thus,

    \[ \sum_{p}\frac{\chi_{1}(p)}{p^{s}} = \log(L(s,\chi_{1}))+O(1) = O(1), \]

completing the proof.
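As an empirical illustration of the theorem just proved (our own experiment, with an arbitrary cutoff), the primes up to 10^6 split almost evenly between the two progressions:

```python
# Empirical illustration (ours): primes up to 10^6 split nearly evenly
# between the progressions 4k+1 and 4k+3.
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

ps = primes_up_to(10**6)
count_1 = sum(1 for p in ps if p % 4 == 1)
count_3 = sum(1 for p in ps if p % 4 == 3)
print(count_1, count_3)  # both large and nearly equal
```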

Most steps in the preceding proof can be easily extended to an arbitrary q. The Euler-Dirichlet Product Formula works for a multiplicative function taking values in \mathbb{C}. The Fourier expansion of D_{\ell} works as before. The Proposition \sum_{p}\frac{\chi(p)}{p^{s}}=\log(L(s,\chi))+O(1) still holds, but extra care has to be taken since the L-function is now complex-valued. The sum \sum_{p}\frac{\chi_{0}(p)}{p^{s}} diverges, but for a non-trivial Dirichlet character \chi, the fact that the sum \sum_{p}\frac{\chi(p)}{p^{s}} is bounded is significantly more involved. See for instance Apostol or Stein-Shakarchi for further details.

We finish by observing that the proof above has a quantitative form.

(Quantitative version of Dirichlet’s Theorem) For every s > 1,

    \[ \sum_{p\equiv\ell\Mod 4}\frac{1}{p^{s}}=\frac{1}{2}\log\frac{1}{s-1}+O(1). \]

In general, for an arbitrary q, the expression \frac{1}{2}\log\frac{1}{s-1} is replaced by \frac{1}{|\mathbb{Z}_q^*|}\log\frac{1}{s-1}, with |\mathbb{Z}_q^*|=\phi(q), where \phi is the Euler totient function.
The proof proceeds the same as before, where we showed that

    \[ \sum_{p\equiv\ell\Mod 4}\frac{1}{p^{s}}=\frac{1}{2}\sum_{p}\frac{\chi_{0}(p)}{p^{s}}+\frac{\chi_{1}(\ell)}{2}\sum_{p}\frac{\chi_{1}(p)}{p^{s}}, \]

with \sum_{p}\frac{\chi_{0}(p)}{p^{s}}=\log\left(\sum_{n=1}^{\infty}\frac{1}{n^{s}}\right)+O(1) and \sum_{p}\frac{\chi_{1}(p)}{p^{s}}=O(1). Previously, we simply noted that \sum_{n=1}^{\infty}\frac{1}{n^{s}} diverges as s\rightarrow1^{+}. To finish the proof, it suffices to note that

    \begin{eqnarray*} \sum_{n=1}^{\infty}\frac{1}{n^{s}} & = & 1+\sum_{n=1}^{\infty}\frac{1}{(n+1)^{s}}\\ & \leq & \int_{1}^{\infty}\frac{dx}{x^{s}}+O(1)\\ & = & \frac{1}{s-1}+O(1). \end{eqnarray*}

Combined with the matching lower bound \sum_{n=1}^{\infty}\frac{1}{n^{s}}\geq\int_{1}^{\infty}\frac{dx}{x^{s}}=\frac{1}{s-1}, this yields \log\left(\sum_{n=1}^{\infty}\frac{1}{n^{s}}\right)=\log\frac{1}{s-1}+O(1).

Exponential sum and the equidistribution theorem

Consider the sequence x \Mod 1, 2x \Mod 1, 3x \Mod 1,\ldots, where \Mod 1 gives the fractional part of a real number. The equidistribution theorem states that x is irrational iff for any sub-interval [a, b) of the unit interval, for sufficiently large N, roughly (b-a)N of the numbers x\Mod 1, \ldots, Nx\Mod 1 fall inside [a, b). More precisely,

(Equidistributed sequence) A sequence \{x_n\}_{n=1}^{\infty} with x_n \in \mathbb{R} is equidistributed mod 1 if for every 0 \leq a \leq b \leq 1,

    \[ \lim_{N \rightarrow \infty} \frac{\abs{\{1 \leq n \leq N : a \leq x_n \Mod 1 < b \}}}{N}  = b - a. \]

(Equidistribution theorem) The sequence \{nx\}_{n=1}^{\infty} is equidistributed mod 1 iff x is irrational.

Let us try to understand why the sequence behaves differently depending on whether x is rational or not. If x = p/q in lowest terms, then roughly N/q of the first N multiples of x are integers, i.e., 0 \Mod 1. In fact, it is not difficult to see that for each i \in \{0, 1, \ldots, q-1\}, exactly a 1/q fraction of the sequence equals i/q \Mod 1. So equidistribution does not occur when x is rational.
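This contrast is easy to see experimentally; here is a small Python sketch (the interval and sample size are chosen arbitrarily) counting how often the multiples land in a fixed interval:

```python
# A small experiment (ours): the fraction of the first N multiples of x
# (mod 1) landing in the interval [0.05, 0.10).
import math

def fraction_in_interval(x, N, a=0.05, b=0.10):
    return sum(1 for n in range(1, N + 1) if a <= (n * x) % 1 < b) / N

print(fraction_in_interval(math.sqrt(2), 10**5))  # ~ 0.05, the interval length
print(fraction_in_interval(3 / 8, 10**5))         # 0.0: multiples only hit i/8
```

For the rational x = 3/8, the multiples mod 1 only take the values i/8, none of which lie in [0.05, 0.10), so the fraction is exactly 0.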

Now for an integer k\neq 0, consider the exponential function e_k(x):=e^{2\pi i k x}, which has period 1. As discussed above, roughly a 1/q fraction of the sequence \{e_k(nx)\}_{n=1}^{\infty} equals 1, so the exponential sum \sum_{n=1}^{\infty} e_k(nx) contains an infinite number of ones. For example, taking k=1 and q=2, the exponential sum is -1 + 1 - 1 + 1 - \ldots, a variant of Grandi’s series, which diverges. For an arbitrary q > 1, the exponential sum repeatedly cycles through q-th roots of unity, and the same technique showing that Grandi’s series diverges establishes that this exponential sum diverges too. However, the case when x is irrational differs: every nonzero integer multiple of kx is a non-integer, and thus e_k(nx)\neq 1 for any n\neq 0. Writing r=e^{2\pi i kx}, the partial sums \sum_{n=1}^{N} e_k(nx)=\frac{r(r^{N}-1)}{r-1} remain bounded since r\neq1, so the averages \frac{1}{N}\sum_{n=1}^{N} e_k(nx) tend to 0.
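The averaged exponential sums can be computed directly; a Python sketch (our own, with arbitrary parameters) contrasting an irrational and a rational x:

```python
# Averaged exponential sums for an irrational and a rational x (our own
# illustration; parameters are arbitrary).
import cmath

def weyl_average(x, k, N):
    return sum(cmath.exp(2j * cmath.pi * k * n * x) for n in range(1, N + 1)) / N

print(abs(weyl_average(2 ** 0.5, 1, 10**4)))  # small: averages tend to 0
print(abs(weyl_average(0.5, 2, 10**4)))       # stays near 1: e^{2*pi*i*2*n/2} = 1
```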

In fact, the above intuition can be summarized into the following criterion due to Weyl:

(Weyl’s criterion) A sequence \{x_n\}_{n=1}^{\infty} is equidistributed mod 1 iff for all integers k\neq 0,

    \[ \lim_{N\rightarrow \infty} \frac{1}{N} \sum_{n=1}^N e^{2\pi i kx_n} = 0. \]

From the above discussion, Weyl’s criterion immediately implies the equidistribution theorem.

(of equidistribution theorem) Suppose x is rational, i.e., x=p/q for some integers p and q. Without loss of generality, we assume q > 1. Then for every integer n, by the division algorithm, nx \Mod 1 lies in \{0, 1/q, \ldots, (q-1)/q\}. So the sequence \{nx\}_{n=1}^{\infty} is not equidistributed, e.g., no multiples of x fall inside [1/4q, 1/2q).

Now suppose x is irrational, and fix an integer k \neq 0. Since r:= e^{2\pi i kx}\neq 1, we have

    \[ \frac{1}{N} \sum_{n=1}^N e^{2\pi i knx} = \frac{1}{N} \cdot \frac{r(r^{N} - 1)}{r - 1}, \]

which tends to 0 as N \rightarrow \infty.

Before proving Weyl’s criterion, it is instructive to state the notion of equidistribution analytically. For 0\leq a \leq b \leq 1, let \chi_{[a,b)}(x)=1 iff x \Mod 1 \in [a, b). Note that \chi_{[a,b)} is a periodic function on all of \mathbb{R}.

(Weyl’s criterion restated) Let \{x_n\}_{n=1}^{\infty} be a sequence in \mathbb{R}. The following two statements are equivalent:
1) For 0\leq a < b \leq 1,

    \[ \lim_{N\rightarrow \infty} \frac{1}{N} \sum_{n=1}^N \chi_{[a, b)}(x_n) = \int_0^1 \chi_{[a, b)}. \]

2) For all integers k\neq 0,

    \[ \lim_{N\rightarrow \infty} \frac{1}{N} \sum_{n=1}^N e^{2\pi i kx_n} = \int_0^1 e^{2\pi i kx}\, dx. \]

Written in this form, Weyl’s criterion asserts that if the averages of every interval function along the sequence converge to its integral, then the averages of every exponential function converge to its integral as well, and vice versa. The reason is that the space of interval functions and the space of exponential functions can approximate one another. More precisely, an exponential function is clearly periodic and continuous (and thus uniformly continuous), and every uniformly continuous function can be approximated by a finite sum of interval functions. For the other direction, a periodic interval function can be approximated by a periodic, continuous function, which by Fejér’s Theorem can in turn be approximated by a finite sum of exponential functions. Now we have all the machinery to write down a proof.

(of Weyl’s criterion) For the forward direction (1) => (2), fix k\neq 0 and \epsilon > 0. Since the exponential function e_k(x):=e^{2\pi i kx} is continuous on the closed interval [0, 1], it is also uniformly continuous, i.e., there exists some \delta > 0 such that for every x, x' with |x - x'| < \delta, we have |e_k(x) - e_k(x')| < \epsilon. Shrinking \delta if necessary, we may assume \delta = 1/M for some integer M. Let f_S be the step function with f_S(x) = e_k(m \delta) for x \in [m\delta, (m+1)\delta). By construction, f_S is a finite sum of interval functions with \norm{f_S - e_k}_{\infty} < \epsilon.

By linearity, we have

    \[ \lim_{N\rightarrow \infty}\frac{1}{N} \sum_{n=1}^N f_S(x_n) = \int_0^1 f_S. \]

By taking N sufficiently large and repeatedly applying the triangle inequality, we have

    \begin{eqnarray*} \abs{\frac{1}{N} \sum_{n=1}^N e_k(x_n) - \int_0^1 e_k} & \leq & \abs{\frac{1}{N} \sum_{n=1}^N e_k(x_n) - \frac{1}{N} \sum_{n=1}^N f_S(x_n)} + \\ & & \abs{\frac{1}{N} \sum_{n=1}^N f_S(x_n) - \int_0^1 e_k} \\ & \leq & \epsilon + \abs{\frac{1}{N} \sum_{n=1}^N f_S(x_n) - \int_0^1 f_S} + \abs{\int_0^1 f_S - \int_0^1 e_k} \\ & \leq & 3\epsilon, \end{eqnarray*}

implying that \frac{1}{N} \sum_{n=1}^N e_k(x_n) converges to \int_0^1 e_k as N\rightarrow \infty.

Now we consider the reverse direction (2) => (1). Fix 0\leq a < b \leq 1 and let \epsilon > 0. Define

    \[ f^+(x) =  \begin{cases} 1 & \text{if } x \in [a, b) \\ \frac{1}{\epsilon} (x - (a - \epsilon)) & \text{if } x \in [a - \epsilon, a) \\ -\frac{1}{\epsilon} (x - b) + 1 & \text{if } x \in [b, b + \epsilon) \\ 0 & \text{everywhere else.} \end{cases} \]

Note that f^+ is continuous, periodic, and dominates \chi_{[a, b)}. By Fejér’s Theorem, there exists a trigonometric polynomial P(x) = \sum_{|k| \leq d} p_k e^{2\pi i kx} such that \norm{f^+ - P}_{\infty} < \epsilon. By linearity (using (2) for each k \neq 0 and noting that the k=0 term is constant), we have

    \[ \lim_{N\rightarrow \infty}\frac{1}{N} \sum_{n=1}^N P(x_n) = \int_0^1 P. \]

By taking N sufficiently large and repeatedly applying the triangle inequality, we have

    \begin{eqnarray*} \abs{\frac{1}{N}\sum_{n=1}^N f^+(x_n) - \int_0^1 f^+} & \leq & \abs{\frac{1}{N}\sum_{n=1}^N f^+(x_n) - \frac{1}{N}\sum_{n=1}^N P(x_n)} + \abs{\frac{1}{N}\sum_{n=1}^N P(x_n) - \int_0^1 f^+} \\ & \leq & \epsilon + \abs{\frac{1}{N}\sum_{n=1}^N P(x_n) - \int_0^1 P} + \abs{\int_0^1 P - \int_0^1 f^+} \\ & \leq & 3\epsilon, \end{eqnarray*}

implying that

    \[ \lim_{N\rightarrow \infty}\frac{1}{N} \sum_{n=1}^N f^+(x_n) = \int_0^1 f^+. \]

Since f^+ dominates \chi_{[a, b)}, we have

    \begin{eqnarray*} \limsup_{N \rightarrow \infty}  \frac{1}{N} \sum_{n=1}^N \chi_{[a,b)}(x_n) & \leq & \limsup_{N \rightarrow \infty}  \frac{1}{N} \sum_{n=1}^N f^+(x_n) \\ & = & \int_0^1 f^+ \\ & = & \int_0^1 \chi_{[a, b)} + \epsilon. \end{eqnarray*}

One can similarly define a periodic, continuous function f^- that approximates \chi_{[a, b)} from below, showing that \liminf_{N \rightarrow \infty}  \frac{1}{N} \sum_{n=1}^N \chi_{[a,b)}(x_n) is bounded below by \int_0^1 \chi_{[a, b)} - \epsilon. Since \epsilon is arbitrary, together with the preceding argument this shows that the limit of \frac{1}{N} \sum_{n=1}^N \chi_{[a,b)}(x_n) exists and equals \int_0^1 \chi_{[a, b)}.

Summability, Tauberian theorems, and Fourier series

Consider a periodic function f. Its Fourier series \sum_{n=-\infty}^{\infty} \hat{f}(n) e^{inx} always exists formally, but the question of whether the Fourier series converges to f can be rather subtle. In general, f may differ from its Fourier series: a Fourier coefficient is the evaluation of an integral, so f can be changed at a point without affecting the value of any Fourier coefficient. It may be natural to believe that convergence holds when f is continuous. However, an example constructed by du Bois-Reymond shows that continuity is not sufficient. The most general result in this area is Carleson’s theorem, which shows that the Fourier series of an L^2-function converges almost everywhere.

Here we will prove that the Fourier series of f converges to f in the sense of Cesàro (or Abel), and we will highlight through a Tauberian-type theorem that ordinary convergence holds when f is “smooth”. In particular, the main theorem we will show is that for a periodic function continuous at a point, its Fourier series converges at that point provided its Fourier coefficients decay faster than 1/n.

If f:[-\pi, \pi] \rightarrow \mathbb{C} is integrable, \abs{\hat{f}(n)}=o(1/n), and f is continuous at x, then \sum_{|n| \leq N} \hat{f}(n)e^{inx} \rightarrow f(x) as N\rightarrow \infty.
The sufficient condition o(1/n) in the above theorem can be improved to O(1/n), but we won’t prove the stronger statement here.

What types of functions are characterized by decaying Fourier coefficients?
A simple calculation with integration by parts shows that if f is continuously differentiable, then \abs{n\hat{f}(n)} = \abs{\hat{f'}(n)}. Since \abs{\hat{f'}(n)}=o(1) by the Riemann-Lebesgue lemma, we have the following:

If f:[-\pi, \pi] \rightarrow \mathbb{C} is continuously differentiable, then its Fourier series converges pointwise to f.

In general, convergence also holds for other types of smoothness conditions.


We first take a detour (away from the Fourier setting) to investigate different notions of summability for a sequence. The standard and familiar notion captures the intuition that as one adds more terms of the sequence, the partial sum gets closer to a limit. More precisely,

A sequence \{a_n\}, a_n \in \mathbb{C} is summable to s if its sequence of partial sums \{s_n\} where s_n = \sum_{i=1}^n a_i converges to s.

Under this definition, the partial sums of Grandi’s series 1 - 1 + 1 - 1 + \cdots oscillate between 1 and 0, and thus the series diverges. However, one could argue that this series “hovers” around 1/2. In fact, the averages of the partial sums 1, 0, 1, 0, \ldots converge to 1/2. This motivates the following alternate definition of summability:

A sequence \{a_n\}, a_n \in \mathbb{C} is Cesàro-summable to s if its sequence of Cesàro means \{\sigma_N\} converges to s, where \sigma_N = \frac{1}{N} \sum_{n=1}^N s_n and s_n is the n-th partial sum \sum_{i=1}^n a_i.
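The definition is easy to try out numerically; a minimal Python sketch (ours) computing the Cesàro means of Grandi’s series:

```python
# Cesaro means of Grandi's series 1 - 1 + 1 - 1 + ... (our own illustration):
# the partial sums oscillate between 1 and 0, but the Cesaro means tend to 1/2.
def cesaro_means(a):
    partial, running, means = 0, 0, []
    for n, a_n in enumerate(a, start=1):
        partial += a_n       # partial sum s_n
        running += partial   # s_1 + ... + s_n
        means.append(running / n)
    return means

grandi = [(-1) ** n for n in range(1000)]  # 1, -1, 1, -1, ...
print(cesaro_means(grandi)[-1])  # 0.5
```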

It is instructive to rewrite the Cesàro mean as

    \begin{eqnarray*} \sigma_N & = & \frac{1}{N} \sum_{n=1}^{N} s_n \\  & = & \frac{1}{N} \sum_{n=1}^{N} \sum_{i=1}^{n} a_i \\  & = & \sum_{n=1}^{N}  \left(\frac{N - n + 1}{N}\right) a_n. \end{eqnarray*}

We can see that the Cesàro mean is a weighted average of the a_n‘s, where later terms contribute with a linearly discounted factor. With this insight, it is intuitive that if a sequence is summable, then it is also Cesàro-summable: the partial sums of a summable sequence converge, so their running averages converge to the same limit.

We can consider another definition of summability where the discount factor is exponential:

A sequence \{a_n\}, a_n \in \mathbb{C} is Abel summable to s if for every r \in [0, 1), A(r):=\sum_{n=1}^{\infty} r^n a_n converges and \lim_{r \rightarrow 1} A(r) = s.
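For Grandi’s series, the Abel sums can be computed in closed form, A(r)=\sum_{n\geq1}(-1)^{n-1}r^{n}=\frac{r}{1+r}, which tends to 1/2 as r\rightarrow1. A quick Python sketch (ours, with an arbitrary truncation):

```python
# Abel sums of Grandi's series (our own illustration): A(r) = r/(1+r),
# which tends to 1/2 as r -> 1.
def abel_sum(a, r, terms=10**5):
    return sum(a(n) * r ** n for n in range(1, terms + 1))

grandi = lambda n: (-1) ** (n - 1)  # a_1 = 1, a_2 = -1, ...
for r in (0.9, 0.99, 0.999):
    print(abel_sum(grandi, r))  # approaches 0.5 from below
```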

Intuitively, Abel summability ought to capture a larger class of sequences. In fact, one can summarize this entire section in the following:

If a sequence is summable to s, then it is Cesàro-summable to s. If a sequence is Cesàro-summable to s, then it is also Abel-summable to s.
Suppose first that \{a_n\} is summable to s. For every \epsilon > 0, there exists N_0 such that for every n\geq N_0, |s_n - s| \leq \epsilon / 2. Then for every N \geq N_0, we have

    \begin{eqnarray*} \abs{\sigma_N - s} & = & \abs{\frac{1}{N} \sum_{n=1}^N (s_n - s)} \\  & \leq & \frac{1}{N} \sum_{n=1}^N \abs{s_n - s} \\  & = & \frac{1}{N} \sum_{n=1}^{N_0} \abs{s_n - s} + \frac{1}{N} \sum_{n > N_0}^N \abs{s_n - s} \\  & \leq & \frac{1}{N} \sum_{n=1}^{N_0} \abs{s_n - s} + \frac{\epsilon}{2}, \end{eqnarray*}

and when N is sufficiently large, \sum_{n=1}^{N_0} \abs{s_n - s} \leq  \epsilon N/2,
implying that \sigma_N \rightarrow s as N \rightarrow \infty.

Now suppose that \{\sigma_n\} converges to s. Define s_0 = 0. Then we can write

    \begin{eqnarray*} A(r) & = & \sum_{n=1}^{\infty} a_n r^n \\  & = & \sum_{n=1}^{\infty} (s_n - s_{n-1}) r^n \\  & = & \sum_{n=1}^{\infty} s_n r^n - r \sum_{n=1}^{\infty} s_{n-1} r^{n-1} \\  & = & (1 - r) \sum_{n=1}^{\infty} s_n r^n. \end{eqnarray*}

Since s_n = n \sigma_n - (n-1) \sigma_{n-1}, using a similar calculation as above, we can write

    \[ A(r) = (1-r)^2 \sum_{n=1}^{\infty} n \sigma_n r^n. \]

Since \{\sigma_n\} converges (and is thus bounded), \sum_{n=1}^{\infty} n \sigma_n r^n converges for every r \in [0, 1) and so does A(r).

Now it remains to show that \lim_{r\rightarrow 1} A(r) = s. For simplicity, assume the a_n (and hence the \sigma_n and s) are real; the complex case follows by applying the argument to the real and imaginary parts separately. Let \epsilon > 0. Then there exists N such that for every n \geq N, |\sigma_n - s| \leq \epsilon. We can split A(r) into two sums:

    \[ (1-r)^2 \sum_{n=1}^{N} n \sigma_n r^n + (1-r)^2 \sum_{n>N}^{\infty} n \sigma_n r^n. \]

Since one can exchange limit with finite sums, \lim_{r\rightarrow 1}(1-r)^2 \sum_{n=1}^{N} n \sigma_n r^n is 0. So

    \begin{eqnarray*} \limsup_{r\rightarrow 1} A(r) & \leq & (s + \epsilon) \lim_{r\rightarrow 1} (1-r)^2 \sum_{n>N}^{\infty} n r^n \\  & = & (s + \epsilon) \lim_{r\rightarrow 1} (1-r)^2 r \sum_{n>N}^{\infty} n r^{n-1} \\  & = & (s + \epsilon) \lim_{r\rightarrow 1} (1-r)^2 r \frac{d}{dr}\left( \sum_{n>N}^{\infty} r^n \right) \\  & = & (s + \epsilon) \lim_{r\rightarrow 1} (1-r)^2 r \frac{d}{dr}\left( \frac{r^{N+1}}{1-r} \right) \\  & = & (s + \epsilon) \cdot 1. \end{eqnarray*}

A similar argument shows that \liminf_{r\rightarrow 1} A(r) \geq s - \epsilon, and since \epsilon is arbitrary, \{a_n\} is Abel-summable to s.

A Tauberian-type theorem

In the previous section, we showed that the class of summable sequences is a subset of the Cesàro-summable sequences, which in turn is a subset of the Abel-summable sequences. It is not difficult to see that these containments are strict. However, by imposing certain conditions, these containments can be reversed, and statements of this type are called “Tauberian-type theorems”. Here we prove a simple version where the magnitude of the sequence decays faster than 1/n.

(Tauberian) Suppose the sequence \{a_n\} satisfies \abs{a_n} = o(1/n).
(a) If \{a_n\} is Cesàro-summable to s, then \{a_n\} is also summable to s.
(b) If \{a_n\} is Abel-summable to s, then \{a_n\} is also summable to s.
(a) Fix \eps > 0. Let \sigma_N, s_N be the Cesàro mean and partial sum of \{a_n\}, respectively. By assumption, \abs{\sigma_N -s} \leq \eps / 2 when N is sufficiently large, and by triangle inequality, \abs{s_N - s} is at most \abs{s_N - \sigma_N} + \eps / 2, so it suffices to prove that \abs{s_N - \sigma_N} \leq \eps / 2.

Recall that in the previous section, we proved that

    \[ \sigma_N = \sum_{n=1}^{N}  \left(\frac{N - n + 1}{N}\right) a_n. \]

Then we have

    \begin{eqnarray*} \abs{s_N - \sigma_N} & = & \abs{\sum_{n=1}^N a_n - \sum_{n=1}^N \frac{N - n + 1}{N} a_n} \\ & \leq & \sum_{n=1}^N \frac{n - 1}{N} \abs{a_n}. \end{eqnarray*}

By assumption, there is some N_0 such that for every n \geq N_0, (n-1)\abs{a_n} \leq \eps/4, implying \sum_{n=N_0}^N \frac{n - 1}{N} \abs{a_n} \leq \eps / 4. Then when N is sufficiently large, \sum_{n<N_0} \frac{n - 1}{N} \abs{a_n} \leq \eps / 4, proving that \sum_{n=1}^N \frac{n - 1}{N} \abs{a_n} \leq \eps / 2 as desired.

(b) Let A(r) = \sum_{n=1}^{\infty} r^n a_n and let A_N(r) := \sum_{n=1}^N r^n a_n denote its truncation. Define r_N = 1- 1/N. Since \abs{a_n} = o(1/n) and \sum_{n>N} r_N^n \leq \frac{1}{1-r_N} = N, the neglected tail satisfies \abs{A(r_N) - A_N(r_N)} \leq \sup_{n>N} n\abs{a_n} \cdot \frac{1}{N}\sum_{n>N} r_N^n = o(1). By assumption, when N is sufficiently large, \abs{A_N(r_N) - s} \leq \eps/2, and by the triangle inequality, \abs{s_N - s} is at most \abs{s_N - A_N(r_N)} + \eps/2, so it suffices to prove that \abs{s_N - A_N(r_N)} \leq \eps/2. By the triangle inequality again, we have

    \begin{eqnarray*} \abs{s_N - A_N(r_N)} & = & \abs{\sum_{n=1}^N \paren{1 - r_N^n} a_n} \\ & \leq & \sum_{n=1}^N \paren{1 - r_N^n} \abs{a_n}. \end{eqnarray*}

By assumption, there is some N_0 such that for every n \geq N_0, n\abs{a_n} \leq \eps/4. Again we can break the above sum into two, and for the later terms, we have

    \begin{eqnarray*} \sum_{n=N_0}^N \paren{1 - r_N^n} \abs{a_n} & = & \sum_{n=N_0}^N \paren{1-r_N}\paren{1 + r_N + \ldots + r_N^{n-1}} \abs{a_n} \\ & \leq & \sum_{n=N_0}^N \frac{n|a_n|}{N} \\ & \leq & \eps/4. \end{eqnarray*}

For the initial terms, we have

    \begin{eqnarray*} \sum_{n<N_0} \paren{1 - r_N^n} \abs{a_n} & \leq & \paren{1-r_N^{N_0}} \sum_{n<N_0}\abs{a_n} \\ & = & \paren{1-\paren{1 - \frac{1}{N}}^{N_0}} \sum_{n<N_0}\abs{a_n} \\ & \leq & \frac{N_0}{N} \sum_{n<N_0}\abs{a_n}, \end{eqnarray*}

where the last inequality is due to Bernoulli’s inequality, and is bounded above by \eps/4 when N is sufficiently large. Together we have \abs{s_N - A_N(r_N)} \leq \eps/2 as desired.

Fourier series

We now come back to the Fourier setting. First recall that

For a periodic function f, we define its N-th partial Fourier sum to be S_N(f)(x):= \sum_{|n| \leq N -1} \hat{f}(n) e_n(x), with e_n(x):=e^{inx}, and \hat{f}(n)=\frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)e^{-inx} dx.

We now introduce the Dirichlet and Fejér kernels, which, when convolved with a function f, yield the partial sums and Cesàro means of its Fourier series, letting us invoke the Tauberian theorem.

D_N(x):= \sum_{|n| \leq N-1} e_n(x) is the N-th Dirichlet kernel, and F_N(x):= \frac{1}{N} \sum_{n=1}^{N} D_n(x) is the N-th Fejér kernel.
(Tauberian theorem restated) Let f be a periodic function with \abs{\hat{f}(n)}=o(1/n). Then for any x \in [-\pi, \pi], if (F_N*f)(x)\rightarrow s for some s as N\rightarrow \infty, then S_N(f)(x) \rightarrow s as N\rightarrow \infty.
Let a_0 := \hat{f}(0), and for every positive integer n, let a_n := \hat{f}(n) e_n(x) + \hat{f}(-n) e_{-n}(x). Note that by assumption a_n = o(1/n). By direct computation, the N-th partial sum \sum_{i=0}^{N-1} a_i is simply

    \[ S_N(f)(x) = (D_N * f)(x), \]

and similarly, the N-th Cesàro mean of \{a_n\} is

    \begin{eqnarray*} \frac{1}{N} \sum_{n=0}^{N-1} \sum_{i=0}^n a_i & = & \frac{1}{N} \sum_{n=0}^{N-1} (D_{n+1} * f)(x) \\ & = & (F_N * f)(x), \end{eqnarray*}

where the second line follows from the linearity of convolution. Thus, by the Tauberian theorem, if (F_N*f)(x) converges, then S_N(f)(x) converges to the same value as well.

To prove the main theorem, it now suffices to show that (F_N*f)(x) approaches f(x). We make one last definition:

Let K_N be a periodic function for each N \geq 0. We say that \{K_N\}_{N=0}^{\infty} is a family of good kernels if the following three conditions all hold:
(a) for each N\geq 0, \frac{1}{2\pi} \int_{-\pi}^{\pi} K_N = 1,
(b) for each N\geq 0, \int_{-\pi}^{\pi} |K_N| = O(1), and
(c) for any \delta > 0, \int_{\delta \leq |x| \leq \pi} |K_N| \rightarrow 0 as N \rightarrow \infty.

The idea is that a family of good kernels essentially approximates the Dirac delta function, a unit mass concentrated at 0.

The Fejér kernels \{F_N\}_{N=0}^{\infty} form a family of good kernels.

We will skip the proof of this fact, as it is a straightforward calculation once one establishes the trigonometric identity F_N(x) = \frac{\sin^2 (Nx/2)}{N \sin^2 (x/2)}. (However, the Dirichlet kernels do not form a family of good kernels; otherwise the Fourier series of every continuous function would converge pointwise, which is not true.) Intuitively, since (F_N*f)(x) is a weighted average of f with the highest weights around x (by the definition of a family of good kernels), we can expect this value to be close to f(x), as long as f does not change its value too much around x. Now we formalize and prove this intuition, which will finish the proof of our main theorem.

Suppose \{K_N\}_{N=0}^{\infty} is a family of good kernels. Let f be a periodic, integrable function that is continuous at x. Then (K_N * f)(x) \rightarrow f(x) as N\rightarrow \infty.
We first write

    \begin{eqnarray*} \abs{(K_N*f)(x) - f(x)} & = & \frac{1}{2\pi} \abs{\int_{-\pi}^{\pi} K_N(y) f(x-y)dy - f(x)} \\ & = & \frac{1}{2\pi} \abs{\int_{-\pi}^{\pi} K_N(y) f(x-y)dy - \int_{-\pi}^{\pi} K_N(y) f(x)dy} \\  & \leq & \frac{1}{2\pi} \int_{-\pi}^{\pi} \abs{K_N(y)} \abs{f(x-y) - f(x)} dy, \end{eqnarray*}

where the first through third lines follow from the definition of convolution, Condition (a) of a good family of kernels, and triangle inequality.

The idea is that when integrating over y, \abs{K_N(y)} is small if y is bounded away from 0, and \abs{f(x-y) - f(x)} is small if y is close to 0. Formally, let \eps > 0. Since f is continuous at x, there is some \delta > 0 such that \abs{f(x-y) - f(x)} \leq \eps whenever \abs{y} \leq \delta. Hence,

    \[ \int_{|y| \leq \delta} \abs{K_N(y)} \abs{f(x-y) - f(x)} dy \leq \eps \int_{-\pi}^{\pi} \abs{K_N(y)} dy \leq \eps M, \]

for some constant M by Condition (b) of a family of good kernels. Since \eps can be arbitrarily small, this term can be made as small as we like.

Now for the other values of y, note that since f is integrable, the function is bounded by some B. Then

    \[ \int_{\delta \leq |y| \leq \pi} \abs{K_N(y)} \abs{f(x-y) - f(x)} dy \leq  2B \int_{\delta \leq |y| \leq \pi} \abs{K_N(y)} dy, \]

which tends to 0 as N \rightarrow \infty by Condition (c) of a family of good kernels, finishing the proof.

In the previous section, we showed that the Abel mean can play a similar role to the Cesàro mean. Similarly, one can define the Poisson kernel P_r(\theta) = \sum_{n=-\infty}^{\infty} r^{|n|} e^{in\theta} and show that \{P_r\} forms a family of good kernels as r \rightarrow 1, proving the main theorem through P_r instead.
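As a closing numerical check, the Fejér kernel’s closed form quoted earlier can be verified directly; a Python sketch (ours, using our reading of the kernel definitions above: D_n(x)=\sum_{|j|\leq n-1}e^{ijx} and F_N=\frac{1}{N}\sum_{n=1}^{N}D_n):

```python
# Numerical check (ours) of the closed form F_N(x) = sin^2(Nx/2)/(N sin^2(x/2)),
# with D_n(x) = sum over |j| <= n-1 of e^{ijx} and F_N = (1/N) sum_{n=1}^N D_n.
import cmath, math

def dirichlet(n, x):
    return sum(cmath.exp(1j * j * x) for j in range(-(n - 1), n)).real

def fejer(N, x):
    return sum(dirichlet(n, x) for n in range(1, N + 1)) / N

def fejer_closed(N, x):
    return math.sin(N * x / 2) ** 2 / (N * math.sin(x / 2) ** 2)

for x in (0.3, 1.1, 2.5):
    assert abs(fejer(8, x) - fejer_closed(8, x)) < 1e-9
print("Fejer closed form verified")
```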

The Basel problem via Fourier series

The Basel problem asks for the sum of the reciprocals of the squares of the natural numbers, and it is known that

    \[\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}.\]
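Before the proof, a quick numerical check of the claimed value (a Python one-liner of ours, with an arbitrary cutoff):

```python
# Partial sums of the Basel series against pi^2/6 (our own illustration).
import math

basel = sum(1 / n ** 2 for n in range(1, 10**6 + 1))
print(basel, math.pi ** 2 / 6)  # agree up to the 1/10^6 truncation error
```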

While there are numerous known proofs (our presentation resembles Proof 9 most closely), we give a self-contained one that motivates the usage of Fourier analysis for periodic functions.

Recall that a function f:\mathbb{R}\rightarrow\mathbb{C} is 2\pi-periodic if for every x \in \mathbb{R}, f(x+2\pi)=f(x). From Fourier analysis, every such function can be expressed as a series of cosines and sines. More precisely,

If f is continuous and 2\pi-periodic, then

    \[\left\Vert f - \sum_{n=-\infty}^{\infty} \hat{f}(n) e_n\right\Vert_2 = 0,\]

where e_n(x) = e^{inx}, and \hat{f}(n) is the n-th Fourier coefficient equal to \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)e^{-inx} dx.

We say that \sum_{n=-\infty}^{\infty} \hat{f}(n) e_n is the Fourier series of f. When we define the inner product of two functions to be \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x) \overline{g(x)} dx, then it is easily seen that the “characters” \{e_n\} form an orthonormal system, and the n-th Fourier coefficient can be viewed as the projection of f along the n-th character. The Fourier theorem asserts that f can be approximated in L_2-norm by its Fourier series. In fact, by applying the Weierstrass M-test, we can make the following statement with a stronger convergence guarantee:

If f is continuous and 2\pi-periodic and \sum_{n=-\infty}^{\infty} \left|\hat{f}(n)\right| is convergent, then its Fourier series \sum_{n=-\infty}^{\infty} \hat{f}(n) e_n converges uniformly to f.

Furthermore, the Fourier transform \hat{f} itself encodes many of the properties of the original function f. In particular, we note that the symmetry of f is preserved in its Fourier transform.

If f is even, then for every n \in \mathbb{Z}, \hat{f}(-n) = \hat{f}(n).
If f is odd, then for every n \in \mathbb{Z}, \hat{f}(-n) = -\hat{f}(n).

Now we have the machinery to compute \sum_{n=1}^{\infty} 1/n^2.

(Of Basel) Define f:[-\pi, \pi]\rightarrow \mathbb{R} to be f(x)=|x|. We compute the Fourier coefficients of f. First note that

    \[\hat{f}(0) = \frac{1}{2\pi}\int_{-\pi}^{\pi} |x| dx = \frac{\pi}{2}.\]

For every nonzero n \in \mathbb{Z}, we have

    \[2\pi\hat{f}(n) = \int_{0}^{-\pi}xe^{-inx} dx + \int_{0}^{\pi}xe^{-inx} dx. \]

Using integration by parts, we have for every a, b \in \mathbb{R},

    \begin{eqnarray*} \int_a^b xe^{-inx}dx & = & \frac{x e^{-inx}}{-in}\Biggr|_a^b - \int_a^b \frac{e^{-inx}}{-in}dx \\ & = & \frac{x e^{-inx}}{-in}\Biggr|_a^b + \frac{e^{-inx}}{n^2}\Biggr|_a^b. \end{eqnarray*}


Plugging in (a,b)=(0,-\pi) and (a,b)=(0,\pi) and summing, we get

    \begin{eqnarray*} 2\pi\hat{f}(n) & = & \frac{-\pi(-1)^n}{-in} + \frac{(-1)^n -1 }{n^2} + \frac{\pi(-1)^n}{-in} + \frac{(-1)^n -1 }{n^2} \\ & = & \frac{2((-1)^n - 1)}{n^2}.  \end{eqnarray*}


Hence,

    \[ \hat{f}(n) =  \begin{cases} \frac{-2}{\pi n^2} & \text{if $n$ is odd,} \\ 0 & \text{if $n$ is even and nonzero,} \\ \frac{\pi}{2} & \text{if $n=0$.} \end{cases} \]

From Euler’s formula, the Fourier series of f can be written as

    \[  \hat{f}(0) + \sum_{n=1}^{\infty} \left[ (\hat{f}(n) + \hat{f}(-n)) \cos(nx) + i(\hat{f}(n) - \hat{f}(-n)) \sin(nx) \right]. \]

Since f is even, by the Proposition above, \hat{f}(-n)=\hat{f}(n). From our calculation, the Fourier series of f is explicitly

    \[  \frac{\pi}{2} + \sum_{n\geq1, n \text{ odd}} \frac{-4}{\pi n^2} \cos(nx). \]

Note that \sum_{n=1}^{\infty} 1/n^2 converges (e.g., by the Cauchy condensation test), so \sum_{n=-\infty}^{\infty} \abs{\hat{f}(n)} is convergent. Thus, by the Convergence Theorem, we conclude that

    \[ \frac{\pi}{2} + \sum_{n\geq1, n \text{ odd}} \frac{-4}{\pi n^2} \cos(nx) \longrightarrow |x|, \]

where the convergence is uniform (and in particular, pointwise). Setting x=0, we see that

    \[ \sum_{n\geq1, n \text{ odd}} \frac{1}{n^2} = \frac{\pi^2}{8}. \]

Lastly, observe that

    \begin{eqnarray*} \sum_{n=1}^{\infty} \frac{1}{n^2} & = & \sum_{n\geq1, n \text{ even}} \frac{1}{n^2} + \sum_{n\geq1, n \text{ odd}} \frac{1}{n^2} \\ & = & \frac{1}{4} \sum_{n=1}^{\infty} \frac{1}{n^2} + \frac{\pi^2}{8}, \end{eqnarray*}

finishing the proof.
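As a final sanity check on the computation (a short Python sketch of ours; the truncation level is arbitrary), the series we derived indeed reproduces |x|, and in particular vanishes at x=0:

```python
# Sanity check (ours): the truncated series pi/2 - (4/pi) sum over odd n of
# cos(nx)/n^2 should reproduce |x| on [-pi, pi].
import math

def fourier_abs(x, terms=10**4):
    s = math.pi / 2
    for n in range(1, 2 * terms, 2):  # odd n only
        s -= 4 / (math.pi * n ** 2) * math.cos(n * x)
    return s

for x in (0.0, 1.0, -2.0):
    print(x, fourier_abs(x))  # close to |x|
```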