Pointwise convergence is often the first type of convergence encountered when studying sequences of functions. It seems like a natural extension of the familiar concept of convergence for sequences of numbers. However, pointwise convergence can exhibit some unexpected and, for many applications, undesirable behaviors. To illustrate this, let’s consider a specific sequence of functions, defined as follows:
$$
f_n(x) =
\begin{cases}
|x| - n & \text{if } x \in (-\infty, -n) \cup (n, \infty) \\
0 & \text{if } x \in [-n, n]
\end{cases}
$$
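As a quick sanity check, here is a minimal Python sketch of this definition (the helper name `f_n` is chosen purely for illustration):

```python
def f_n(n, x):
    """f_n(x) = |x| - n outside [-n, n], and 0 on [-n, n]."""
    return abs(x) - n if abs(x) > n else 0.0

# Pointwise convergence at a fixed point: once n >= |x|, f_n(x) is exactly 0.
x = 7.5
values = [f_n(n, x) for n in range(1, 12)]
# values is nonzero for n < 7.5 and 0.0 from n = 8 onward
```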
Let’s visualize these functions graphically. Intuitively, does this sequence of functions, $\{f_n\}$, seem to approach some limiting function as $n$ grows? If your initial thought is “yes,” you might guess that the limit function is $f(x) = 0$. Indeed, for any fixed value of $x$, once $n$ is sufficiently large (at least $|x|$), $f_n(x)$ equals 0. This demonstrates that $f_n$ converges to $f(x) = 0$ pointwise.
However, consider this: for any chosen magnitude $M > 0$ and every $n$, we can find a value of $x$ such that the difference between $f_n(x)$ and $f(x)$ exceeds $M$. Can you identify such an $x$? Try setting $x = 2M + n$. For this $x$, $f_n(x) = |2M + n| - n = 2M + n - n = 2M$, and $f(x) = 0$, so $|f_n(x) - f(x)| = 2M > M$.
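We can confirm this witness numerically (a small sketch; `f_n` is redefined here so the snippet is self-contained):

```python
def f_n(n, x):
    # Same piecewise rule as in the text: |x| - n beyond [-n, n], else 0.
    return abs(x) - n if abs(x) > n else 0.0

M = 3.0
for n in range(1, 6):
    x = 2 * M + n               # the witness point from the text
    gap = abs(f_n(n, x) - 0.0)  # f(x) = 0 is the pointwise limit
    assert gap == 2 * M and gap > M
```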
This behavior might seem counterintuitive if we expect convergence to behave “nicely”. It means that even though $f_n$ converges to $f$ pointwise, for any threshold $M > 0$ and any $n$, no matter how large, there are points $x$ where $f_n(x)$ is not within $M$ of $f(x)$. In fact, we can construct a sequence $\{x_n\}$ of real numbers along which the error never shrinks: choosing $x_n = 2M + n$ gives $|f_n(x_n) - f(x_n)| = 2M > M$ for every $n$, while choosing $x_n = 2n$ gives $|f_n(x_n) - f(x_n)| = n$, which grows without bound. (By contrast, any choice with $x_n \in [-n, n]$ gives an error of exactly 0.) This reveals that $\lim_{n \to \infty} f_n(x_n)$ need not equal $0$, and may not even exist, depending on how $x_n$ is chosen relative to $n$.
The Problem with Pointwise Convergence
The core issue with pointwise convergence is that while it ensures convergence at each individual point in the domain, the rate of convergence can vary significantly across different points. In our example, points close to zero are handled “quickly”: $f_n(x)$ is already zero once $n \ge |x|$, so only a small index is needed. But the required index grows with $|x|$, and because the domain is unbounded, no single $n$ works for all points at once; $f_n$ does not uniformly approach zero.
This non-uniformity in convergence rate is what makes pointwise convergence a “weak” property in many contexts. It doesn’t guarantee that properties that hold for each $f_n$ will also hold for the limit function $f$. For instance, continuity is not necessarily preserved under pointwise limits.
Introducing Uniform Convergence: A Stronger Notion
This is where the concept of Uniform Convergence becomes essential. Uniform convergence is a stronger form of convergence that addresses the shortcomings of pointwise convergence by requiring the entire function $f_n$ to converge to $f$ “at the same rate” across the entire domain.
Intuitively, uniform convergence means that for a given level of closeness $\epsilon > 0$, there exists an index $N$ such that for all $n \ge N$, the difference $|f_n(x) - f(x)|$ is less than $\epsilon$ for all $x$ in the domain simultaneously. It’s a global property, concerning the behavior of $f_n$ over its entire domain, in contrast to pointwise convergence, which is a local property, focusing on individual points.
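Written out with quantifiers, the two definitions differ only in where $N$ is chosen, and that placement is the whole story:

$$ \text{uniform: } \forall \epsilon > 0 \;\; \exists N \;\; \forall n \ge N \;\; \forall x \in D: \; |f_n(x) - f(x)| < \epsilon $$

$$ \text{pointwise: } \forall \epsilon > 0 \;\; \forall x \in D \;\; \exists N \;\; \forall n \ge N: \; |f_n(x) - f(x)| < \epsilon $$

In the uniform version, one $N$ must work for every $x$ at once; in the pointwise version, $N$ is allowed to depend on $x$.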
Uniform Convergence and the Sup-Norm
A more formal and practical way to understand uniform convergence is through the sup-norm (or supremum norm). For a function $g$ defined on a domain $D$, the sup-norm is defined as:
$$ \|g\|_\infty = \sup_{x \in D} |g(x)| $$
A sequence of functions $\{f_n\}$ converges uniformly to $f$ if and only if the sup-norm of the difference, $\|f_n - f\|_\infty$, approaches zero as $n$ approaches infinity:
$$ \lim_{n \to \infty} \|f_n - f\|_\infty = 0 $$
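To see this criterion succeed, consider a hypothetical contrasting sequence, $g_n(x) = x/n$ on $[0, 1]$ (introduced here purely for illustration, not part of the running example). Its sup-norm distance from the zero function is $1/n$, which a grid approximation confirms:

```python
def g_n(n, x):
    # A uniformly convergent example: g_n(x) = x / n on [0, 1].
    return x / n

grid = [i / 1000 for i in range(1001)]  # 1001 evenly spaced points in [0, 1]
sup_norms = [max(abs(g_n(n, x)) for x in grid) for n in (1, 10, 100)]
# The supremum is attained at x = 1, so the sup-norms are 1/n: 1.0, 0.1, 0.01
```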
In our example with $f_n(x)$, we can see that $\|f_n - f\|_\infty = \|f_n - 0\|_\infty = \infty$ for all $n$. This is because for any $n$, $f_n(x)$ can become arbitrarily large as $|x|$ increases, meaning the supremum of $|f_n(x)|$ over the entire real line is infinite. This provides a second way to confirm that $f_n$ does not converge uniformly to $f(x) = 0$.
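A grid computation illustrates the unboundedness: on any truncated window $[-K, K]$ with $K > n$, the supremum of $|f_n|$ is $K - n$, which grows without limit as the window widens (a sketch, with the grid resolution chosen arbitrarily):

```python
def f_n(n, x):
    # The running example: |x| - n beyond [-n, n], else 0.
    return abs(x) - n if abs(x) > n else 0.0

n = 4
for K in (10, 100, 1000):
    # 1001 evenly spaced points covering [-K, K], endpoints included.
    grid = [K * (2 * i / 1000 - 1) for i in range(1001)]
    approx_sup = max(abs(f_n(n, x)) for x in grid)
    # The max sits at the endpoints x = +-K, where |f_n| = K - n.
    assert approx_sup == K - n
```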
Conclusion
Uniform convergence is a critical concept in mathematical analysis because it provides a more robust and reliable form of convergence for sequences of functions compared to pointwise convergence. It ensures that the convergence is “even” across the entire domain, preventing the kind of pathological behavior we observed in our example. Understanding uniform convergence is vital for studying the preservation of important properties under limits, such as continuity and integrability, and for various applications in advanced mathematics and related fields. Exploring further examples and working through exercises will deepen your understanding of this essential concept.