A sequence of numbers x_1, x_2, x_3, ... is symbolized by {x_k}. Usually attention rests on the sums of these numbers or terms, x_1 + x_2 + x_3 + ... + x_n = Σ(1,n)x_k. [The limits of sums and integrals are here put in parentheses after the symbol.] If one imagines an unending sequence, then the sum of all its terms is an infinite series. In this case, the terms of the sum, or the elements of the sequence, can be placed in 1-to-1 correspondence with the natural integers; they are said to be denumerably infinite. We assume that there is some rule for determining the kth member of the sequence for any k; without such a rule, nothing can be said about the sequence or the series. Given a rule, we can imagine what happens when we take an unlimited number of terms, though physically this would not be possible. The understanding of infinite processes was a great advance, giving us the Calculus, which depends on infinite limiting processes. Strange things happen at infinity, and intuition trained on finite processes cannot be trusted.
Consider the sum 1 + 1/2 + 1/4 + 1/8 + ..., each term half of the preceding one. This means we take 1, then add 1/2 to get 1.5, then add 1/4 to get 1.75, and so on. Common sense tells us that since we always add something to the sum with each additional term, the sum must eventually increase past any fixed value; that is, it diverges to infinity. Common sense is wrong. In fact, we never get past 2.0: this series converges to 2. Use a calculator to find the partial sums of a few terms. The first six partial sums are 1.0000, 1.5000, 1.7500, 1.8750, 1.9375, 1.9688, and so on. The rate of increase becomes very slow as we proceed further.
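For those who like to check such things numerically, a few lines of Python (an illustrative sketch; a calculator does just as well) reproduce these partial sums:

    # Partial sums of 1 + 1/2 + 1/4 + 1/8 + ...; they creep up toward 2
    # but never pass it.
    s = 0.0
    term = 1.0
    for n in range(1, 11):
        s += term
        print(f"{n:2d} terms: {s:.4f}")
        term /= 2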
If the division indicated by 1/(1 - x) is performed by long division, we find that 1/(1 - x) = 1 + x + x^2 + x^3 + ... . When |x| > 1, or x = 1, this series clearly diverges and is useless, because the terms do not approach zero. If x = -1, it becomes 1 - 1 + 1 - 1 + ..., which cannot make up its mind. However, if |x| < 1, things are better. In fact, the series then converges to 1/(1 - x). This series is called the geometric series; every term is a constant fraction of the preceding term. If x = 1/2, we find the series first mentioned, 1 + 1/2 + (1/2)^2 + (1/2)^3 + ..., whose sum is, then, 2. It is clear that any series of positive terms, each no greater than the corresponding term of a convergent geometric series, is itself convergent (the comparison test).
If x = -1/2, then the series is 1 - 1/2 + 1/4 - 1/8 + ... , and its sum is 2/3, less than 2 because now we have put in some negative terms. If we form the series of the absolute values of the terms, we get the original series whose sum is 2. A series such that the series of the absolute values of its terms converges is called absolutely convergent, and this gives it some special properties. A geometric series is absolutely convergent in -1 < x < 1.
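A similar Python sketch, assuming nothing beyond the geometric series itself, compares the partial sums for x = -1/2 with the closed form 1/(1 - x):

    # Geometric series 1 + x + x^2 + ... compared with the closed form 1/(1 - x),
    # here for x = -1/2, where the sum is 2/3.
    x = -0.5
    s, term = 0.0, 1.0
    for n in range(20):
        s += term
        term *= x
    print(s, 1 / (1 - x))    # both are about 0.6667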
The series 1 + x + x^2 + x^3 + ... is a power series in x. If x is replaced by the complex variable z, we find that this series converges for any |z| < 1, that is, within the circle of convergence |z| = 1. The whole theory of analytic functions can be based on the properties of such power series in a very concrete and transparent way. A very useful way to find power series is by means of the Taylor series, f(x) = f(0) + xf'(0) + (x^2/2!)f''(0) + ... + (x^n/n!)f^(n)(0) + ... . The successive derivatives are evaluated at x = 0. We can expand about any point x = a by replacing x by (x - a) in this series and evaluating the derivatives at x = a.
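As a rough illustration of the Taylor series at work (a sketch only, with the derivatives of sin x at 0 supplied by hand), one can approximate sin 0.5 from the first few terms:

    # Taylor polynomial of sin x about 0: the derivatives of sin at 0 cycle
    # through 0, 1, 0, -1, giving x - x^3/3! + x^5/5! - ...
    import math
    x = 0.5
    derivs_at_0 = [0, 1, 0, -1]            # sin, cos, -sin, -cos at 0
    approx = sum(derivs_at_0[n % 4] * x**n / math.factorial(n) for n in range(8))
    print(approx, math.sin(x))             # both about 0.4794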
Power series can also be found from differential equations. This is the normal way that "special functions" are defined. Other methods of definition may be more powerful, but power series can be investigated without a great deal of advanced theory. When quantum mechanics came along, physicists suddenly had to deal with special functions defined by differential equations. The problem was attacked by series solutions, so that an advanced knowledge of analysis was not required.
The factor n! in the denominators of the terms of the Taylor series is a powerful aid to convergence. In fact, the series e^z = 1 + z + z^2/2! + ... + z^n/n! + ... converges everywhere; that is, for any z. We'll call it the exponential series. By comparison with a geometric series, it is clear that it converges absolutely (once n > 2|z|, each term is less than half the preceding one). Setting z = 1, we have e = 1 + 1 + 1/2! + 1/3! + 1/4! + ... . This series converges quite rapidly. The sum of the terms shown is already 2.7083, on its way to 2.7183, and 1/5! brings the sum to 2.7167. e^z is, of course, the exponential function. The derivative of the power series (we can differentiate an absolutely convergent power series term by term within its circle of convergence) is d(e^x)/dx = 1 + x + x^2/2! + x^3/3! + ... = e^x. This is the characteristic property of the exponential, that it equals its own derivative. Using this property alone, we see that its Taylor series is just the series we have given for e^x. It is very easy to get series for the trigonometric functions and the hyperbolic functions from their expressions in terms of e^z.
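The rapid convergence is easy to see numerically; this sketch prints the partial sums of the series for e:

    # Partial sums of e = 1 + 1 + 1/2! + 1/3! + ...; the factorials make
    # the convergence rapid.
    import math
    s = 0.0
    for k in range(8):
        s += 1 / math.factorial(k)
        print(f"through 1/{k}!: {s:.4f}")
    print(math.e)    # 2.71828...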
The Taylor series for ln(1 + x) is x - x^2/2 + x^3/3 - x^4/4 + ... . This converges much less rapidly than the exponential series, since the factorials are replaced by natural integers. For x = 1, we have ln 2 = 1 - 1/2 + 1/3 - 1/4 + ... . The partial sums of this series oscillate above and below the value ln 2 = 0.69315, and converge very slowly to it. The first ten terms give 0.695 if the last two partial sums are averaged (these values are 0.745 and 0.645). For x = -1, we have ln 0 = -∞ = -1 - 1/2 - 1/3 - 1/4 - ..., or ∞ = 1 + 1/2 + 1/3 + 1/4 + ..., which is called the harmonic series. The symbol ∞ is shorthand for "increases beyond any limit." The series for ln 2 is, therefore, not absolutely convergent. Such a series is said to be conditionally convergent. Any series whose terms alternate in sign and decrease steadily to zero in absolute value converges (Leibniz's test); if the corresponding series of absolute values diverges, the convergence is only conditional. In a general harmonic series, the denominators form an arithmetic progression. Such series diverge.
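The slow, oscillating convergence, and the benefit of averaging the last two partial sums, can be checked with a short sketch:

    # Partial sums of ln 2 = 1 - 1/2 + 1/3 - ...; they oscillate about 0.69315,
    # and the average of two consecutive partial sums is much closer.
    import math
    s = 0.0
    partials = []
    for k in range(1, 11):
        s += (-1) ** (k + 1) / k
        partials.append(s)
    print(partials[-2], partials[-1])            # about 0.7456 and 0.6456
    print((partials[-2] + partials[-1]) / 2)     # about 0.6956
    print(math.log(2))                           # 0.69315...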
The partial sums of the harmonic series increase with n like ln n. In fact, the difference 1 + 1/2 + 1/3 + ... + 1/n - ln n approaches the constant γ = 0.577215664..., called Euler's constant, as n increases without limit. No simpler way of expressing the value of this constant has been found. For n = 10, the difference is 0.6264, so convergence is not rapid.
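A short computation (a sketch, not a proof) shows how slowly the difference settles down to γ:

    # The difference 1 + 1/2 + ... + 1/n - ln n approaches Euler's constant
    # 0.5772..., but only slowly.
    import math
    H = 0.0
    for n in range(1, 10001):
        H += 1 / n
        if n in (10, 100, 1000, 10000):
            print(n, H - math.log(n))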
Let's rearrange the summation in the series for ln 2 and see what happens. Suppose we pull some negative terms forward to get 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + 1/5 - 1/10 - 1/12 + 1/7 - ... . Each reciprocal of an odd integer is followed by two negative terms, reciprocals of even integers. The nth group of three terms is 1/(2n - 1) - 1/(4n - 2) - 1/(4n), or (1/2)[1/(2n - 1) - 1/(2n)]. Therefore, each group contributes just half of the corresponding pair of terms in the series for ln 2, and this series converges to (1/2)ln 2 = 0.34657. If we bring all the positive terms forward, we get +∞, and with all the negative terms first, -∞. In general, a conditionally convergent series can be rearranged to yield any value whatsoever, from +∞ to -∞ (Riemann's rearrangement theorem). The rearrangement must be performed by some definite rule, as in this case, that can be carried out indefinitely. We must treat conditionally convergent series with delicacy, and be suspicious of them. On the other hand, the terms of an absolutely convergent series can be rearranged in any way desired, and the sum of the series will not change, just as for a finite sum. Absolutely convergent series are reliable friends.
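The rearranged series can be summed numerically by groups of three; the following sketch confirms the value (1/2)ln 2:

    # The rearranged series 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ...: the nth group
    # of three terms is 1/(2n-1) - 1/(4n-2) - 1/(4n).
    import math
    s = 0.0
    for n in range(1, 10001):
        s += 1 / (2 * n - 1) - 1 / (4 * n - 2) - 1 / (4 * n)
    print(s)                   # about 0.3466
    print(0.5 * math.log(2))   # 0.34657...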
In any infinite series, we may add or remove any finite number of terms without affecting the convergence or divergence of the series. Of course, this changes the sum, as does multiplying or dividing every term by the same number. Let s_n be the sum of n terms of an absolutely convergent series of positive terms. If b_i are the terms of the same series rearranged, then let t_m be the sum of m terms of the rearranged series, where m is large enough that all the terms in s_n are included. Clearly, then, s_n ≤ t_m. If s_n' is the sum of the first n' terms of the original series, where n' is large enough that all the b_i in t_m are included, then s_n' ≥ t_m. As n, m and n' become larger, t_m is squeezed between s_n and s_n', each of which approaches the sum S of the original series. Therefore, the sum of the series is not changed by rearrangement. If we had begun with a conditionally convergent series, we could not have made the assertion that s_n ≤ t_m, and so forth, and would not have been able to bracket the partial sums. The sum of an absolutely convergent series can be considered as the sum of the positive terms less the sum of the negative terms with changed sign. A conditionally convergent series cannot be thought of in this way.
A celebrated series is Gregory's, π/4 = 1 - 1/3 + 1/5 - 1/7 + ... . To get this series, expand 1/(1 + x^2) as a geometric series and integrate term by term, obtaining tan^(-1) x = x - x^3/3 + x^5/5 - x^7/7 + ... . This series is absolutely convergent for x^2 < 1, as we can see from the ratio test (see below). For x = 1, we get the conditionally convergent series given above, using tan^(-1) 1 = π/4. This series converges too slowly to be a good means of calculating π, but it can be modified to be applicable to this calculation. If we consider the inverse hyperbolic tangent instead, the minus signs become plus signs, and the resulting series 1 + 1/3 + 1/5 + ... diverges for x = 1, like the harmonic series.
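The slow convergence of Gregory's series is easily demonstrated:

    # Gregory's series pi/4 = 1 - 1/3 + 1/5 - ... converges very slowly;
    # a thousand terms give pi to only about three decimal places.
    import math
    s = 0.0
    for k in range(1000):
        s += (-1) ** k / (2 * k + 1)
    print(4 * s)       # about 3.1406
    print(math.pi)     # 3.14159...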
A series appearing more often in theory than in applications is Σ1/n^s = 1 + 1/2^s + 1/3^s + ... . If s = 1, we have the familiar divergent harmonic series 1 + 1/2 + 1/3 + 1/4 + ... . If s < 1, the series diverges a fortiori. We can bound the series by grouping consecutive terms (this is not a rearrangement, so we are safe). 1/2^s + 1/3^s < 2/2^s = 1/2^(s-1). The sum of the next four terms is less than 4/4^s = 1/4^(s-1). Continuing this process, we find that Σ1/n^s < 1 + 1/2^(s-1) + 1/4^(s-1) + ... . This is just a geometric series with ratio 2^(1-s), which sums to 1/(1 - 2^(1-s)) when s > 1. The original series is then bounded, and since all its terms are positive, it converges absolutely. Therefore, Σ1/n^s is absolutely convergent if s > 1. This is often a convenient comparison series for the establishment of convergence.
This series defines the Riemann zeta function, ζ(p) = Σ(1,∞)1/k^p (p > 1), which can be expressed in terms of the Bernoulli numbers B_(p/2) for even p. In fact, ζ(p) = (2^(p-1)π^p/p!)B_(p/2), in the older convention where B_1 = 1/6, B_2 = 1/30, and so on. ζ(2) is π^2/6, since B_1 = 1/6.
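For p = 2, a direct numerical check (a sketch only) shows the partial sums approaching π^2/6:

    # Partial sums of the zeta series for s = 2 approach pi^2/6 = 1.6449...
    import math
    s = sum(1 / k**2 for k in range(1, 100001))
    print(s, math.pi**2 / 6)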
There are tests for convergence that compare successive terms to gauge how quickly they approach zero. These tests are usually equivalent to comparison with a geometric series or the ζ function, but are easier to perform. The ratio test, due to d'Alembert, says that if |u_(n+1)/u_n| < ρ, where ρ is positive, independent of n and less than unity, for all n greater than some number r, then the series Σu_n converges absolutely. If the ratio remains greater than unity, the terms cannot approach zero and the series surely diverges. If the limit of the ratio is unity, the series may or may not converge, and further investigation is needed. For example, the ratio of succeeding terms in the exponential series is x/(n + 1). The limit of this ratio is zero as n → ∞ for any x. This is certainly less than unity, so the series converges for any x.
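Applied to the exponential series, the ratio test looks like this numerically (a sketch, with x = 5 chosen arbitrarily for illustration):

    # d'Alembert's ratio test on the exponential series: the ratio of successive
    # terms x^n/n! is x/(n + 1), which tends to zero for any fixed x.
    import math
    x = 5.0
    term = lambda n: x**n / math.factorial(n)
    for n in (1, 5, 10, 20, 40):
        print(n, term(n + 1) / term(n))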
Under the same conditions, if the greatest limit of |u_n|^(1/n) is less than unity, then Σu_n converges absolutely. This is Cauchy's root test, and is convenient when the root is easy to find. Again, if the limit of the expression is greater than unity, the series diverges. In the exponential series, the nth root of the general term x^n/n! is of the order of ex/n (use Stirling's approximation), so again the test confirms that the series converges for any x.
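The root test can be checked the same way (again with the illustrative choice x = 5):

    # Cauchy's root test on the exponential series: |x^n/n!|^(1/n) is of the
    # order of e*x/n by Stirling's approximation, and tends to zero as n grows.
    import math
    x = 5.0
    for n in (5, 20, 50, 100):
        print(n, (x**n / math.factorial(n)) ** (1 / n), math.e * x / n)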
If the kth term in the series is written f(k), and the function f(x) can be integrated, a very effective test for convergence, the Maclaurin (or Cauchy) integral test, can be applied. We assume that f(k + 1) < f(k), f(k) ≥ 0, and that f(x) is continuous and decreasing. Then, if ∫(1,∞)f(x)dx = A, where A is finite, the series converges absolutely; if the integral diverges, the series diverges. If we make a bar chart of the series, the sum of the series is the sum of the areas of the bars, and the integral gives an upper limit to this sum, apart from the first bar, as is easily seen on making a sketch.
To prove this, consider the interval of unit width from k to k + 1. There, f(k + 1) ≤ f(x) ≤ f(k). Integrating over this interval, f(k + 1) ≤ ∫(k,k+1)f(x)dx ≤ f(k). Now, if we sum this inequality from k = 1 to k = n, we have Σ(2,n+1)f(k) ≤ ∫(1,n+1)f(x)dx ≤ Σ(1,n)f(k). Adding f(1) + f(n + 1) to each member of this inequality, we find that Σ(1,n+1)f(k) + f(n + 1) ≤ ∫(1,n+1)f(x)dx + f(1) + f(n + 1) ≤ Σ(1,n+1)f(k) + f(1). As n → ∞, f(n + 1) → 0 and the integral approaches A, so the left-hand inequality gives (lim n→∞)Σ(1,n+1)f(k) ≤ A + f(1); the partial sums are increasing and bounded, so the series converges. The right-hand inequality shows that if the integral diverges, the series diverges as well.
The integral test is easily applied to Σ1/k^s. The integral is ∫(1,∞)x^(-s)dx, which for s > 1 is 1/(s - 1). Therefore, the series converges absolutely for s > 1. If s < 1, the integral diverges, so the series does not converge. If s = 1, the integral is ∫(1,n)dx/x = ln n. This diverges as ln n, which we have seen is characteristic of the harmonic series. The integral test does not involve comparison with a known convergent series, as most convergence tests do. On the other hand, it depends on the convergence of an improper integral. The integral test cannot be applied to the exponential series, since the integral cannot be performed in closed form.
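A numerical sketch of the integral test for Σ1/k^s, with the exponent s = 1.5 chosen arbitrarily for illustration:

    # Integral test for the series 1/k^s with s = 1.5: the partial sums stay
    # below f(1) + integral from 1 to infinity of x^(-s) dx = 1 + 1/(s - 1).
    s = 1.5
    partial = sum(1 / k**s for k in range(1, 100001))
    print(partial)            # about 2.61, still increasing very slowly
    print(1 + 1 / (s - 1))    # 3.0, an upper bound on the whole sum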
If the limit of k^p f(k) as k → ∞ for p > 1 is less than infinity, then Σ(1,∞)f(k) converges absolutely. This limit test is equivalent to comparison with the zeta function. If the limit for p = 1 is greater than zero or diverges, then the series diverges.
The theory of convergence rests on two fundamental theorems. The first is the Bolzano-Weierstrass theorem, which states that an infinite bounded sequence of numbers has at least one limit point. A limit point m is such that in any neighborhood |z - m| < ε of m there are an infinite number of members of the sequence, however small ε may be. For a real variable, the neighborhood can be defined as m - ε/2 < x < m + ε/2. This theorem was first stated by Bolzano, and though known to Cauchy was largely overlooked until Weierstrass pointed out its importance. A sequence may have more than one limit point; indeed, every point in an interval may be a limit point, but the theorem states that there is at least one. To prove the theorem for a real variable, suppose the sequence is bounded above by a and below by b: b ≤ x_i ≤ a. Divide this interval in half. Then at least one of the halves must contain an infinite number of members of the sequence. Repeat this, again selecting a half with an infinite number of members, since there must be one. The nested intervals close down on some point m, in every neighborhood of which there are an infinite number of members; this is the limit point.
The second fundamental theorem is Cauchy's convergence criterion: a series converges if and only if, for any ε, however small, |s_m - s_n| < ε whenever m and n are both larger than some number N, where s_n is the sum of the first n terms and N generally increases without bound as ε approaches zero. This means that s_n has a limit, and that limit is the sum of the series. Clearly, it is necessary that the terms a_n → 0, or else the difference in partial sums could not be made arbitrarily small. This condition is certainly not sufficient, as the harmonic series shows: 1/n approaches zero as n increases, yet the series diverges.
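The contrast between a convergent series and the harmonic series under Cauchy's criterion can be seen in a short sketch:

    # Cauchy's criterion: for the convergent geometric series the difference of
    # distant partial sums is tiny, but for the harmonic series it is not,
    # even though its terms go to zero.
    def partial(term, n):
        return sum(term(k) for k in range(1, n + 1))

    geom = lambda k: 0.5 ** k
    harm = lambda k: 1 / k
    n, m = 1000, 2000
    print(abs(partial(geom, m) - partial(geom, n)))   # essentially zero
    print(abs(partial(harm, m) - partial(harm, n)))   # about ln 2 = 0.693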
Imagine a sphere resting on the (x,y) plane at the origin. Every point P(x,y) can be mapped onto the sphere by the intersection with the sphere of a line from the north pole to P. It is convenient to choose the radius of the sphere so that points on the unit circle x^2 + y^2 = 1 map onto the equator of the sphere. Then we see that all points at infinity are mapped onto the single point at the north pole. The complex mapping w = 1/z maps infinity to the origin, and the origin to the point at infinity. A line in the plane maps into a closed curve that passes through the point at infinity. The analogue of a straight line on the surface of a sphere is a great circle, which is a closed curve without any point at infinity. In another picture, the points at infinity are conceived to be on a straight line, the line at infinity. An ellipse does not intersect this line, a parabola touches it, and a hyperbola cuts it. A line may be considered to identify a point at infinity that is at both ends of the line. Parallel lines pass through the same point at infinity in this picture.
The reciprocal of an infinity is an infinitesimal, a concept of great use in the differential calculus. We often find infinities and infinitesimals combined in an algebraic expression. For example, in sin x/x the numerator and denominator both approach zero as x approaches 0, becoming infinitesimals. A form such as this is represented as 0/0 and is called an indeterminate form, since its value is not immediately apparent. Under certain conditions, we may differentiate both numerator and denominator without changing the value of the limit (l'Hôpital's rule) and remove the indeterminacy. In this case, we find cos x/1, which is 1 at x = 0 and is not indeterminate. The same procedure may help with ∞/∞, permitting a simplification. The forms 0·∞, 1^∞, 0^0 and ∞^0 are also indeterminate. 0^∞ = 0 and ∞^∞ = ∞ are not indeterminate. The limit of (1 + a/x)^x as x → ∞, which is of the form 1^∞, is, as we know, e^a. (1 + 1/1000)^1000 = 2.7169, while the limit is e^1 = 2.7183.
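The last limit is easily checked numerically:

    # The form 1^infinity: (1 + 1/x)^x approaches e = 2.71828 as x grows.
    import math
    for x in (10, 1000, 100000):
        print(x, (1 + 1 / x) ** x)
    print(math.e)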
Two infinities for which the ratio ∞/∞ approaches a finite number greater than zero are said to be of the same order. If this ratio is zero, the infinity in the numerator is of lower order than the infinity in the denominator; if it is infinite, the numerator infinity is of higher order. This relation can also be investigated through ∞ - ∞, as in the definition of Euler's constant; in that case, the partial sum of the harmonic series can be said to be of the order of ln n. Infinitesimals can be classified by order in a similar way.
Every good Calculus text includes a chapter on infinite series, generally toward the back of the book.
R. Courant, Differential and Integral Calculus, 2nd ed., Vol. I (London: Blackie and Son, 1936). Chapter VIII.
E. T. Whittaker and G. N. Watson, A Course of Modern Analysis, 4th ed. (Cambridge: Cambridge University Press, 1958). Chapter II.
D. V. Widder, Advanced Calculus, 2nd ed. (New York: Dover, 1989). Chapter 9.
H. B. Dwight, Tables of Integrals and Other Mathematical Data, 4th ed. (New York: Macmillan, 1961). An excellent source for series and expansions, now regrettably out of print.
Composed by J. B. Calvert
Created 29 October 2004
Last revised