In our study of series so far, almost every series that we've considered has exclusively nonnegative terms. Of course, it is possible to consider series that have some negative terms. For instance, if we consider the geometric series
\begin{equation*}
2 - \frac{4}{3} + \frac{8}{9} - \cdots + 2 \left(-\frac{2}{3} \right)^n + \cdots\text{,}
\end{equation*}
which has \(a = 2\) and \(r = -\frac{2}{3}\text{,}\) we see that not only do the terms alternate in sign, but also that this series converges to
\begin{equation*}
S = \frac{a}{1-r} = \frac{2}{1 - \left(-\frac{2}{3}\right)} = \frac{6}{5}\text{.}
\end{equation*}
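As a quick numerical check of the closed form above, the following sketch (in Python; the variable names are our own) compares the partial sums of this geometric series to \(a/(1-r)\text{:}\)

```python
# Partial sums of the alternating geometric series with a = 2, r = -2/3.
a, r = 2.0, -2.0 / 3.0
partial = sum(a * r**n for n in range(50))

# The closed form a / (1 - r) gives 6/5 = 1.2.
closed_form = a / (1 - r)
print(partial, closed_form)
```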
In Preview Activity 8.4.1 and our following discussion, we investigate the behavior of similar series where consecutive terms have opposite signs.
Preview Activity 8.4.1
Preview Activity 8.3.1 showed how we can approximate the number \(e\) with linear, quadratic, and other polynomial approximations. We use a similar approach in this activity to obtain linear and quadratic approximations to \(\ln(2)\text{.}\) Along the way, we encounter a type of series that is different than most of the ones we have seen so far. Throughout this activity, let \(f(x) = \ln(1+x)\text{.}\)
Find the tangent line to \(f\) at \(x=0\) and use this linearization to approximate \(\ln(2)\text{.}\) That is, find \(L(x)\text{,}\) the tangent line approximation to \(f(x)\text{,}\) and use the fact that \(L(1) \approx f(1)\) to estimate \(\ln(2)\text{.}\)

The linearization of \(\ln(1+x)\) does not provide a very good approximation to \(\ln(2)\) since 1 is not that close to 0. To obtain a better approximation, we alter our approach; instead of using a straight line to approximate \(\ln(2)\text{,}\) we use a quadratic function to account for the concavity of \(\ln(1+x)\) for \(x\) close to 0. With the linearization, both the function's value and slope agree with the linearization's value and slope at \(x=0\text{.}\) We will now make a quadratic approximation \(P_2(x)\) to \(f(x) = \ln(1+x)\) centered at \(x=0\) with the property that \(P_2(0) = f(0)\text{,}\) \(P'_2(0) = f'(0)\text{,}\) and \(P''_2(0) = f''(0)\text{.}\)
Let \(P_2(x) = x - \frac{x^2}{2}\text{.}\) Show that \(P_2(0) = f(0)\text{,}\) \(P'_2(0) = f'(0)\text{,}\) and \(P''_2(0) = f''(0)\text{.}\) Use \(P_2(x)\) to approximate \(\ln(2)\) by using the fact that \(P_2(1) \approx f(1)\text{.}\)
We can continue approximating \(\ln(2)\) with polynomials of larger degree whose derivatives agree with those of \(f\) at 0. This makes the polynomials fit the graph of \(f\) better for more values of \(x\) around 0. For example, let \(P_3(x) = x - \frac{x^2}{2}+\frac{x^3}{3}\text{.}\) Show that \(P_3(0) = f(0)\text{,}\) \(P'_3(0) = f'(0)\text{,}\) \(P''_3(0) = f''(0)\text{,}\) and \(P'''_3(0) = f'''(0)\text{.}\) Taking a similar approach to the preceding questions, use \(P_3(x)\) to approximate \(\ln(2)\text{.}\)
If we used a degree 4 or degree 5 polynomial to approximate \(\ln(1+x)\text{,}\) what approximations of \(\ln(2)\) do you think would result? Use the preceding questions to conjecture a pattern that holds, and state the degree 4 and degree 5 approximations.
Subsection 8.4.1 The Alternating Series Test
Preview Activity 8.4.1 gives us several approximations to \(\ln(2)\text{:}\) the linear approximation is \(1\) and the quadratic approximation is \(1 - \frac{1}{2} = \frac{1}{2}\text{.}\) If we continue this process, we obtain approximations from cubic, quartic (degree 4), quintic (degree 5), and higher degree polynomials, giving us the approximations to \(\ln(2)\) in Table 8.4.1.
The pattern here shows the fact that the number \(\ln(2)\) can be approximated by the partial sums of the infinite series
\begin{equation}
\sum_{k=1}^{\infty} (-1)^{k+1} \frac{1}{k}\label{eqln2}\tag{8.4.1}
\end{equation}
where the alternating signs are determined by the factor \((-1)^{k+1}\text{.}\)
\begin{center}
\begin{tabular}{lll}
approximation \amp partial sum \amp value \\
linear \amp \(1\) \amp \(1\) \\
quadratic \amp \(1 - \frac{1}{2}\) \amp \(0.5\) \\
cubic \amp \(1 - \frac{1}{2} + \frac{1}{3}\) \amp \(0.8\overline{3}\) \\
quartic \amp \(1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4}\) \amp \(0.58\overline{3}\) \\
quintic \amp \(1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5}\) \amp \(0.78\overline{3}\)
\end{tabular}
\end{center}
Table 8.4.1 Polynomial approximations to \(\ln(2)\)
Using computational technology, we find that 0.6881721793 is the sum of the first 100 terms in this series. As a comparison, \(\ln(2) \approx 0.6931471806\text{.}\) This shows that even though the series (8.4.1) converges to \(\ln(2)\text{,}\) it must do so quite slowly, since the sum of the first 100 terms isn't particularly close to \(\ln(2)\text{.}\) We will investigate the issue of how quickly an alternating series converges later in this section. Again, note particularly that the series (8.4.1) is different from the series we have considered earlier in that some of the terms are negative. We call such a series an alternating series.
Definition 8.4.2
An alternating series is a series of the form
\begin{equation*}
\sum_{k=0}^{\infty} (-1)^k a_k\text{,}
\end{equation*}
where \(a_k \gt 0\) for each \(k\text{.}\)
We have some flexibility in how we write an alternating series; for example, the series
\begin{equation*}
\sum_{k=1}^{\infty} (-1)^{k+1} a_k\text{,}
\end{equation*}
whose index starts at \(k = 1\text{,}\) is also alternating. As we will soon see, several very nice results hold for alternating series, while alternating series can also demonstrate some unusual behavior.
It is important to remember that most of the series tests we have seen in previous sections apply only to series with nonnegative terms. Thus, alternating series require a different test. To investigate this idea, we return to the example in Preview Activity 8.4.1.
Activity 8.4.2
Remember that, by definition, a series converges if and only if its corresponding sequence of partial sums converges.

Calculate the first few partial sums (to 10 decimal places) of the alternating series
\begin{equation*}
\sum_{k=1}^{\infty} (-1)^{k+1}\frac{1}{k}
\end{equation*}
and record your responses below.
Plot the sequence of partial sums from part a in the plane. What do you notice about this sequence?
\begin{align*}
\sum_{k=1}^{1} (-1)^{k+1}\frac{1}{k} \amp= \amp \sum_{k=1}^{6} (-1)^{k+1}\frac{1}{k} \amp=\\
\sum_{k=1}^{2} (-1)^{k+1}\frac{1}{k} \amp= \amp \sum_{k=1}^{7} (-1)^{k+1}\frac{1}{k} \amp=\\
\sum_{k=1}^{3} (-1)^{k+1}\frac{1}{k} \amp= \amp \sum_{k=1}^{8} (-1)^{k+1}\frac{1}{k} \amp=\\
\sum_{k=1}^{4} (-1)^{k+1}\frac{1}{k} \amp= \amp \sum_{k=1}^{9} (-1)^{k+1}\frac{1}{k} \amp=\\
\sum_{k=1}^{5} (-1)^{k+1}\frac{1}{k} \amp= \phantom{0.7833333333} \amp \sum_{k=1}^{10} (-1)^{k+1}\frac{1}{k} \amp=
\end{align*}
Activity 8.4.2 exemplifies the general behavior that any convergent alternating series will demonstrate. In this example, we see that the partial sums of the alternating harmonic series oscillate around a fixed number that turns out to be the sum of the series.
Recall that if \(\lim_{k \to \infty} a_k \neq 0\text{,}\) then the series \(\sum a_k\) diverges by the Divergence Test. From this point forward, we will thus only consider alternating series
\begin{equation*}
\sum_{k=1}^{\infty} (-1)^{k+1} a_k
\end{equation*}
in which the sequence \(a_k\) consists of positive numbers that decrease to 0. For such a series, the \(n\)th partial sum \(S_n\) satisfies
\begin{equation*}
S_n = \sum_{k=1}^n (-1)^{k+1} a_k\text{.}
\end{equation*}
Notice that
\(S_2 = a_1 - a_2\text{,}\) and since \(a_1 \gt a_2\) we have \(0 \lt S_2 \lt S_1 \text{.}\)
\(S_3 = S_2+a_3\) and so \(S_2 \lt S_3\text{.}\) But \(a_3 \lt a_2\text{,}\) so \(S_3 \lt S_1\text{.}\) Thus, \(0 \lt S_2 \lt S_3 \lt S_1 \text{.}\)
\(S_4 = S_3 - a_4\) and so \(S_4 \lt S_3\text{.}\) But \(a_4 \lt a_3\text{,}\) so \(S_2 \lt S_4\text{.}\) Thus, \(0 \lt S_2 \lt S_4 \lt S_3 \lt S_1 \text{.}\)
\(S_5 = S_4+a_5\) and so \(S_4 \lt S_5\text{.}\) But \(a_5 \lt a_4\text{,}\) so \(S_5 \lt S_3\text{.}\) Thus, \(0 \lt S_2 \lt S_4 \lt S_5 \lt S_3 \lt S_1 \text{.}\)
This pattern continues as illustrated in Figure 8.4.5 (with \(n\) odd) so that each partial sum lies between the previous two partial sums.
Figure 8.4.5 Partial sums of an alternating series
Note further that the absolute value of the difference between the \((n-1)\)st partial sum \(S_{n-1}\) and the \(n\)th partial sum \(S_n\) is
\begin{equation*}
\left\lvert S_n - S_{n-1} \right\rvert = a_n\text{.}
\end{equation*}
Since the sequence \(\{a_n\}\) converges to 0, the distance between consecutive partial sums becomes as small as we'd like, and the nested pattern above traps the partial sums in a shrinking collection of intervals, so the sequence of partial sums converges (even though we don't know the exact value to which the sequence of partial sums converges).
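The nesting of partial sums described above is easy to observe numerically. The following sketch (in Python, using the alternating harmonic series as the example) checks that the odd-indexed partial sums decrease, the even-indexed ones increase, and every even partial sum lies below every odd one:

```python
# Partial sums S_1, S_2, ..., S_10 of the alternating harmonic series.
S = []
running = 0.0
for k in range(1, 11):
    running += (-1) ** (k + 1) / k
    S.append(running)

odds = S[0::2]   # S_1, S_3, S_5, ... : decreasing
evens = S[1::2]  # S_2, S_4, S_6, ... : increasing
assert all(x > y for x, y in zip(odds, odds[1:]))
assert all(x < y for x, y in zip(evens, evens[1:]))
# Every even partial sum lies below every odd partial sum.
assert max(evens) < min(odds)
```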
The preceding discussion has demonstrated the truth of the Alternating Series Test.
The Alternating Series Test
Given an alternating series
\begin{equation*}
\sum (-1)^k a_k\text{,}
\end{equation*}
if the sequence \(\{a_k\}\) of positive terms decreases to 0 as \(k \to \infty\text{,}\) then the alternating series converges.
Note particularly that if the limit of the sequence \(\{a_k\}\) is not 0, then the alternating series diverges.
Activity 8.4.3
Which series converge and which diverge? Justify your answers.
\(\sum_{k=1}^{\infty} \frac{(-1)^k}{k^2+2}\)
\(\sum_{k=1}^{\infty} \frac{(-1)^{k+1}2k}{k+5}\)
\(\sum_{k=2}^{\infty} \frac{(-1)^{k}}{\ln(k)}\)
Subsection 8.4.2 Estimating Alternating Sums
The argument for the Alternating Series Test also provides us with a method to determine how close the \(n\)th partial sum \(S_n\) is to the actual sum of a convergent alternating series. To see how this works, let \(S\) be the sum of a convergent alternating series, so
\begin{equation*}
S = \sum_{k=1}^{\infty} (-1)^{k+1} a_k\text{.}
\end{equation*}
Recall that the sequence of partial sums oscillates around the sum \(S\) so that
\begin{equation*}
\left\lvert S - S_n \right\rvert \lt \left\lvert S_{n+1} - S_n \right\rvert = a_{n+1}\text{.}
\end{equation*}
Therefore, the value of the term \(a_{n+1}\) provides an error estimate for how well the partial sum \(S_n\) approximates the actual sum \(S\text{.}\) We summarize this fact in the statement of the Alternating Series Estimation Theorem.
Alternating Series Estimation Theorem
If the alternating series
\begin{equation*}
\sum_{k=1}^{\infty} (-1)^{k+1}a_k
\end{equation*}
converges and has sum \(S\text{,}\) and
\begin{equation*}
S_n = \sum_{k=1}^{n} (-1)^{k+1}a_k
\end{equation*}
is the \(n\)th partial sum of the alternating series, then
\begin{equation*}
\left\lvert \sum_{k=1}^{\infty} (-1)^{k+1}a_k - S_n \right\rvert \leq a_{n+1}\text{.}
\end{equation*}
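In practice, the bound \(a_{n+1}\) tells us how many terms guarantee a desired accuracy: we search for the first \(n\) with \(a_{n+1} \leq\) the tolerance. A small sketch in Python (the function name and interface are our own):

```python
def terms_needed(a, tol):
    """Smallest n for which a(n + 1) <= tol, so that the Alternating
    Series Estimation Theorem guarantees |S - S_n| <= tol."""
    n = 1
    while a(n + 1) > tol:
        n += 1
    return n

# Alternating harmonic series: a_k = 1/k, so 1/(n + 1) <= 0.01 forces n >= 99.
print(terms_needed(lambda k: 1 / k, 0.01))  # 99
```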
Example 8.4.6
Let's determine how well the 100th partial sum \(S_{100}\) of
\begin{equation*}
\sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k}
\end{equation*}
approximates the sum of the series.
Solution
If we let \(S\) be the sum of the series \(\sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k}\text{,}\) then we know that
\begin{equation*}
\left\lvert S_{100} - S \right\rvert \lt a_{101}\text{.}
\end{equation*}
Now
\begin{equation*}
a_{101} = \frac{1}{101} \approx 0.0099\text{,}
\end{equation*}
so the 100th partial sum is within 0.0099 of the sum of the series. We have discussed the fact (and will later verify) that
\begin{equation*}
S = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} = \ln(2)\text{,}
\end{equation*}
and so \(S \approx 0.693147\) while
\begin{equation*}
S_{100} = \sum_{k=1}^{100} \frac{(-1)^{k+1}}{k} \approx 0.6881721793\text{.}
\end{equation*}
We see that the actual difference between \(S\) and \(S_{100}\) is approximately \(0.0049750013\text{,}\) which is indeed less than \(0.0099\text{.}\)
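These computations are easy to reproduce. The sketch below (in Python) recomputes \(S_{100}\) and checks that the actual error is indeed smaller than the bound \(a_{101}\text{:}\)

```python
import math

# 100th partial sum of the alternating harmonic series.
S_100 = sum((-1) ** (k + 1) / k for k in range(1, 101))

error = abs(math.log(2) - S_100)   # actual error, about 0.004975
bound = 1 / 101                    # a_{101}, about 0.0099

print(S_100, error, bound)
assert error < bound
```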
Activity 8.4.4
Determine the number of terms it takes to approximate the sum of the convergent alternating series
\begin{equation*}
\sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k^4}
\end{equation*}
to within 0.0001.
Subsection 8.4.3 Absolute and Conditional Convergence
A series such as
\begin{equation}
1 - \frac{1}{4} - \frac{1}{9} + \frac{1}{16} + \frac{1}{25} + \frac{1}{36} - \frac{1}{49} - \frac{1}{64} - \frac{1}{81} - \frac{1}{100} + \cdots\label{eq84absconvergence}\tag{8.4.2}
\end{equation}
whose terms are neither all nonnegative nor alternating is different from any series that we have considered to date. The behavior of these series can be rather complicated, but there is an important connection between these arbitrary series that have some negative terms and series with all nonnegative terms that we illustrate with the next activity.
Activity 8.4.5

Explain why the series
\begin{equation*}
1 - \frac{1}{4} - \frac{1}{9} + \frac{1}{16} + \frac{1}{25} + \frac{1}{36} - \frac{1}{49} - \frac{1}{64} - \frac{1}{81} - \frac{1}{100} + \cdots
\end{equation*}
must have a sum that is less than the series
\begin{equation*}
\sum_{k=1}^{\infty} \frac{1}{k^2}\text{.}
\end{equation*}

Explain why the series
\begin{equation*}
1 - \frac{1}{4} - \frac{1}{9} + \frac{1}{16} + \frac{1}{25} + \frac{1}{36} - \frac{1}{49} - \frac{1}{64} - \frac{1}{81} - \frac{1}{100} + \cdots
\end{equation*}
must have a sum that is greater than the series
\begin{equation*}
-\sum_{k=1}^{\infty} \frac{1}{k^2}\text{.}
\end{equation*}

Given that the terms in the series
\begin{equation*}
1 - \frac{1}{4} - \frac{1}{9} + \frac{1}{16} + \frac{1}{25} + \frac{1}{36} - \frac{1}{49} - \frac{1}{64} - \frac{1}{81} - \frac{1}{100} + \cdots
\end{equation*}
converge to 0, what do you think the previous two results tell us about the convergence status of this series?
As the example in Activity 8.4.5 suggests, if we have a series \(\sum a_k\) (some of whose terms may be negative) such that \(\sum \lvert a_k \rvert\) converges, it turns out to always be the case that the original series, \(\sum a_k\text{,}\) must also converge. That is, if \(\sum \lvert a_k \rvert\) converges, then so must \(\sum a_k\text{.}\)
As we just observed, this is the case for the series (8.4.2), since the corresponding series of the absolute values of its terms is the convergent \(p\)-series \(\sum \frac{1}{k^2}\text{.}\) At the same time, there are series like the alternating harmonic series \(\sum (-1)^{k+1} \frac{1}{k}\) that converge, while the corresponding series of absolute values, \(\sum \frac{1}{k}\text{,}\) diverges. We distinguish between these behaviors by introducing the following language.
Definition 8.4.7
Consider a series \(\sum a_k\text{.}\)
The series \(\sum a_k\) converges absolutely (or is absolutely convergent) provided that \(\sum \lvert a_k \rvert\) converges.
The series \(\sum a_k\) converges conditionally (or is conditionally convergent) provided that \(\sum \lvert a_k \rvert\) diverges and \(\sum a_k\) converges.
In this terminology, the series (8.4.2) converges absolutely while the alternating harmonic series is conditionally convergent.
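The distinction shows up numerically in the partial sums of the two absolute-value series: \(\sum \frac{1}{k^2}\) has bounded partial sums, while \(\sum \frac{1}{k}\) grows without bound, like \(\ln(n)\text{.}\) A sketch in Python (the cutoff \(N\) is an arbitrary choice of ours):

```python
import math

N = 10**6

# Partial sums of the two absolute-value series up to N terms.
p2 = sum(1 / k**2 for k in range(1, N + 1))  # convergent p-series (sum is pi^2/6)
p1 = sum(1 / k for k in range(1, N + 1))     # harmonic series; grows like ln(N)

print(p2)               # about 1.6449
print(p1, math.log(N))  # about 14.39 versus 13.82
```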
Activity 8.4.6

Consider the series \(\sum (-1)^k \frac{\ln(k)}{k}\text{.}\)
Does this series converge? Explain.
Does this series converge absolutely? Explain what test you use to determine your answer.

Consider the series \(\sum (-1)^k \frac{\ln(k)}{k^2}\text{.}\)
Does this series converge? Explain.
Does this series converge absolutely? Hint: Use the fact that \(\ln(k) \lt \sqrt{k}\) for large values of \(k\) and then compare to an appropriate \(p\)-series.
Conditionally convergent series turn out to be very interesting. If the sequence \(\{a_n\}\) decreases to 0, but the series \(\sum a_k\) diverges, the conditionally convergent series \(\sum (-1)^k a_k\) is right on the borderline of being a divergent series. As a result, any conditionally convergent series converges very slowly. Furthermore, some very strange things can happen with conditionally convergent series, as illustrated in some of the exercises.
Subsection Exercises
1 Testing convergence for an alternating series
2 Estimating the sum of an alternating series
3 Estimating the sum of a different alternating series
4 Estimating the sum of one more alternating series
5
Conditionally convergent series converge very slowly. As an example, consider the famous formula
\begin{equation}
\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots = \sum_{k=0}^{\infty} (-1)^{k} \frac{1}{2k+1}\text{.}\label{Ex84pi}\tag{8.4.3}
\end{equation}
In theory, the partial sums of this series could be used to approximate \(\pi\text{.}\)
Show that the series in (8.4.3) converges conditionally.
Let \(S_n\) be the \(n\)th partial sum of the series in (8.4.3). Calculate the error in approximating \(\frac{\pi}{4}\) with \(S_{100}\) and explain why this is not a very good approximation.
Determine the number of terms it would take in the series (8.4.3) to approximate \(\frac{\pi}{4}\) to 10 decimal places. (The fact that it takes such a large number of terms to obtain even a modest degree of accuracy is why we say that conditionally convergent series converge very slowly.)
6
We have shown that if \(\sum (-1)^{k+1} a_k\) is a convergent alternating series, then the sum \(S\) of the series lies between any two consecutive partial sums \(S_n\) and \(S_{n+1}\text{.}\) This suggests that the average \(\frac{S_n+S_{n+1}}{2}\) is a better approximation to \(S\) than is \(S_n\text{.}\)
Show that \(\frac{S_n+S_{n+1}}{2} = S_n + \frac{1}{2}(-1)^{n+2} a_{n+1}\text{.}\)

Use this revised approximation in (a) with \(n = 20\) to approximate \(\ln(2)\) given that
\begin{equation*}
\ln(2) = \sum_{k=1}^{\infty} (-1)^{k+1} \frac{1}{k}\text{.}
\end{equation*}
Compare this to the approximation using just \(S_{20}\text{.}\) For your convenience, \(S_{20} = \frac{155685007}{232792560}\text{.}\)
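As a numerical check of this averaging idea, the sketch below (in Python, using exact fractions; the variable names are our own) compares the error of \(S_{20}\) with the error of the averaged approximation:

```python
from fractions import Fraction
import math

# S_20 and S_21 for the alternating harmonic series, computed exactly.
S20 = sum(Fraction((-1) ** (k + 1), k) for k in range(1, 21))
S21 = S20 + Fraction(1, 21)
avg = (S20 + S21) / 2

assert S20 == Fraction(155685007, 232792560)  # the value quoted above

print(abs(float(S20) - math.log(2)))  # error of S_20, about 0.024
print(abs(float(avg) - math.log(2)))  # error of the average, about 0.0006
```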
7
In this exercise, we examine one of the conditions of the Alternating Series Test. Consider the alternating series
\begin{equation*}
1 - 1 + \frac{1}{2} - \frac{1}{4} + \frac{1}{3} - \frac{1}{9} + \frac{1}{4} - \frac{1}{16} + \cdots\text{,}
\end{equation*}
where the terms are selected alternately from the sequences \(\left\{\frac{1}{n}\right\}\) and \(\left\{\frac{1}{n^2}\right\}\text{.}\)
Explain why the \(n\)th term of the given series converges to 0 as \(n\) goes to infinity.

Rewrite the given series by grouping terms in the following manner:
\begin{equation*}
(1 - 1) + \left(\frac{1}{2} - \frac{1}{4}\right) + \left(\frac{1}{3} - \frac{1}{9}\right) + \left(\frac{1}{4} - \frac{1}{16}\right) + \cdots\text{.}
\end{equation*}
Use this regrouping to determine if the series converges or diverges.
Explain why the condition that the sequence \(\{a_n\}\) decreases to a limit of 0 is included in the Alternating Series Test.
8
Conditionally convergent series exhibit interesting and unexpected behavior. In this exercise we examine the conditionally convergent alternating harmonic series \(\sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k}\) and discover that addition is not commutative for conditionally convergent series. We will also encounter Riemann's Theorem concerning rearrangements of conditionally convergent series. Before we begin, we remind ourselves that
\begin{equation*}
\sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} = \ln(2)\text{,}
\end{equation*}
a fact which will be verified in a later section.

First we make a quick analysis of the positive and negative terms of the alternating harmonic series.
Show that the series \(\sum_{k=1}^{\infty} \frac{1}{2k}\) diverges.
Show that the series \(\sum_{k=1}^{\infty} \frac{1}{2k+1}\) diverges.
Based on the results of the previous parts of this exercise, what can we say about the sums \(\sum_{k=C}^{\infty} \frac{1}{2k}\) and \(\sum_{k=C}^{\infty} \frac{1}{2k+1}\) for any positive integer \(C\text{?}\) Be specific in your explanation.

Recall that addition of real numbers is commutative; that is,
\begin{equation*}
a + b = b + a
\end{equation*}
for any real numbers \(a\) and \(b\text{.}\) This property is valid for any sum of finitely many terms, but does this property extend when we add infinitely many terms together?
The answer is no, and something even more odd happens. Riemann's Theorem (after the nineteenth-century mathematician Georg Friedrich Bernhard Riemann) states that a conditionally convergent series can be rearranged to converge to any prescribed sum. More specifically, this means that if we choose any real number \(S\text{,}\) we can rearrange the terms of the alternating harmonic series \(\sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k}\) so that the sum is \(S\text{.}\) To understand how Riemann's Theorem works, let's assume for the moment that the number \(S\) we want our rearrangement to converge to is positive. Our job is to find a way to order the sum of terms of the alternating harmonic series to converge to \(S\text{.}\)

Explain how we know that, regardless of the value of \(S\text{,}\) we can find a partial sum \(P_1\)
\begin{equation*}
P_1 = \sum_{k=0}^{n_1} \frac{1}{2k+1} = 1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{2n_1+1}
\end{equation*}
of the positive terms of the alternating harmonic series that equals or exceeds \(S\text{.}\) Let
\begin{equation*}
S_1 = P_1\text{.}
\end{equation*}

Explain how we know that, regardless of the value of \(S_1\text{,}\) we can find a partial sum \(N_1\)
\begin{equation*}
N_1 = -\sum_{k=1}^{m_1} \frac{1}{2k} = -\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{2m_1}
\end{equation*}
so that
\begin{equation*}
S_2 = S_1 + N_1 \leq S\text{.}
\end{equation*}

Explain how we know that, regardless of the value of \(S_2\text{,}\) we can find a partial sum \(P_2\)
\begin{equation*}
P_2 = \sum_{k=n_1+1}^{n_2} \frac{1}{2k+1} = \frac{1}{2(n_1+1)+1} + \frac{1}{2(n_1+2)+1} + \cdots + \frac{1}{2n_2+1}
\end{equation*}
of the remaining positive terms of the alternating harmonic series so that
\begin{equation*}
S_3 = S_2 + P_2 \geq S\text{.}
\end{equation*}

Explain how we know that, regardless of the value of \(S_3\text{,}\) we can find a partial sum
\begin{equation*}
N_2 = -\sum_{k=m_1+1}^{m_2} \frac{1}{2k} = -\frac{1}{2(m_1+1)} - \frac{1}{2(m_1+2)} - \cdots - \frac{1}{2m_2}
\end{equation*}
of the remaining negative terms of the alternating harmonic series so that
\begin{equation*}
S_4 = S_3 + N_2 \leq S\text{.}
\end{equation*}
Explain why we can continue this process indefinitely and find a sequence \(\{S_n\}\) whose terms are partial sums of a rearrangement of the terms in the alternating harmonic series so that \(\lim_{n \to \infty} S_n = S\text{.}\)
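The back-and-forth process described in this exercise can be simulated directly. Here is a sketch in Python (the function name and the sample targets are our own choices) of the greedy rearrangement:

```python
def rearrange(S, steps):
    """Greedy rearrangement of the alternating harmonic series toward a
    target S > 0: add unused positive terms 1, 1/3, 1/5, ... while the
    running sum is below S, and unused negative terms -1/2, -1/4, ...
    while it is at or above S."""
    pos, neg = 1, 2   # next unused positive / negative denominators
    total = 0.0
    for _ in range(steps):
        if total < S:
            total += 1 / pos
            pos += 2
        else:
            total -= 1 / neg
            neg += 2
    return total

print(rearrange(1.5, 100_000))  # close to the target 1.5
```

Because each crossing of the target uses a term whose size shrinks to 0, the running sum is eventually trapped within an ever-smaller band around \(S\text{,}\) exactly as the exercise argues.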