How the Sum of the Natural Numbers Equals (and Does Not Equal) -1/12

You may have seen an episode of the Numberphile YouTube series which talks about how \(1 + 2 + 3 + \cdots = -1/12\).

That episode has spawned an incredible number of discussions and flame wars, the likes of which I haven’t seen over a mathematical claim since the age-old fight about whether \(0.99\overline{9} = 1\). (Yes, it does.) The nice thing about that debate is that there is an unambiguous, correct answer: under every reasonable way you can define “zero point nine repeating”, it is provably, definitely equal to the real number 1. All that’s left to do is make a convincing argument aimed at a lay audience, using an incredibly basic mathematical toolset that doesn’t actually contain the tools you need to prove the claim.

This claim, on the other hand, is much more subtle. Convincing someone of the way it’s true is much more like convincing someone that The Dress is <insert colour here> when it’s clearly <insert colour here>.

The Numberphile episode gives a cute little “proof” of the claim which is probably not very satisfying. This is aggravated by the fact that internet points tend to be awarded to whoever can be the biggest contrarian, and so the discussions about this claim tend to be dominated by simple rebuttals. Those rebuttals are correct, but only skin-deep.

So yes, \(1 + 2 + 3 + \cdots \ne -1/12\). It’s not equal to any number. The completely correct, technical statement is that the series diverges to \(\infty\). End of story.

But that wouldn’t make for a very interesting blog post, now would it?

Thankfully, the truth of the matter is more subtle and interesting. Let’s start from the beginning.

There are many ways to define the value of an infinite sum (called a series in mathematics), which we denote by:

$$\sum a_n = a_1 + a_2 + \cdots$$

Once you have Calculus in your mathematical toolbelt and a definition of limits to work with, the most straightforward definition is to say that \(\sum a_n\) is the limit of the sequence of partial sums \(a_1,\ a_1 + a_2,\ a_1 + a_2 + a_3\), and so on. (In other words, cut off the sum at progressively higher bounds and take the limit.) If this limit is finite, we say the series converges.
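To make this concrete, here’s a tiny Python sketch of the definition (the helper name `partial_sum` is just mine, for illustration): truncating the convergent series \(\sum 1/n^2\) at higher and higher bounds approaches its known limit, \(\pi^2/6\).

```python
# A minimal sketch of the partial-sum definition of convergence.
# Truncate the series sum(1/n^2) at increasing bounds N and watch
# the partial sums approach the known limit pi^2/6.

import math

def partial_sum(terms, N):
    """Sum the first N terms of the sequence n -> terms(n)."""
    return sum(terms(n) for n in range(1, N + 1))

for N in (10, 100, 1000, 10000):
    s = partial_sum(lambda n: 1 / n**2, N)
    print(f"N={N:>5}: partial sum = {s:.6f}  (limit = {math.pi**2 / 6:.6f})")
```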

This sounds reasonable, and in fact this is the completely standard definition. But this turns out to be a fairly weak condition. Since addition commutes, we might expect the order of the terms \(a_n\) not to actually matter in the series \(\sum a_n\). We might expect that we can rearrange the terms any way we want, and the series should converge to the same number.

But that’s not the case. There are series which converge, but if you rearrange the terms they converge to a completely different number. It turns out that in order to have the nice property of being equal under rearrangements, the series needs to satisfy a condition called absolute convergence, which is pretty much what it sounds like: \(\sum a_n\) needs to converge, but so does the series of absolute values \(\sum |a_n|\).

In fact, there’s a cute result that if a series converges but is not absolutely convergent, then you can always rearrange the terms to give you any number you want.
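(That cute result is the Riemann series theorem, and its proof is essentially an algorithm: greedily take positive terms until you overshoot your target, then negative terms until you undershoot, and repeat.) Here’s a rough Python sketch of that greedy idea, using the alternating harmonic series \(1 - 1/2 + 1/3 - \cdots\), which converges to \(\ln 2\) but not absolutely; the name `rearrange_to` is just mine:

```python
# A sketch of the Riemann series theorem's greedy rearrangement,
# applied to the alternating harmonic series 1 - 1/2 + 1/3 - ...,
# which converges to ln(2) but not absolutely. Each term is used
# exactly once; the partial sums approach whatever target we pick.

import math

def rearrange_to(target, num_terms=100_000):
    """Greedily rearrange the alternating harmonic series toward `target`."""
    pos, neg = 1, 2   # next unused odd (positive) and even (negative) denominators
    total = 0.0
    for _ in range(num_terms):
        if total < target:
            total += 1 / pos   # take the next positive term: 1/1, 1/3, 1/5, ...
            pos += 2
        else:
            total -= 1 / neg   # take the next negative term: -1/2, -1/4, ...
            neg += 2
    return total

print(rearrange_to(math.pi))      # ~3.14159..., nowhere near ln(2)
print(rearrange_to(math.log(2)))  # ~0.69314..., the usual order's value
```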

Okay, so far I’ve been talking about very strict definitions of infinite series. But you can also weaken the definition of convergence. For example, consider the series \(1 - 1 + 1 - 1 + 1 - \cdots\). This doesn’t converge absolutely. In fact it doesn’t converge at all, because as you cut it off at finite bounds, the partial sums flip between 0 and 1. But there’s still something interesting going on. Flipping between 0 and 1 like this seems like, in some sense, it should average out to \(1/2\). So one way to weaken the definition is to take all the partial sums, and ask what their “average” value is. This is called Cesàro summation.
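Here’s a quick Python sketch of Cesàro summation applied to this series (it’s known as Grandi’s series); `cesaro_mean` is just an illustrative name:

```python
# Cesaro summation of Grandi's series 1 - 1 + 1 - 1 + ...:
# the partial sums flip between 1 and 0, but their running
# averages settle down to 1/2.

def cesaro_mean(terms, N):
    """Average of the first N partial sums of n -> terms(n)."""
    total, running = 0.0, 0.0
    for n in range(N):
        total += terms(n)   # total is now the (n+1)-th partial sum
        running += total
    return running / N

grandi = lambda n: (-1) ** n   # 1, -1, 1, -1, ...
for N in (11, 101, 1001, 10001):
    print(f"N={N:>5}: Cesaro mean = {cesaro_mean(grandi, N):.4f}")
```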

(I promise I’m getting to the definition relevant to this discussion.) Yet another way to generalize the value of a series uses a method called analytic continuation. There’s a very important class of functions called holomorphic functions. If you have some background in Calculus, you can think of these as complex-valued generalizations of functions that are differentiable everywhere. These are the functions with the “nicest” properties you can have in complex analysis. (Complex Analysis is Calculus in the complex numbers.)

Well, there’s an important theorem in complex analysis called the identity theorem, which states that if you know all the values of a holomorphic function in even the tiniest region of its domain (you have to be careful how you define “region” here but I’m ignoring that detail), then those values determine the rest of the function across the rest of the complex numbers. Which is pretty incredible by itself. But it also gives us a way to take a function we only know how to compute in some small domain, and extend it uniquely to a much bigger set of inputs.

Which brings us to the Riemann zeta function. Given any fixed number \(s\), we can define a series:

$$\sum 1/n^s = 1 + 1/2^s + 1/3^s + \cdots$$

We can prove that this series converges whenever \(s\) is a complex number whose real part is greater than 1. So we can think of the series above as a function of \(s\) wherever it converges, which we denote by the Greek letter zeta:

$$\zeta(s) = \sum 1/n^s$$
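As a sanity check, we can compare truncations of this series against the `zeta` function from the mpmath Python library at a point like \(s = 2\), where the series converges (to \(\pi^2/6\), as it happens):

```python
# Partial sums of 1/n^s approach zeta(s) wherever the series
# converges. mpmath's zeta implements the full zeta function.

from mpmath import zeta

def truncated(s, N):
    """Partial sum 1 + 1/2^s + ... + 1/N^s."""
    return sum(1 / n**s for n in range(1, N + 1))

for N in (10, 1000, 100000):
    print(f"N={N:>6}: partial sum = {truncated(2, N):.8f}")

print(f"zeta(2) = {zeta(2)}")   # = pi^2/6 = 1.64493...
```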

But thanks to the identity theorem, we can prove this function has a unique generalization (an analytic continuation) to the rest of the complex plane, except for the single point \(s=1\), where it blows up. This is called the Riemann zeta function. It’s kind of important, as you can see by the length of its Wikipedia page.

What this has given us is a way to generalize the series \(\sum 1/n^s\) to a larger domain of values of \(s\) than we could originally. In particular, if we plug \(s=-1\) into that formula, we get the series \(1 + 2 + 3 + \cdots\) which we’re interested in. Remember, I said earlier that this series doesn’t converge using any of the typical definitions. But it has an analytic continuation in the form of the Riemann zeta function, which we can prove gives \(\zeta(-1) = -1/12\). Since the Riemann zeta function is the unique analytic continuation of the function \(\sum 1/n^s\), this gives us a formal way of identifying the number \(-1/12\) with the series \(1 + 2 + 3 + \cdots\).

But that’s not all! Plugging other numbers into \(\zeta(s)\) allows us to identify other divergent series with finite numbers. For example, \(\zeta(0)=-1/2\), which corresponds to the series \(1 + 1 + 1 + \cdots\), and \(\zeta(-2)=0\), which corresponds to the series \(1 + 4 + 9 + 16 + \cdots\). These give us the strange “identities”,

$$1 + 1 + 1 + \cdots = -1/2$$

$$1 + 2 + 3 + \cdots = -1/12$$

$$1 + 4 + 9 + 16 + \cdots = 0$$
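You don’t have to take my word for these values; mpmath’s `zeta` implements the analytic continuation, so a few lines of Python will confirm them:

```python
# The analytically continued zeta function at s = 0, -1, -2,
# using the mpmath library. These are the finite values attached
# to the divergent series above.

from mpmath import zeta

for s in (0, -1, -2):
    print(f"zeta({s}) = {zeta(s)}")
# zeta(0)  = -0.5
# zeta(-1) = -0.0833333333333333   (that's -1/12)
# zeta(-2) = 0.0
```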

This is nothing new. Euler discovered these strange results in the 1700s. But are they meaningful? Oddly enough, yes! These are all examples of a much more general technique in physics called regularization, where a naive formulation of a problem might include a value with a divergent sum, but it’s known that the actual value should be finite. This is used, for example, to calculate the Casimir effect.

(This gets a bit more technical from here on out.)

To see where these finite values come from another way, we can take a divergent series \(\sum a_n\) and somehow add a small parameter \(\epsilon\) which makes the series converge at positive but small values of \(\epsilon\), and equal the original series at \(\epsilon=0\). We can then analyze what this resulting function of \(\epsilon\) looks like near \(\epsilon=0\) to tell us about the “finite part” of the divergent series.

An incredibly useful mathematical tool is the Laurent series. Roughly speaking, it allows us to split up a function by something like “orders of magnitude” and analyze how a function behaves even around points where it is not defined. To see how this works, let’s start with \(1 + 2 + \cdots = \sum n\) and turn it into a function of \(\epsilon\):

$$ \sum n e^{-\epsilon n} $$

When \(\epsilon=0\), this becomes the original, divergent, series. But we can take the Laurent series expansion of this function around the point \(\epsilon=0\), which (skipping details) gives us

$$ \sum n e^{-\epsilon n} = \frac{1}{\epsilon^2} - \frac{1}{12} + \mathcal{O}(\epsilon^2) $$

Above, the notation \(\mathcal{O}(\epsilon^2)\) means some term that’s proportional to \(\epsilon^2\), which is approximately 0 when \(\epsilon\) is very small. What this leaves us with is a term that is very large, \(1/\epsilon^2\), and our old friend, the finite term \(-1/12\). This allows us to say something like, “the finite part of \(1 + 2 + 3 + \cdots\) is \(-1/12\)”.
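If you’d like to verify this numerically: the series \(\sum n e^{-\epsilon n}\) has a closed form, \(e^{-\epsilon}/(1 - e^{-\epsilon})^2\) (differentiate the geometric series), so we can subtract off the divergent \(1/\epsilon^2\) piece and watch what remains approach \(-1/12\). A small mpmath sketch, with `finite_part` being my own name:

```python
# Numeric check of the Laurent expansion: sum(n * exp(-eps*n)) has
# the closed form exp(-eps) / (1 - exp(-eps))^2, so subtracting the
# divergent 1/eps^2 piece should leave something close to -1/12.

from mpmath import mp, exp

mp.dps = 30   # extra precision, since we subtract two huge numbers

def finite_part(eps):
    closed_form = exp(-eps) / (1 - exp(-eps))**2   # = sum of n*e^(-eps*n)
    return closed_form - 1 / eps**2                # strip the divergent term

for eps in ("0.1", "0.01", "0.001"):
    print(f"eps={eps}: {finite_part(mp.mpf(eps))}")
# tends to -1/12 = -0.08333... as eps -> 0
```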

In a physical problem, the infinite contribution of the sum might be some part of the problem that can’t be properly computed yet but should disappear if we took everything into account.

But notice how we got the same result, \(-1/12\), in two completely different ways. This is no coincidence. If you would like to know more, check out this (very technical) blog post by Terence Tao where he explains that the two methods we looked at are equivalent.

So in conclusion, the contrarians are technically correct. \(1 + 2 + 3 + \cdots\) is not actually equal to \(-1/12\). At least not in the way we would typically define such a sum. But identifying the two quantities is not just a cheap parlor trick. It is deep, meaningful, and useful.
