[The following entry was originally posted on 29 September. This is the last of my posts on the Heidelberg Laureate Forum. For chess fans, I will return to our regular subject matter in my next post. Thanks for your patience while I was hanging out in math-land!]
Two days ago the Heidelberg Laureate Forum came to an end, with a grand farewell dinner at Heidelberg Castle. Back in July, when I was invited to join the blog team for the Heidelberg Laureate Forum, I didn’t accept immediately. I was busy, and a week in Germany would definitely affect my writing schedule. For several days I sat on the fence, but what finally decided me was… the fact that we would have the closing dinner in a castle! “How many times in my life will I have the chance to eat dinner in a castle?” I asked. And so I said yes.
In retrospect, I can tell you that it was worth it. Both the dinner in the castle, and the whole Heidelberg Laureate Forum, more than lived up to my expectations.
The beginning and ending of the forum’s scientific program were absolutely perfect. The first talk, on Monday morning, was a look backward: Raj Reddy talked about who invented computing. The history was much more complicated than I ever dreamed, but what really stuck with me was that “computing” is not just one thing. There were many insights, some far from obvious, that had to be combined to make up the experience that we today call by one name, “computing.”
The final talk, on Friday afternoon, was by John Hopcroft, and appropriately it was about the future of computer science. He said that students sometimes tell him that he was lucky to get his degree in 1964 (in electrical engineering rather than computer science, because computer science didn’t exist yet). He and many of the laureates had the opportunity to lay the foundations of the discipline from the ground up, inventing the first compilers and operating systems and so on. Surely there will never be such an opportunity again.
But Hopcroft argued that now is equally a time of great opportunity, and the students of today are lucky to be graduating in 2013. He mentioned new scientific fields like computational biology, sparse vectors, and the study of communities (what is a community, anyway?). All of these are just beginning to be explored.
For my mathematically minded readers, who may be feeling left out because the conference began and ended with computer science, Hopcroft happened to mention an intriguing subject that caused more conversation afterwards than anything else in his talk: the ways in which high-dimensional spaces are different from low-dimensional spaces.
The main puzzle is this: What is the hypervolume of an N-dimensional hyperball of radius 1, as N gets large? It seems as if it should get larger and larger. A beach ball is somehow rounder than a CD, and a 4-dimensional ball should be even rounder yet. Indeed, the volumes do go up at first. The area of a unit disk is pi (3.14159…). The volume of a unit sphere is 4/3 times pi. The hypervolume of a hypersphere, if I remember correctly, is 1/2 times pi squared. (Someone correct me if I’m wrong.)
But remarkably, somewhere around the sixth dimension the hyper-hyper-volumes start going down, and in the limit they approach zero! In other words, the unit N-ball is tiny compared to the unit N-cube. How can this be?
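If you don't quite believe it, here is a quick way to see it for yourself: a rough Monte Carlo sketch in Python (purely an illustration, and the counts will wobble from run to run) that throws random points into the cube [-1,1]^N and checks how many land inside the unit ball.

```python
import random

def ball_fraction(dim, samples=200_000):
    """Estimate the fraction of the cube [-1, 1]^dim that lies inside the unit ball."""
    hits = 0
    for _ in range(samples):
        point = [random.uniform(-1.0, 1.0) for _ in range(dim)]
        if sum(x * x for x in point) <= 1.0:
            hits += 1
    return hits / samples

for dim in range(2, 11):
    frac = ball_fraction(dim)
    # estimated ball volume = fraction of the cube times the cube's volume, 2^dim
    print(f"N={dim:2d}  fraction of cube inside ball = {frac:.5f}  "
          f"estimated ball volume = {frac * 2 ** dim:.3f}")
```

By N = 10, only about one random point in 400 lands inside the ball on average, even though the ball touches the center of every face of the cube.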
There are several answers. First, a physicist’s answer would be that you can’t compare them. The units are different, so it’s like comparing apples to oranges. However, that’s not very satisfying. The numbers really do mean something. As Hopcroft said, if you pick two random vectors on an N-dimensional sphere they are very likely to be perpendicular, or nearly so. That’s not true on a low-dimensional sphere. There is a genuine phenomenon here that needs explaining.
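(If you want to see that concentration for yourself, here is another quick Python sketch, again purely illustrative: pick pairs of random unit vectors by normalizing Gaussian samples, and look at how close their dot product stays to zero.)

```python
import math
import random

def random_unit_vector(dim):
    """A random direction: normalize a vector of independent Gaussians."""
    v = [random.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def mean_abs_cosine(dim, trials=3_000):
    """Average |cos(angle)| between two random unit vectors in the given dimension."""
    total = 0.0
    for _ in range(trials):
        u, w = random_unit_vector(dim), random_unit_vector(dim)
        total += abs(sum(a * b for a, b in zip(u, w)))
    return total / trials

for dim in (3, 30, 300):
    print(f"N={dim:3d}  average |cos(angle)| = {mean_abs_cosine(dim):.3f}  "
          f"(compare 1/sqrt(N) = {1 / math.sqrt(dim):.3f})")
```

In my test runs the average |cos(angle)| shrinks roughly like 1/sqrt(N), so in 300 dimensions two random directions are typically within a few degrees of perpendicular.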
Second, the mathematician’s answer would be to compute an integral. I’ve seen this done, and at the end it’s not very satisfying. You get this formula for the N-volume of the N-sphere that involves the gamma function, and you aren’t any wiser than you were before. It’s just like Cedric Villani said in the round-table panel on mathematics on Tuesday. If the proof doesn’t add to your understanding, it wasn’t really worth doing.
(Well, I don’t completely agree. It does help you in the sense that you can just accept that this is true, and then it leads you to several other counterintuitive insights about the N-sphere and N-ball. But let’s move on.)
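(For the record, the formula in question is V_N = pi^(N/2) / Gamma(N/2 + 1). If you just want to watch the numbers rise and then collapse, without gaining any wisdom, a few lines of Python will do it:)

```python
import math

def unit_ball_volume(dim):
    """Volume of the unit ball in `dim` dimensions: pi^(dim/2) / Gamma(dim/2 + 1)."""
    return math.pi ** (dim / 2) / math.gamma(dim / 2 + 1)

for dim in range(1, 21):
    print(f"N={dim:2d}  volume of the unit N-ball = {unit_ball_volume(dim):.6f}")
```

The volumes peak at about 5.26 in five dimensions, dip below 1 at N = 13, and by N = 20 are down to roughly 0.026.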
It would be nice to have a truly intuitive explanation of why the N-dimensional ball “shrinks.” I’ll give a sketchy argument that is still not completely intuitive — it’s more algebraic than geometric. But perhaps one of you can do better?
[Non-math types can stop here, but if you’ve had first-year calculus you should be able to follow the rest of this post.]
According to Pappus’ theorem, the volume of a region formed by rotating an area A about an axis l is equal to the area of A times the distance traveled by the center of mass of A. This continues to be true in higher dimensions. So in the case of the N-ball, the volume should be the (N-1)-volume of half an (N-1)-ball, times 2 pi, times the altitude of the center of mass of half an (N-1)-ball.
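In symbols, if V_N stands for the volume of the unit N-ball and h for the altitude of the center of mass of the half (N-1)-ball, the claim is V_N = (V_(N-1)/2) times 2 pi times h, that is, V_N = pi times h times V_(N-1). So everything comes down to how large h is.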
Where is this center of mass? Hopcroft gave us a clue yesterday when he said that the mass of an N-ball is concentrated around the equator. What this means is that the center of mass of a half N-ball is very low.
How can we see this? Well, the altitude of the center of mass is given by averaging the N-th coordinate (x_N) over the half N-ball. I don’t want to do this integral, because that would not be intuitive. Instead I want to estimate it. By Schwarz’s inequality, the average of x_N is bounded above by the root-mean-square (RMS) average of x_N, obtained by averaging x_N^2 and taking the square root. (Even though the ancient Greeks didn’t have calculus, the arithmetic mean-RMS inequality was familiar to them.)
Now here’s the key thing. The average of x_N^2 over a half-ball is the same as the average of x_N^2 over a whole ball. And that is, by symmetry, the same as the average of x_1^2, x_2^2, etc. Thus the average of x_N^2 is just 1/N times the average of (x_1^2 + x_2^2 + … + x_N^2). This, in turn, is the average of r^2. And the average value of r^2, I believe, approaches 1 as N approaches infinity. Certainly it’s bounded above by 1, because r itself is bounded above by 1.
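If you want to watch this symmetry argument at work numerically, here is one more illustrative Python sketch (not part of the argument): it samples points uniformly from the unit ball, using the standard trick of taking a random direction and a radius drawn as u^(1/N), and compares the average of x_N^2 with the average of r^2 divided by N.

```python
import math
import random

def random_point_in_ball(dim):
    """Uniform random point in the unit ball: random direction, radius u^(1/dim)."""
    v = [random.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    radius = random.random() ** (1.0 / dim)
    return [radius * x / norm for x in v]

def ball_averages(dim, samples=50_000):
    """Average of x_N^2 and of r^2 over random points in the unit dim-ball."""
    sum_last_sq = 0.0
    sum_r_sq = 0.0
    for _ in range(samples):
        p = random_point_in_ball(dim)
        sum_last_sq += p[-1] ** 2
        sum_r_sq += sum(x * x for x in p)
    return sum_last_sq / samples, sum_r_sq / samples

for dim in (3, 10, 30):
    avg_xn_sq, avg_r_sq = ball_averages(dim)
    print(f"N={dim:2d}  avg of x_N^2 = {avg_xn_sq:.4f}  "
          f"avg of r^2 over N = {avg_r_sq / dim:.4f}  1/N = {1 / dim:.4f}")
```

The first two columns agree (that is the symmetry), and both stay below 1/N, which is all the argument needs.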
Putting it all together, the mean of x_N^2 is less than 1/N times 1, and so the RMS of x_N is less than 1/sqrt(N). Therefore the average of x_N over the half N-ball is less than 1/sqrt(N). Finally, by Pappus’ principle (applied to the half (N-1)-ball), the N-volume of the N-ball is less than pi/sqrt(N-1) times the (N-1)-volume of the (N-1)-ball. So once N is greater than about 10 or so, when pi/sqrt(N-1) drops below 1, the volumes must start decreasing, and in the limit the N-volume has to go to zero.
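Just as a sanity check, and not as part of the argument, you can compare this bound with the exact ratio of consecutive volumes from the gamma-function formula. Here is a short Python sketch (the gamma function makes a cameo, but only to check our work):

```python
import math

def unit_ball_volume(dim):
    """Volume of the unit ball in `dim` dimensions: pi^(dim/2) / Gamma(dim/2 + 1)."""
    return math.pi ** (dim / 2) / math.gamma(dim / 2 + 1)

for dim in (5, 10, 20, 50, 100):
    exact_ratio = unit_ball_volume(dim) / unit_ball_volume(dim - 1)
    bound = math.pi / math.sqrt(dim - 1)  # pi times the bound on the centroid altitude
    print(f"N={dim:3d}  V_N / V_(N-1) = {exact_ratio:.4f}  "
          f"bound pi/sqrt(N-1) = {bound:.4f}")
```

The bound is not tight (the true ratio behaves roughly like sqrt(2 pi / N) for large N), but it is more than enough to drive the volumes to zero.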
As I said, I don’t consider this a truly intuitive explanation, but I think it is convincing, it does add a little bit to my understanding, and the argument itself doesn’t involve any integrals or gamma functions.
This blog post originates from the official blog of the 1st Heidelberg Laureate Forum (HLF) which takes place in Heidelberg, Germany, September 22 – 27, 2013. 40 Abel, Fields, and Turing Laureates will gather to meet a select group of 200 young researchers.
Dana Mackenzie is a member of the HLF blog team. Please find all his postings on the HLF blog.
Comments (3)
Hmm, do we have the gamma-function formula somewhere? It seems like the geometric explanation finds the limit as N -> infinity, which we could probably also check with the gamma-function formula?
Oh, and as good as people have it in CS, I think that my own (ex-)field of fluid mechanics is now dead and has been replaced by busywork computer simulations (higher-resolution simulations, prettier pictures to make it more ‘accessible’ to laymen, no more math). But maybe the golden age for this type of work is over and one ought to move on to other, more fertile areas.
Hi, Dana. I like your center-of-gravity construction, which does indeed make a very counterintuitive fact a little less counterintuitive. I tried to puzzle this out for myself a couple of years ago (URL below) and didn’t do as well. But there is some context and historical background that might interest you.
http://www.americanscientist.org/libraries/documents/201110101628308738-2011-11CompSciHayes.pdf
A ball with infinite dimensions becomes a point!