Function Algebras — by Walter Rudin

(The following is reproduced from the book “The Way I Remember It” by Walter Rudin. The purpose is just to share the insights of a formidable analyst with the student community.)

When I arrived at MIT in 1950, Banach algebras were one of the hot topics. Gelfand’s 1941 paper “Normierte Ringe” had apparently only reached the USA in the late forties, and circulated on hard-to-read smudged purple ditto copies. As one application of the general theory presented there, it contained a stunningly short proof of Wiener’s lemma: the Fourier series of the reciprocal of a nowhere vanishing function with absolutely convergent Fourier series also converges absolutely. Not only was the proof extremely short, it was one of those that are hard to forget. All one needs to remember is that the absolutely convergent Fourier series form a Banach algebra, and that every multiplicative linear functional on this algebra is evaluation at some point of the unit circle.
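(For the reader who wants to see the mechanism, here is the argument in outline; this is the standard Gelfand proof, sketched by the present blogger, not a quotation from Rudin. Let W be the algebra of all functions f(\theta)=\sum_{n=-\infty}^{\infty}c_{n}e^{in\theta} with norm \|f\|=\sum_{n}|c_{n}|<\infty. Then (1) W is a commutative Banach algebra with unit under pointwise multiplication, since \|fg\| \leq \|f\| \|g\|; (2) every multiplicative linear functional on W is of the form f \mapsto f(\zeta) for some point \zeta of the unit circle; (3) if f is in W and vanishes nowhere, then no multiplicative functional annihilates f, so f lies in no maximal ideal of W, hence f is invertible in W — and the statement that 1/f lies in W is precisely the statement that 1/f has an absolutely convergent Fourier series.)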

This may have led some to believe that Banach algebras would now solve all our problems. Of course, they could not, but they did provide the right framework for many questions in analysis (as does most of functional analysis) and conversely, abstract questions about Banach algebras often gave rise to interesting problems in “hard analysis”. (Hard analysis is used here as Hardy and Littlewood used it. For example, you do hard analysis when, in order to estimate some integral, you break it into three pieces and apply different inequalities to each.)

One type of Banach algebra that was soon studied in detail was the class of so-called function algebras, also known as uniform algebras.

To see what these are, let C(X) be the set of all complex-valued continuous functions on a compact Hausdorff space X. A function algebra on X is a subset A of C(X) such that

(i) If f and g are in A, so are f+g, fg, and cf for every complex number c (this says that A is an algebra).

(ii) A contains the constant functions.

(iii) A separates points on X (that is, if p \neq q, both in X, then f(p) \neq f(q) for some f in A), and

(iv) A is closed, relative to the sup-norm topology of C(X), that is, the topology in which convergence means uniform convergence.

A is said to be self-adjoint if the complex conjugate of every f in A is also in A. The most familiar example of a non-self-adjoint function algebra is the disc algebra A(U), which consists of all f in C(\overline{U}) that are holomorphic in U (here, and later, U is the open unit disc in C, the complex plane, and \overline{U} is its closure); for instance, z lies in A(U) but its conjugate \overline{z} does not, since \overline{z} is not holomorphic in U. I already had an encounter with A(U), apropos of maximum modulus algebras.

One type of question that was asked over and over again was: suppose that a function algebra on X satisfies … and … ; is it C(X)? (In fact, 20 years later a whole book, entitled “Characterizations of C(X) among its Subalgebras”, was published by R. B. Burckel.) The Stone-Weierstrass Theorem gives the classical answer: yes, if A is self-adjoint.

There are problems even when X is a compact interval I on the real line. For instance, suppose A is a function algebra on I, and to every maximal ideal M of A corresponds a point p in I such that M is the set of all f in A having f(p)=0 (In other words, the only maximal ideals of A are the obvious ones). Is A=C(I)? This is still unknown, in 1995.

If f_{1}, f_{2}, \ldots f_{n} are in C(I), and the n-tuple (f_{1}, f_{2}, \ldots, f_{n}) separates points on I, let A(f_{1}, \ldots , f_{n}) be the smallest closed subalgebra of C(I) that contains f_{1}, f_{2}, \ldots, f_{n} and the constant function 1.

When f_{1} is 1-1 on I, it follows from an old theorem of Walsh (Math. Annalen 96, 1926, 437-450) that A(f_{1})=C(I).

Stone-Weierstrass implies that A(f_{1}, f_{2}, \ldots, f_{n})=C(I) if each f_{i} is real-valued.

In the other direction, John Wermer showed in Annals of Math. 62, 1955, 267-270, that A(f_{1}, f_{2}, f_{3}) can be a proper subset of C(I)!

Here is how he did this:

Let E be an arc in C, of positive two-dimensional measure, and let A_{E} be the algebra of all continuous functions on the Riemann sphere S (the one-point compactification of C) which are holomorphic in the complement of E. He showed that g(E)=g(S) for every g in A_{E}, that A_{E} contains a triple (g_{1}, g_{2}, g_{3}) that separates points on S, and that the restriction of A_{E} to E is closed in C(E). Pick a homeomorphism \phi of I onto E and define f_{i}=g_{i} \circ \phi. Then A(f_{1}, f_{2}, f_{3}) \neq C(I), for if h is in A(f_{1}, f_{2}, f_{3}) then h= g \circ \phi for some g in A_{E}, so that

h(I)=g(E)=g(S) is the closure of an open subset of C (except when h is constant). Since C(I) certainly contains non-constant functions whose range is not the closure of any open subset of C (any non-constant real-valued function, for instance), A(f_{1}, f_{2}, f_{3}) cannot be all of C(I).

In order to prove the same with two functions instead of three, I replaced John’s arc E with a Cantor set K, also of positive two-dimensional measure (I use the term “Cantor set” for any totally disconnected compact metric space with no isolated points; these are all homeomorphic to each other). A small extra twist, applied to John’s argument, with A_{K} in place of A_{E}, proved that A(f_{1}, f_{2}) can also be smaller than C(I).

I also used A_{K} to show that C(K) contains maximal closed point-separating subalgebras that are not maximal ideals, and that the same is true for C(X) whenever X contains a Cantor set. These ideas were pushed further by Hoffman and Singer in Acta Math. 103, 1960, 217-241.

In the same paper, I showed that A(f_{1}, f_{2}, \ldots, f_{n})=C(I) when n-1 of the n given functions are real-valued.

Since Wermer’s paper was being published in the Annals, and mine strengthened his theorem and contained other interesting (at least to me) results, I sent mine there too. It was rejected, almost by return mail, by an anonymous editor, for not being sufficiently interesting. I have had a few other papers rejected over the years, but for better reasons. This one was published in Proc. AMS 7, 1956, 825-830, and is one of six whose Russian translations were made into a book, “Some Questions in Approximation Theory”; the others were three by Bishop and two by Wermer. Good company.

Later, Gabriel Stolzenberg (Acta Math. 115, 1966, 185-198) and Herbert Alexander (Amer. J. Math., 93, 1971, 65-74) went much more deeply into these problems. One of the highlights in Alexander’s paper is:

A(f_{1}, f_{2}, \ldots f_{n})=C(I) if f_{1}, f_{2}, \ldots, f_{n-1} are of bounded variation.

Apropos of the Annals (published by Princeton University), here is a little Princeton anecdote. During a week that I spent there, in the mid-eighties, the Institute threw a cocktail party. (What I enjoyed best at that affair was being attacked by Armand Borel for having said, in print, that sheaves had vanished into the background.) Next morning I overheard the following conversation in Fine Hall:

Prof. A: That was a nice party yesterday, wasn’t it?

Prof. B: Yes, and wasn’t it nice that they invited the whole department.

Prof. A: Well, only the full professors.

Prof. B: Of course.

The above-mentioned facts about Cantor sets led me to look at the opposite extreme, the so-called scattered spaces. A compact Hausdorff space Q is said to be scattered if Q contains no perfect set; every non-empty compact set F in Q then contains a point that is not a limit point of F. The principal result proved in Proc. AMS 8, 1957, 39-42 is:

THEOREM: Every closed subalgebra of C(Q) is self-adjoint.

In fact, the scattered spaces are the only ones for which this is true, but I did not state this in that paper.

In 1956, I found a very explicit description of all closed ideals in the disc algebra A(U) (defined at the beginning of this chapter). The description involves inner functions. These are the bounded holomorphic functions in U whose radial limits have absolute value 1 at almost every point of the unit circle \mathcal{T}. They play a very important role in the study of holomorphic functions in U (see, for instance, Garnett’s book, Bounded Analytic Functions) and their analogues will be mentioned again, on Riemann surfaces, in polydiscs, and in balls in C^{n}.

Recall that a point \zeta on \mathcal{T} is called a singular point of a holomorphic function f in U if f has no analytic continuation to any neighbourhood of \zeta. The ideals in question are described in the following:

THEOREM: Let E be a compact subset of \mathcal{T}, of Lebesgue measure 0, let u be an inner function all of whose singular points lie in E, and let J(E,u) be the set of all f in A(U) such that

(i) the quotient f/u is bounded in U, and

(ii) f(\zeta)=0 at every \zeta in E.

Then, J(E,u) is a closed ideal of A(U), and every closed ideal of A(U) (\neq \{0\}) is obtained in this way.

One of several corollaries is that every closed ideal of A(U) is principal, that is, is generated by a single function.
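(A concrete illustration, added by the blogger: take E=\{1\}, a single point and hence of Lebesgue measure 0, and take u(z)=\exp{(-\frac{1+z}{1-z})}, the singular inner function whose only singular point is \zeta=1. Then J(E,u) is the closed ideal of all f in A(U) with f(1)=0 for which f/u is bounded in U. The simplest case of all is u \equiv 1, where J(E,1) is just the ideal of all f in A(U) that vanish on the compact null set E.)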

I presented this at the December 1956 AMS meeting in Rochester, and was immediately told by several people that Beurling had proved the same thing, in a course he had given at Harvard, but had not published it. I was also told that Beurling might be quite upset at this, and having Beurling upset at you was not a good thing. Having used his famous paper about the shift operator on Hilbert space as my guide, I was not surprised that he too had proved this, but I saw no reason to withdraw my already submitted paper. It appeared in Canadian J. Math. 9, 1957, 426-434. The result is now known as the Beurling-Rudin theorem. I met him several times later, and he never made a fuss over this.

In the preceding year Lennart Carleson and I, neither of us knowing what the other was doing, proved what is now known as the Rudin-Carleson interpolation theorem. His paper is in Math. Z. 66, 1957, 447-451, mine in Proc. AMS 7, 1956, 808-811.

THEOREM. If E is a compact subset of \mathcal{T}, of Lebesgue measure 0, then every f in C(E) extends to a function F in A(U).

(It is easy to see that this fails if m(E)>0. To say that F is an extension of f means simply that F(\zeta)=f(\zeta) at every \zeta in E.)

Our proofs have some ingredients in common, but they are different, and we each proved more than is stated above. Surprisingly, Carleson, the master of classical hard analysis, used a soft approach, namely duality in Banach spaces, and concluded that F could be so chosen that ||F||_{U} \leq 2||f||_{E}. (The norms are sup-norms over the sets appearing as subscripts.) In the same paper he used his Banach space argument to prove another interpolation theorem, involving Fourier-Stieltjes transforms.

On the other hand, I did not have functional analysis in mind at all; I did not think of the norms or of Banach spaces. I proved, by a bare-hands construction combined with the Riemann mapping theorem, that if \Omega is a closed Jordan domain containing f(E) then F can be chosen so that F(\overline{U}) also lies in \Omega. If \Omega is a disc centered at 0, this gives ||F||_{U}=||f||_{E}, so F is a norm-preserving extension.

What our proofs had in common is that we both used part of the construction that was used in the original proof of the F. and M. Riesz theorem (which says that if a measure \mu on \mathcal{T} gives \int fd\mu=0 for every f in A(U) then \mu is absolutely continuous with respect to Lebesgue measure). Carleson showed, in fact, that F. and M. Riesz can be derived quite easily from the interpolation theorem. I tried to prove the implication in the other direction. But that had to wait for Errett Bishop. In Proc. AMS 13, 1962, 140-143, he established this implication in a very general setting which had nothing to do with holomorphic functions or even with algebras, and which, combined with a refinement due to Glicksberg (Trans. AMS 105, 1962, 415-435) makes the interpolation theorem even more precise:

THEOREM: One can choose F in A(U) so that F(\zeta)=f(\zeta) at every \zeta in E, and |F(z)|<||f||_{E} at every z in \overline{U} \setminus E.

This is usually called peak-interpolation.

Several variable analogues of this and related results may be found in Chap. 6 of my Function Theory in Polydiscs and in Chap. 10 of my Function Theory in the Unit Ball of C^{n}.

The last item in this chapter concerns Riemann surfaces. Some definitions are needed.

A finite Riemann surface is a connected open proper subset R of some compact Riemann surface X, such that the boundary \partial R of R in X is also the boundary of its closure \overline{R} and is the union of finitely many disjoint simple closed analytic curves \Gamma_{1}, \ldots, \Gamma_{k}. Shrinking each \Gamma_{i} to a point gives a compact orientable manifold whose genus g is defined to be the genus of R. The numbers g and k determine the topology of R, but not, of course, its conformal structure.

A(R) denotes the algebra of all continuous functions on \overline{R} that are holomorphic in R. If f is in A(R) and |f(p)|=1 at every point p in \partial R then, just as in U, f is called inner. A set S \subset A(R) is unramified if every point of \overline{R} has a neighbourhood in which at least one member of S is one-to-one.

I became interested in these algebras when Lee Stout (Math. Z., 92, 1966, 366-379; also 95, 1967, 403-404) showed that every A(R) contains an unramified triple of inner functions that separates points on \overline{R}. He deduced from the resulting embedding of R in U^{3} that A(R) is generated by these 3 functions. Whether every A(R) is generated by some pair of its members is still unknown, but the main result of my paper in Trans. AMS 150, 1969, 423-434 shows that pairs of inner functions won’t always do:

THEOREM: If A(R) contains a point-separating unramified pair f, g of inner functions, then there exist relatively prime integers s and t such that f is s-to-1 and g is t-to-1 on every \Gamma_{i}, and

(*) (ks-1)(kt-1)=2g+k-1

For example, when g=2 and k=4, then (*) holds for no integers s and t. When g=23 and k=4, then s=t=2 is the only pair that satisfies (*), but that pair is not relatively prime. (The arithmetic behind these examples is spelled out after the next statement.) Even when R=U the theorem gives some information. In that case, g=0, k=1, so (*) becomes (s-1)(t-1)=0, which means:

If a pair of finite Blaschke products separates points on \overline{U} and their derivatives have no common zero in U, then at least one of them is one-to-one (that is, a Mobius transformation).
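(To spell out the arithmetic behind the examples given above — added here for the reader: when g=2 and k=4, the right side of (*) is 2g+k-1=7, and (4s-1)(4t-1)=7 is impossible for positive integers s, t, since both factors are at least 3. When g=23 and k=4, the right side is 49=7 \times 7; since 4s-1=1 and 4s-1=49 are both impossible for an integer s, the only solution is 4s-1=4t-1=7, that is, s=t=2. When g=0 and k=1, (*) reads (s-1)(t-1)=0, forcing s=1 or t=1.)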

There are two cases in which (*) is not only necessary but also sufficient. This happens when g=0 and when g=k=1.

But there are examples in which the topological condition (*) is satisfied even though the conformal structure of R prevents the existence of a separating unramified pair of inner functions.

This paper is quite different from anything else that I have ever done. As far as I know, no one has ever referred to it, but I had fun working on it.

*************************************************************************************************

More blogs from Rudin’s autobiography later, till then,

Nalin Pithwa

Interchanging Limit Processes — by Walter Rudin

As I mentioned earlier, my thesis (Trans. AMS 68, 1950, 278-363) deals with uniqueness questions for series of spherical harmonics, also known as Laplace series. In the more familiar setting of trigonometric series, the first theorem of the kind that I was looking for was proved by Georg Cantor in 1870, based on earlier work of Riemann (1854, published in 1867). Using the notations

A_{n}(x)=a_{n} \cos{nx}+b_{n}\sin{nx},

s_{p}(x)=A_{0}+A_{1}(x)+ \ldots + A_{p}(x), where a_{n} and b_{n} are real numbers. Cantor’s theorem says:

If \lim_{p \rightarrow \infty}s_{p}(x)=0 at every real x, then a_{n}=b_{n}=0 for every n.

Therefore, two distinct trigonometric series cannot converge to the same sum. This is what is meant by uniqueness.

My aim was to prove this for spherical harmonics and (as had been done for trigonometric series) to whittle away at the hypothesis. Instead of assuming convergence at every point of the sphere, what sort of summability will do? Does one really need convergence (or summability) at every point? If not, what sort of sets can be omitted? Must anything else be assumed at these omitted points? What sort of side conditions, if any, are relevant?

I came up with reasonable answers to these questions, but basically the whole point seemed to be the justification of the interchange of some limit processes. This left me with an uneasy feeling that there ought to be more to Analysis than that. I wanted to do something with more “structure”. I could not have explained just what I meant by this, but I found it later when I became aware of the close relation between Fourier analysis and group theory, and also in an occasional encounter with number theory and with geometric aspects of several complex variables.

Why was it all an exercise in interchange of limits? Because the “obvious” proof of Cantor’s theorem goes like this: for p > n,

\pi a_{n}= \int_{-\pi}^{\pi}s_{p}(x)\cos{nx}\,dx = \lim_{p \rightarrow \infty}\int_{-\pi}^{\pi}s_{p}(x)\cos{nx}\,dx, which in turn equals

\int_{-\pi}^{\pi}(\lim_{p \rightarrow \infty}s_{p}(x))\cos{nx}\,dx= \int_{-\pi}^{\pi}0 \cdot \cos{nx}\,dx=0, and similarly for b_{n}. Note that \lim \int = \int \lim was used.

In Riemann’s above-mentioned paper, he derives the conclusion of Cantor’s theorem under an additional hypothesis, namely, a_{n} \rightarrow 0 and b_{n} \rightarrow 0 as n \rightarrow \infty. He associates to \sum {A_{n}(x)} the twice integrated series

F(x)=-\sum_{1}^{\infty}n^{-2}A_{n}(x)

and then finds it necessary to prove, in some detail, that this series converges and that its sum F is continuous! (Weierstrass had not yet invented uniform convergence.) This is astonishingly different from most of his other publications, such as his paper on hypergeometric functions in which mind-boggling relations and transformations are merely stated, with only a few hints, or his painfully brief paper on the zeta-function.

In Crelle’s J. 73, 1870, 130-138, Cantor showed that Riemann’s additional hypothesis was redundant, by proving that

(*) \lim_{n \rightarrow \infty}A_{n}(x)=0 for all x implies \lim_{n \rightarrow \infty}a_{n}= \lim_{n \rightarrow \infty}b_{n}=0.

He included the statement: This cannot be proved, as is commonly believed, by term-by-term integration.

Apparently, it took a while before this was generally understood. Ten years later, in Math. Annalen 16, 1880, 113-114, he patiently explains the difference between pointwise convergence and uniform convergence, in order to refute a “simpler proof” published by Appell. But then, referring to his second (still quite complicated) proof, the one in Math. Annalen 4, 1871, 139-143, he sticks his neck out and writes: “In my opinion, no further simplification can be achieved, given the nature of this subject.”

That was a bit reckless. 25 years later, Lebesgue’s dominated convergence theorem became part of every analyst’s tool chest, and since then (*) can be proved in a few lines:

Rewrite A_{n}(x) in the form A_{n}(x)=c_{n}\cos {(nx+\alpha_{n})}, where c_{n}^{2}=a_{n}^{2}+b_{n}^{2}. Put

\gamma_{n}=\min \{1, |c_{n}| \}, B_{n}(x)=\gamma_{n}\cos{(nx+\alpha_{n})}.

Then, B_{n}^{2}(x) \leq 1, B_{n}^{2}(x) \rightarrow 0 at every x, so that the D.C.Th., combined with

\int_{-\pi}^{\pi}B_{n}^{2}(x)dx=\pi \gamma_{n}^{2} shows that \gamma_{n} \rightarrow 0. Therefore, |c_{n}|=\gamma_{n} for all large n, and c_{n} \rightarrow 0. Done.

The point of all this is that my attitude was probably wrong. Interchanging limit processes occupied some of the best mathematicians for most of the 19th century. Thomas Hawkins’ book “Lebesgue’s Theory” gives an excellent description of the difficulties that they had to overcome. Perhaps, we should not be too surprised that even a hundred years later many students are baffled by uniform convergence, uniform continuity etc., and that some never get it at all.

In Trans. AMS 70, 1951, 387-403, I applied the techniques of my thesis to another problem of this type, with Hermite functions in place of spherical harmonics.

(Note: The above article has been picked from Walter Rudin’s book, “The Way I Remember It” — hope it helps advanced graduate students in Analysis.)

More later,

Nalin Pithwa

Analysis: Chapter 1: part 11: algebraic operations with real numbers: continued

(iii) Multiplication. 

When we come to multiplication, it is most convenient to confine ourselves to positive numbers (among which we may include zero) in the first instance, and to go back for a moment to the sections of positive rational numbers only which we considered in articles 4-7. We may then follow practically the same road as in the case of addition, taking (c) to be (ab) and (C) to be (AB). The argument is the same, except when we are proving that all rational numbers with at most one exception must belong to (c) or (C). This depends, as in the case of addition, on showing that we can choose a, A, b, and B so that C-c is as small as we please. Here we use the identity

C-c=AB-ab=(A-a)B+a(B-b).
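(To indicate how this identity is used — a sketch in Hardy’s notation, filled in by the blogger: fix one member A_{0} of (A) and one member B_{0} of (B), once and for all, and consider only those A with A \leq A_{0} and those B with B \leq B_{0}. Since a < A \leq A_{0} and B \leq B_{0}, the identity gives C-c=(A-a)B+a(B-b) \leq (A-a)B_{0}+A_{0}(B-b). Given any positive rational \epsilon, we may choose a, A with A-a<\epsilon/(2B_{0}) and b, B with B-b<\epsilon/(2A_{0}), and then C-c<\epsilon, which is exactly what the argument requires.)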

Finally, we include negative numbers within the scope of our definition by agreeing that, if \alpha and \beta are positive, then

(-\alpha)\beta=-\alpha\beta, \alpha(-\beta)=-\alpha\beta, (-\alpha)(-\beta)=\alpha\beta.

(iv) Division. 

In order to define division, we begin by defining the reciprocal \frac{1}{\alpha} of a number \alpha (other than zero). Confining ourselves in the first instance to positive numbers and sections of positive rational numbers, we define the reciprocal of a positive number \alpha by means of the lower class (1/A) and the upper class (1/a). We then define the reciprocal of a negative number -\alpha by the equation 1/(-\alpha)=-(1/\alpha). Finally, we define \frac{\alpha}{\beta} by the equation

\frac{\alpha}{\beta}=\alpha \times (1/\beta).

We are then in a position to apply to all real numbers, rational or irrational, the whole of the ideas and methods of elementary algebra. Naturally, we do not propose to carry out this task in detail. It will be more profitable and more interesting to turn our attention to some special, but particularly important, classes of irrational numbers.

More later,

Nalin Pithwa

Analysis: Chapter 1: part 10: algebraic operations with real numbers

Algebraic operations with real numbers.

We now proceed to define the meaning of the elementary algebraic operations such as addition, as applied to real numbers in general.

(i) Addition. In order to define the sum of two numbers \alpha and \beta, we consider the following two classes: (i) the class (c) formed by all sums c=a+b, (ii) the class (C) formed by all sums C=A+B (here a and A denote typical members of the lower and upper classes defining \alpha, and b and B those defining \beta). Clearly, c < C in all cases.

Again, there cannot be more than one rational number which does not belong either to (c) or to (C). For suppose there were two, say r and s, and let s be the greater. Then, both r and s must be greater than every c and less than every C; and so C-c cannot be less than s-r. But,

C-c=(A-a)+(B-b);

and we can choose a, b, A, B so that both A-a and B-b are as small as we like; and this plainly contradicts our hypothesis.

If every rational number belongs to (c) or to (C), the classes (c), (C) form a section of the rational numbers, that is to say, a number \gamma. If there is one which does not, we add it to (C). We have now a section or real number \gamma, which must clearly be rational, since it corresponds to the least member of (C). In any case we call \gamma the sum of \alpha and \beta, and write

\gamma=\alpha + \beta.

If both \alpha and \beta are rational, they are the least members of the upper classes (A) and (B). In this case it is clear that \alpha + \beta is the least member of (C), so that our definition agrees with our previous ideas of addition.

(ii) Subtraction.

We define \alpha - \beta by the equation \alpha-\beta=\alpha +(-\beta).

The idea of subtraction accordingly presents no fresh difficulties.

More later,

Nalin Pithwa

Chapter I: Real Variables: Rational Numbers: Examples I

Examples I.

1) If r and s are rational numbers, then r+s, r-s, rs, and r/s are rational numbers, unless in the last case s=0 (when r/s is of course meaningless).

Proof:

Part i): Given that r and s are rational numbers, let r=a/b and s=c/d, where a, b, c, d are integers, b and d are not zero, and each fraction is in its lowest terms (a and b have no common factor, and c and d have no common factor); we may also take b and d to be positive.

Then, r+s=a/b+c/d=(ad+bc)/(bd), which is rational, since the numerator ad+bc and the denominator bd are again integers (closure of the integers under addition and multiplication) and bd \neq 0.

Part ii) Similar to part (i): r-s=(ad-bc)/(bd).

Part iii) rs=(ac)/(bd), which is rational by closure of the integers under multiplication.

Part iv) If s \neq 0, then c \neq 0 and r/s=r \times (d/c)=(ad)/(bc), which is rational by the same closure argument.

2) If \lambda , m, n are positive rational numbers, and m > n, then prove that \lambda(m^{2}-n^{2}), 2\lambda mn, \lambda(m^{2}+n^{2}) are positive rational numbers. Hence, show how to determine any number of right-angled triangles the lengths of all of whose sides are rational.

Proof:

This follows from problem 1, where we proved that sums, differences and products of rational numbers are rational; since \lambda>0 and m>n>0, the three numbers in question are also positive.

Also, Pythagoras’ theorem holds in the following manner:

\lambda^{2}(m^{2}-n^{2})^{2}+(2\lambda m n)^{2}=\lambda^{2}(m^{2}+n^{2})^{2}
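(A worked instance, added here: take \lambda=1, m=2, n=1. Then \lambda(m^{2}-n^{2})=3, 2\lambda mn=4, and \lambda(m^{2}+n^{2})=5, and indeed 3^{2}+4^{2}=5^{2}, giving the familiar 3-4-5 right triangle. Letting \lambda, m, n run through positive rationals with m>n produces as many right-angled triangles with rational sides as we please; for example m=3, n=2, \lambda=1 gives sides 5, 12, 13.)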

3) Any terminated decimal represents a rational number whose denominator contains no factors other than 2 or 5. Conversely, any such rational number can be expressed, and in one way only, as a terminated decimal.

Proof Part 1:

A terminated decimal is of the form p/10^{n} for some integers p and n \geq 0; since 10^{n}=2^{n}5^{n}, cancelling any common factors leaves a fraction whose denominator contains no prime factors other than 2 and 5.

Proof Part 2:

Conversely, if a rational number has denominator 2^{a}5^{b} (in lowest terms), multiplying numerator and denominator by a suitable power of 2 or of 5 makes the denominator 10^{N}, with N=\max(a,b), and so exhibits the number as a terminated decimal. The representation is unique, because two distinct terminated decimals differ by a nonzero rational number and therefore cannot represent the same number.
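(A worked instance, added here: 7/40=7/(2^{3}\cdot 5)=(7 \cdot 5^{2})/(2^{3} \cdot 5^{3})=175/1000=0.175, a terminated decimal; on the other hand, 1/3 cannot be written as a terminated decimal, since 3 divides no power of 10.)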

4) The positive rational numbers may be arranged in the form of a simple series as follows:

1/1, 2/1, 1/2, 3/1, 2/2, 1/3, 4/1, 3/2, 2/3, 1/4, \ldots

Show that p/q is the [\frac{1}{2}(p+q-1)(p+q-2)+q]th term of the series.

Proof:

Suggested idea: try mathematical induction, or group the terms according to the value of p+q. (A numerical check of the formula is given below.)
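(A quick check of the formula, added here: take p=2, q=3. Then \frac{1}{2}(p+q-1)(p+q-2)+q=\frac{1}{2}\cdot 4 \cdot 3+3=9, and 2/3 is indeed the 9th term of the series displayed above. The reason is that the terms with p+q=s form a block of s-1 consecutive terms beginning at position \frac{1}{2}(s-1)(s-2)+1, and within that block p/q occupies the qth place.)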

More later,

Nalin Pithwa

Abel Laureates 2015: John Nash, Jr. and Louis Nirenberg

The leading lights at Courant were very much at the forefront of rapid progress, stimulated by World War II, in certain kinds of differential equations that serve as mathematical models for an immense variety of physical phenomena involving some sort of change. By the mid-fifties, as Fortune noted, mathematicians knew relatively simple routines for solving ordinary differential equations using computers. But there were no straightforward methods for solving most nonlinear partial differential equations that crop up when large or abrupt changes occur — such as equations that describe the aerodynamic shock waves produced when a jet accelerates past the speed of sound. In his 1958 obituary of von Neumann, who did important work in this field in the thirties, Stanislaw Ulam called such systems of equations “baffling analytically,” saying that they “defy even qualitative insights by present methods.” As Nash was to write that same year, “The open problems in the area of non-linear partial differential equations are very relevant to applied mathematics and science as a whole, perhaps, more so than the open problems in any other area of mathematics, and this field seems poised for rapid development. It seems clear however that fresh methods must be employed.”

Nash, partly because of his contact with Norbert Wiener and perhaps his earlier interaction with Weinstein at Carnegie, was already interested in the problem of turbulence. Turbulence refers to the flow of gas or liquid over any uneven surface, like water rushing into a bay, heat or electrical charges travelling through metal, oil escaping from an underground pool, or clouds skimming over an air mass. It should be possible to model such motion mathematically. But, it turns out to be extremely difficult. As Nash wrote:

Little is known about the existence, uniqueness and smoothness of solutions of the general equations of flow for a viscous, compressible, and heat conducting fluid. These are a non-linear parabolic system of equations. An interest in these questions led us to undertake this work. It became clear that nothing could be done about the continuum description of general fluid flow without the ability to handle non-linear parabolic equations and this in turn required an a priori estimate of continuity.

It was Louis Nirenberg, a short, myopic, and sweet-natured young protege of Courant’s, who had handed Nash a major unsolved problem in the then fairly new field of nonlinear theory. Nirenberg, also in his twenties then, and already a formidable analyst, found Nash a bit strange. “He’d often seemed to have an internal smile, as if he was thinking of a private joke, as if he was laughing at a private joke that he had never told anyone about.” But he was extremely impressed with the technique Nash had invented for solving his embedding theorem and sensed that Nash might be the man to crack an extremely difficult outstanding problem that had been open since the late 1930s:

He (Nirenberg) had recalled:

I worked in partial differential equations. I also worked in geometry. The problem had to do with certain kinds of inequalities associated with elliptic partial differential equations. The problem had been around in the field for some time and a number of people had worked on it. Someone had obtained such estimates much earlier in the 1930s in two dimensions. But the problem was open for almost thirty years in higher dimensions.

Nash had begun working on the problem almost as soon as Nirenberg suggested it, although he knocked on doors until he had been satisfied that the problem was as important as Nirenberg had claimed. Peter Lax, who was one of those he had consulted, had commented some time back: “In physics, everybody knows the most important problems. They are well-defined. Not so in mathematics. People are more introspective. For Nash, though, it had to be important in the opinion of others.”

Nash had started visiting Nirenberg’s office to discuss his progress. But it was weeks before Nirenberg got any real sense that Nash was getting anywhere. “We would meet often. Nash would say, ‘I seem to need such and such an inequality. I think it’s true that…’ Very often, Nash’s speculations were far off the mark. He was sort of groping. He gave that impression. I wasn’t very confident he was going to get through.”

Nirenberg had then sent Nash around to talk to Lars Hormander, a tall, steely Swede who was then already one of the top scholars in the field. Precise, careful, and immensely knowledgeable, Hormander knew Nash by reputation, but had reacted even more skeptically than Nirenberg. “Nash had learned from Nirenberg the importance of extending the Holder estimates known for second-order elliptic equations with two variables and irregular coefficients to higher dimensions,” Hormander had recalled in 1997. “He came to see me several times. ‘What did I think of such and such an inequality?’ At first, his conjectures were obviously false. They were easy to disprove by known facts on constant coefficient operators. He was rather inexperienced in these matters. Nash did things from scratch without using standard techniques. He was always trying to extract problems…(from conversations with others). He had not the patience to study them.”

Nash had continued to grope, but with more success. “After a couple more times,” said Hormander, “he would come up with things that were not so obviously wrong.”

By the spring, Nash was able to obtain basic existence, uniqueness, and continuity theorems once again using novel methods of his own invention. He had a theory that difficult problems couldn’t be attacked frontally. He had approached the problem in an ingeniously roundabout manner, first transforming the nonlinear equations into linear equations and then attacking these by nonlinear means. “It was a stroke of genius,” said Peter Lax, who had followed the progress of Nash’s research closely. “I have never seen that done. I always kept it in my mind, thinking maybe it will work in another circumstance.”

(Note: Peter Lax is an earlier Abel Laureate).

Nash’s new result had gotten far more immediate attention than his embedding theorem. It had convinced Nirenberg, too, that Nash was a genius. Hormander’s mentor at the University of Lund, Lars Garding, a world-class specialist in partial differential equations, had immediately declared, “You have to be a genius to do that.”

*************************************************************************************

More later,

Nalin Pithwa