Function Algebras — by Walter Rudin

(The following is reproduced from the book “The Way I Remember It” by Walter Rudin. The purpose is just to share the insights of a formidable analyst with the student community.)

When I arrived at MIT in 1950, Banach algebras were one of the hot topics. Gelfand’s 1941 paper “Normierte Ringe” had apparently only reached the USA in the late forties, and circulated on hard-to-read smudged purple ditto copies. As one application of the general theory presented there, it contained a stunningly short proof of Wiener’s lemma: the Fourier series of the reciprocal of a nowhere vanishing function with absolutely convergent Fourier series also converges absolutely. Not only was the proof extremely short, it was one of those that are hard to forget. All one needs to remember is that the absolutely convergent Fourier series form a Banach algebra, and that every multiplicative linear functional on this algebra is evaluation at some point of the unit circle.
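(A quick aside for the student, sketching the Gelfand argument in rough outline; this gloss is mine, not Rudin’s. Let W denote the algebra of all functions f(x)=\sum_{n=-\infty}^{\infty}c_{n}e^{inx} with norm ||f||_{W}=\sum_{n}|c_{n}|<\infty; under pointwise multiplication W is a commutative Banach algebra with unit, and every multiplicative linear functional on W is of the form f \mapsto f(x_{0}) for some point x_{0} of the unit circle. If f vanishes nowhere on the circle, no such functional annihilates f, so f lies in no maximal ideal of W; by Gelfand’s theory f is then invertible in W, which says precisely that 1/f again has an absolutely convergent Fourier series.)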

This may have led some to believe that Banach algebras would now solve all our problems. Of course, they could not, but they did provide the right framework for many questions in analysis (as does most of functional analysis) and conversely, abstract questions about Banach algebras often gave rise to interesting problems in “hard analysis”. (Hard analysis is used here as Hardy and Littlewood used it. For example, you do hard analysis when, in order to estimate some integral, you break it into three pieces and apply different inequalities to each.)

One class of Banach algebras that was soon studied in detail was the class of so-called function algebras, also known as uniform algebras.

To see what these are, let C(X) be the set of all complex-valued continuous functions on a compact Hausdorff space X. A function algebra on X is a subset A of C(X) such that

(i) If f and g are in A, so are f+g, fg, and cf for every complex number c (this says that A is an algebra).

(ii) A contains the constant functions.

(iii) A separates points on X (that is, if p \neq q, both in X, then f(p) \neq f(q) for some f in A), and

(iv) A is closed, relative to the sup-norm topology of C(X), that is, the topology in which convergence means uniform convergence.

A is said to be self-adjoint if the complex conjugate of every f in A is also in A. The most familiar example of a non-self-adjoint function algebra is the disc algebra A(U), which consists of all f in C(\overline{U}) that are holomorphic in U (here, and later, U is the open unit disc in C, the complex plane, and \overline{U} is its closure). I already had an encounter with A(U), à propos maximum modulus algebras.

One type of question that was asked over and over again was: suppose that a function algebra on X satisfies … and …; is it then C(X)? (In fact, 20 years later a whole book, entitled “Characterizations of C(X) among its Subalgebras”, was published by R. B. Burckel.) The Stone-Weierstrass theorem gives the classical answer: yes, if A is self-adjoint.

There are problems even when X is a compact interval I on the real line. For instance, suppose A is a function algebra on I, and to every maximal ideal M of A corresponds a point p in I such that M is the set of all f in A having f(p)=0 (In other words, the only maximal ideals of A are the obvious ones). Is A=C(I)? This is still unknown, in 1995.

If f_{1}, f_{2}, \ldots, f_{n} are in C(I), and the n-tuple (f_{1}, f_{2}, \ldots, f_{n}) separates points on I, let A(f_{1}, \ldots , f_{n}) be the smallest closed subalgebra of C(I) that contains f_{1}, f_{2}, \ldots, f_{n} and the constant 1.

When f_{1} is 1-1 on I, it follows from an old theorem of Walsh (Math. Annalen 96, 1926, 437-450) that A(f_{1})=C(I).

Stone-Weierstrass implies that A(f_{1}, f_{2}, \ldots, f_{n})=C(I) if each f_{i} is real-valued.

In the other direction, John Wermer showed in Annals of Math. 62, 1955, 267-270, that A(f_{1}, f_{2}, f_{3}) can be a proper subset of C(I)!

Here is how he did this:

Let E be an arc in C, of positive two-dimensional measure, and let A_{E} be the algebra of all continuous functions on the Riemann sphere S (the one-point compactification of C) which are holomorphic in the complement of E. He showed that g(E)=g(S) for every g in A_{E}, that A_{E} contains a triple (g_{1}, g_{2}, g_{3}) that separates points on S, and that the restriction of A_{E} to E is closed in C(E). Pick a homeomorphism \phi of I onto E and define f_{i}=g_{i} \circ \phi. Then A(f_{1}, f_{2}, f_{3}) \neq C(I), for if h is in A(f_{1}, f_{2}, f_{3}) then h= g \circ \phi for some g in A_{E}, so that

h(I)=g(E)=g(S) is the closure of an open subset of C (except when h is constant).

In order to prove the same with two functions instead of three, I replaced John’s arc E with a Cantor set K, also of positive two-dimensional measure. (I use the term “Cantor set” for any totally disconnected compact metric space with no isolated points; these are all homeomorphic to each other.) A small extra twist, applied to John’s argument, with A_{K} in place of A_{E}, proved that A(f_{1}, f_{2}) can also be smaller than C(I).

I also used A_{K} to show that C(K) contains maximal closed point-separating subalgebras that are not maximal ideals, and that the same is true for C(X) whenever X contains a Cantor set. These ideas were pushed further by Hoffman and Singer in Acta Math. 103, 1960, 217-241.

In the same paper, I showed that A(f_{1}, f_{2}, \ldots, f_{n})=C(I) when n-1 of the n given functions are real-valued.

Since Wermer’s paper was being published in the Annals, and mine strengthened his theorem and contained other interesting (at least to me) results, I sent mine there too. It was rejected, almost by return mail, by an anonymous editor, for not being sufficiently interesting. I have had a few other papers rejected over the years, but for better reasons. This one was published in Proc. AMS 7, 1956, 825-830, and is one of six whose Russian translations were made into a book, “Some Questions in Approximation Theory”; the others were three by Bishop and two by Wermer. Good company.

Later, Gabriel Stolzenberg (Acta Math. 115, 1966, 185-198) and Herbert Alexander (Amer. J. Math., 93, 1971, 65-74) went much more deeply into these problems. One of the highlights in Alexander’s paper is:

A(f_{1}, f_{2}, \ldots f_{n})=C(I) if f_{1}, f_{2}, \ldots, f_{n-1} are of bounded variation.

À propos the Annals (published by Princeton University), here is a little Princeton anecdote. During a week that I spent there, in the mid-eighties, the Institute threw a cocktail party. (What I enjoyed best at that affair was being attacked by Armand Borel for having said, in print, that sheaves had vanished into the background.) Next morning I overheard the following conversation in Fine Hall:

Prof. A: That was a nice party yesterday, wasn’t it?

Prof. B: Yes, and wasn’t it nice that they invited the whole department.

Prof. A: Well, only the full professors.

Prof. B: Of course.

The above-mentioned facts about Cantor sets led me to look at the opposite extreme, the so-called scattered spaces. A compact Hausdorff space Q is said to be scattered if Q contains no perfect set; every non-empty compact set F in Q thus contains a point that is not a limit point of F. The principal result proved in Proc. AMS 8, 1957, 39-42 is:

THEOREM: Every closed subalgebra of C(Q) is self-adjoint.

In fact, the scattered spaces are the only ones for which this is true, but I did not state this in that paper.

In 1956, I found a very explicit description of all closed ideals in the disc algebra A(U) (defined at the beginning of this chapter). The description involves inner functions. These are the bounded holomorphic functions in U whose radial limits have absolute value 1 at almost every point of the unit circle \mathcal{T}. They play a very important role in the study of holomorphic functions in U (see, for instance, Garnett’s book, Bounded Analytic Functions) and their analogues will be mentioned again, on Riemann surfaces, in polydiscs, and in balls in C^{n}.

Recall that a point \zeta on \mathcal{T} is called a singular point of a holomorphic function f in U if f has no analytic continuation to any neighbourhood of \zeta. The ideals in question are described in the following:

THEOREM: Let E be a compact subset of \mathcal{T}, of Lebesgue measure 0, let u be an inner function all of whose singular points lie in E, and let J(E,u) be the set of all f in A(U) such that

(i) the quotient f/u is bounded in U, and

(ii) f(\zeta)=0 at every \zeta in E.

Then, J(E,u) is a closed ideal of A(U), and every closed ideal of A(U) (\neq \{0\}) is obtained in this way.

One of several corollaries is that every closed ideal of A(U) is principal, that is, is generated by a single function.
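(Two quick illustrations of the theorem, added here for the student: taking E=\{\zeta_{0}\} for a single point \zeta_{0} on \mathcal{T} and u \equiv 1 (which is trivially inner, with no singular points), J(E,u) is just the maximal ideal of all f in A(U) with f(\zeta_{0})=0. Taking E empty and u(z)=z^{n}, a finite Blaschke product, J(E,u) is the ideal of all f in A(U) that vanish at the origin to order at least n.)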

I presented this at the December 1956 AMS meeting in Rochester, and was immediately told by several people that Beurling had proved the same thing, in a course he had given at Harvard, but had not published it. I was also told that Beurling might be quite upset at this, and having Beurling upset at you was not a good thing. Having used his famous paper about the shift operator on a Hilbert space as my guide, I was not surprised that he too had proved this, but I saw no reason to withdraw my already submitted paper. It appeared in Canadian J. Math. 9, 1957, 426-434. The result is now known as the Beurling-Rudin theorem. I met him several times later, and he never made a fuss over this.

In the preceding year, Lennart Carleson and I, neither of us knowing what the other was doing, proved what is now known as the Rudin-Carleson interpolation theorem. His paper is in Math. Z. 66, 1957, 447-451, mine in Proc. AMS 7, 1956, 808-811.

THEOREM. If E is a compact subset of \mathcal{T}, of Lebesgue measure 0, then every f in C(E) extends to a function F in A(U).

(It is easy to see that this fails if m(E)>0. To say that F is an extension of f means simply that F(\zeta)=f(\zeta) at every \zeta in E.)

Our proofs have some ingredients in common, but they are different, and we each proved more than is stated above. Surprisingly, Carleson, the master of classical hard analysis, used a soft approach, namely duality in Banach spaces, and concluded that F could be so chosen that ||F||_{U} \leq 2||f||_{E}. (The norms are sup-norms over the sets appearing as subscripts.) In the same paper he used his Banach space argument to prove another interpolation theorem, involving Fourier-Stieltjes transforms.

On the other hand, I did not have functional analysis in mind at all; I did not think of the norms or of Banach spaces. I proved, by a bare-hands construction combined with the Riemann mapping theorem, that if \Omega is a closed Jordan domain containing f(E) then F can be chosen so that F(\overline{U}) also lies in \Omega. If \Omega is a disc centered at 0, this gives ||F||_{U}=||f||_{E}, so F is a norm-preserving extension.

What our proofs had in common is that we both used part of the construction that was used in the original proof of the F. and M. Riesz theorem (which says that if a measure \mu on \mathcal{T} gives \int fd\mu=0 for every f in A(U) then \mu is absolutely continuous with respect to Lebesgue measure). Carleson showed, in fact, that F. and M. Riesz can be derived quite easily from the interpolation theorem. I tried to prove the implication in the other direction. But that had to wait for Errett Bishop. In Proc. AMS 13, 1962, 140-143, he established this implication in a very general setting which had nothing to do with holomorphic functions or even with algebras, and which, combined with a refinement due to Glicksberg (Trans. AMS 105, 1962, 415-435) makes the interpolation theorem even more precise:

THEOREM: One can choose F in A(U) so that F(\zeta)=f(\zeta) at every \zeta in E, and |F(z)|<||f||_{E} at every z in \overline{U} \setminus E.

This is usually called peak-interpolation.

Several variable analogues of this and related results may be found in Chap. 6 of my Function Theory in Polydiscs and in Chap 10 of my Function Theory in the Unit Ball of C^{n}.

The last item in this chapter concerns Riemann surfaces. Some definitions are needed.

A finite Riemann surface is a connected open proper subset R of some compact Riemann surface X, such that the boundary \partial R of R in X is also the boundary of its closure \overline{R} and is the union of finitely many disjoint simple closed analytic curves \Gamma_{1}, \ldots, \Gamma_{k}. Shrinking each \Gamma_{i} to a point gives a compact orientable manifold whose genus g is defined to be the genus of R. The numbers g and k determine the topology of R, but not, of course, its conformal structure.

A(R) denotes the algebra of all continuous functions on \overline{R} that are holomorphic in R. If f is in A(R) and |f(p)|=1 at every point p in \partial R then, just as in U, f is called inner. A set S \subset A(R) is unramified if every point of \overline{R} has a neighbourhood in which at least one member of S is one-to-one.

I became interested in these algebras when Lee Stout (Math. Z. 92, 1966, 366-379; also 95, 1967, 403-404) showed that every A(R) contains an unramified triple of inner functions that separates points on \overline{R}. He deduced from the resulting embedding of R in U^{3} that A(R) is generated by these 3 functions. Whether every A(R) is generated by some pair of its members is still unknown, but the main result of my paper in Trans. AMS 150, 1969, 423-434 shows that pairs of inner functions won’t always do:

THEOREM: If A(R) contains a point-separating unramified pair f, g of inner functions, then there exist relatively prime integers s and t such that f is s-to-1 and g is t-to-1 on every \Gamma_{i}, and

(*) (ks-1)(kt-1)=2g+k-1

For example, when g=2 and k=4, then (*) holds for no integers s and t. When g=23 and k=4, then s=t=2 is the only pair that satisfies (*), but this pair is not relatively prime. Even when R=U the theorem gives some information. In that case, g=0, k=1, so (*) becomes (s-1)(t-1)=0, which means:

If a pair of finite Blaschke products separates points on \overline{U} and their derivatives have no common zero in U, then at least one of them is one-to-one (that is, a Möbius transformation).
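(Checking the arithmetic in these examples: for g=2, k=4 the right side of (*) is 2g+k-1=7, so one would need (4s-1)(4t-1)=7, and neither factorization 1 \times 7 nor 7 \times 1 gives integer values of s and t. For g=23, k=4 the right side is 49, and (4s-1)(4t-1)=49 forces 4s-1=4t-1=7, that is, s=t=2. For g=0, k=1 the right side is 0, so (*) reduces to (s-1)(t-1)=0.)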

There are two cases in which (*) is not  only necessary but also sufficient. This happens when g=0 and when g=k=1.

But there are examples in which the topological condition (*) is satisfied even though the conformal structure of R prevents the existence of a separating unramified pair of inner functions.

This paper is quite different from anything else that I have ever done. As far as I know, no one has ever referred to it, but I had fun working on it.

*************************************************************************************************

More blogs from Rudin’s autobiography later, till then,

Nalin Pithwa

Interchanging Limit Processes — by Walter Rudin

As I mentioned earlier, my thesis (Trans. AMS 68, 1950, 278-363) deals with uniqueness questions for series of spherical harmonics, also known as Laplace series. In the more familiar setting of trigonometric series, the first theorem of the kind that I was looking for was proved by Georg Cantor in 1870, based on earlier work of Riemann (1854, published in 1867). Using the notations

A_{n}(x)=a_{n} \cos{nx}+b_{n}\sin{nx},

s_{p}(x)=A_{0}+A_{1}(x)+ \ldots + A_{p}(x), where a_{n} and b_{n} are real numbers. Cantor’s theorem says:

If \lim_{p \rightarrow \infty}s_{p}(x)=0 at every real x, then a_{n}=b_{n}=0 for every n.

Therefore, two distinct trigonometric series cannot converge to the same sum. This is what is meant by uniqueness.

My aim was to prove this for spherical harmonics and (as had been done for trigonometric series) to whittle away at the hypothesis. Instead of assuming convergence at every point of the sphere, what sort of summability will do? Does one really need convergence (or summability) at every point? If not, what sort of sets can be omitted? Must anything else be assumed at these omitted points? What sort of side conditions, if any, are relevant?

I came up with reasonable answers to these questions, but basically the whole point seemed to be the justification of the interchange of some limit processes. This left me with an uneasy feeling that there ought to be more to Analysis than that. I wanted to do something with more “structure”. I could not have explained just what I meant by this, but I found it later when I became aware of the close relation between Fourier analysis and group theory, and also in an occasional encounter with number theory and with geometric aspects of several complex variables.

Why was it all an exercise in interchange of limits? Because the “obvious” proof of Cantor’s theorem goes like this: for p > n,

\pi a_{n}= \int_{-\pi}^{\pi}s_{p}(x)\cos{nx}\,dx = \lim_{p \rightarrow \infty}\int_{-\pi}^{\pi}s_{p}(x)\cos{nx}\,dx, which in turn equals

\int_{-\pi}^{\pi}(\lim_{p \rightarrow \infty}s_{p}(x))\cos{nx}\,dx= \int_{-\pi}^{\pi}0 \cdot \cos{nx}\,dx=0, and similarly for b_{n}. Note that \lim \int = \int \lim was used.

In Riemann’s above-mentioned paper, he derives the conclusion of Cantor’s theorem under an additional hypothesis, namely, a_{n} \rightarrow 0 and b_{n} \rightarrow 0 as n \rightarrow \infty. He associates to \sum {A_{n}(x)} the twice integrated series

F(x)=-\sum_{1}^{\infty}n^{-2}A_{n}(x)

and then finds it necessary to prove, in some detail, that this series converges and that its sum F is continuous! (Weierstrass had not yet invented uniform convergence.) This is astonishingly different from most of his other publications, such as his paper on hypergeometric functions in which mind-boggling relations and transformations are merely stated, with only a few hints, or his painfully brief paper on the zeta-function.

In Crelle’s J. 73, 1870, 130-138, Cantor showed that Riemann’s additional hypothesis was redundant, by proving that

(*) \lim_{n \rightarrow \infty}A_{n}(x)=0 for all x implies \lim_{n \rightarrow \infty}a_{n}= \lim_{n \rightarrow \infty}b_{n}=0.

He included the statement: This cannot be proved, as is commonly believed, by term-by-term integration.

Apparently, it took a while before this was generally understood. Ten years later, in Math. Annalen 16, 1880, 113-114, he patiently explains the difference between pointwise convergence and uniform convergence, in order to refute a “simpler proof” published by Appell. But then, referring to his second (still quite complicated) proof, the one in Math. Annalen 4, 1871, 139-143, he sticks his neck out and writes: “In my opinion, no further simplification can be achieved, given the nature of the subject.”

That was a bit reckless. 25 years later, Lebesgue’s dominated convergence theorem became part of every analyst’s tool chest, and since then (*) can be proved in a few lines:

Rewrite A_{n}(x) in the form A_{n}(x)=c_{n}\cos{(nx+\alpha_{n})}, where c_{n}^{2}=a_{n}^{2}+b_{n}^{2}. Put

\gamma_{n}=\min \{1, |c_{n}| \}, \quad B_{n}(x)=\gamma_{n}\cos{(nx+\alpha_{n})}.

Then B_{n}^{2}(x) \leq 1, and B_{n}^{2}(x) \rightarrow 0 at every x (because \gamma_{n} \leq |c_{n}|, so |B_{n}(x)| \leq |A_{n}(x)|), so that the dominated convergence theorem, combined with

\int_{-\pi}^{\pi}B_{n}^{2}(x)\,dx=\pi \gamma_{n}^{2}, shows that \gamma_{n} \rightarrow 0. Therefore, |c_{n}|=\gamma_{n} for all large n, and c_{n} \rightarrow 0. Done.

The point of all this is that my attitude was probably wrong. Interchanging limit processes occupied some of the best mathematicians for most of the 19th century. Thomas Hawkins’ book “Lebesgue’s Theory” gives an excellent description of the difficulties that they had to overcome. Perhaps, we should not be too surprised that even a hundred years later many students are baffled by uniform convergence, uniform continuity etc., and that some never get it at all.

In Trans. AMS 70, 1951, 387-403, I applied the techniques of my thesis to another problem of this type, with Hermite functions in place of spherical harmonics.

(Note: The above article has been picked from Walter Rudin’s book, “The Way I Remember It”. Hope it helps advanced graduates in Analysis.)

More later,

Nalin Pithwa