Spectral theory and special functions
Abstract.
A short introduction to the use of the spectral theorem for self-adjoint operators in the theory of special functions is given. As the first example, the spectral theorem is applied to Jacobi operators, i.e. tridiagonal operators, on $\ell^2(\mathbb{Z}_{\geq 0})$, leading to a proof of Favard's theorem stating that polynomials satisfying a three-term recurrence relation are orthogonal polynomials. We discuss the link to the moment problem. In the second example, the spectral theorem is applied to Jacobi operators on $\ell^2(\mathbb{Z})$. We discuss the theorem of Masson and Repka linking the deficiency indices of a Jacobi operator on $\ell^2(\mathbb{Z})$ to those of two Jacobi operators on $\ell^2(\mathbb{Z}_{\geq 0})$. For two examples of Jacobi operators on $\ell^2(\mathbb{Z})$, namely for the Meixner, respectively Meixner–Pollaczek, functions, related to the associated Meixner, respectively Meixner–Pollaczek, polynomials, and for the second order hypergeometric difference operator, we calculate the spectral measure explicitly. This gives explicit (generalised) orthogonality relations for hypergeometric and basic hypergeometric series.
Last revision: July 5, 2001
Contents.
1. Introduction
2. The spectral theorem
2.1. Hilbert spaces and bounded operators
2.2. The spectral theorem for bounded self-adjoint operators
2.3. Unbounded self-adjoint operators
2.4. The spectral theorem for unbounded self-adjoint operators
3. Orthogonal polynomials and Jacobi operators
3.1. Orthogonal polynomials
3.2. Moment problems
3.3. Jacobi operators
3.4. Unbounded Jacobi operators
4. Doubly infinite Jacobi operators
4.1. Doubly infinite Jacobi operators
4.2. Relation with Jacobi operators
4.3. The Green kernel
4.4. Example: the Meixner functions
4.5. Example: the basic hypergeometric difference operator
References
1. Introduction
In these lecture notes we give a short introduction to the use of spectral theory in the theory of special functions. Conversely, special functions can be used to determine explicitly the spectral measures of explicit operators on a Hilbert space. The main ingredient from functional analysis that we use is the spectral theorem, both for bounded and unbounded self-adjoint operators. We recall the main results of the theory in §2, hoping that the lecture notes become more self-contained in this way. For differential operators this is a very well-known subject, and one can consult e.g. Dunford and Schwartz [7].
In §3 we discuss Jacobi operators on $\ell^2(\mathbb{Z}_{\geq 0})$, and we present the link to orthogonal polynomials. Jacobi operators on $\ell^2(\mathbb{Z}_{\geq 0})$ are symmetric tridiagonal matrices, and the link to orthogonal polynomials goes via the three-term recurrence relation for orthogonal polynomials. We prove Favard's theorem in this setting, which is more or less equivalent to the spectral decomposition of the Jacobi operator involved. We first discuss the bounded case. Next we discuss the unbounded case and its link to the classical Hamburger moment problem. This is very classical, and it can be traced back at least to Stone's book [22]. This section is much inspired by Akhiezer [1], Berezanskiĭ [2, §7.1], Deift [5, Ch. 2] and Simon [20], and it can be viewed as an introduction to [1] and [20]. Especially Simon's paper [20] is recommended for further reading on the subject. See also the recent book [23] by Teschl on Jacobi operators and the relation to non-linear lattices, and Deift [5, Ch. 2] for the example of the Toda lattice. We recall this material on orthogonal polynomials and Jacobi operators, since it is an important ingredient in §4.
In §4 we discuss Jacobi operators on $\ell^2(\mathbb{Z})$, and we give the link, due to Masson and Repka [17], between a Jacobi operator on $\ell^2(\mathbb{Z})$ and two Jacobi operators on $\ell^2(\mathbb{Z}_{\geq 0})$, stating in particular that the Jacobi operator on $\ell^2(\mathbb{Z})$ is (essentially) self-adjoint if and only if the two Jacobi operators on $\ell^2(\mathbb{Z}_{\geq 0})$ are (essentially) self-adjoint. Next we discuss the example of the Meixner functions in detail, following Masson and Repka [17], but the spectral measure is now completely worked out. In this case the spectral measure is purely discrete. If we restrict the Jacobi operator acting on $\ell^2(\mathbb{Z})$ to a Jacobi operator on $\ell^2(\mathbb{Z}_{\geq 0})$, we obtain the Jacobi operator for the associated Meixner polynomials. The case of the Meixner–Pollaczek functions is briefly considered. As another example we discuss the second order hypergeometric difference operator. In this example the spectral measure has a continuous part and a discrete part. Here we follow Kakehi [10] and [13, App. A]. These operators occur naturally in the representation theory of the Lie algebra $\mathfrak{su}(1,1)$, see [17], or of the corresponding quantised universal enveloping algebra, see [13]. Here the action of certain elements from the Lie algebra or the quantised universal enveloping algebra is tridiagonal, and one needs to obtain the spectral resolution. It is precisely this interpretation that leads to the limit transition discussed in 4.5.
However, from the point of view of special functions, the Hilbert space $\ell^2(\mathbb{Z})$ is generally not the appropriate Hilbert space in which to diagonalise a second order difference operator. For the two examples in §4 this works nicely, as shown there. In particular this is true for the series treated in §4; see also [4] for another example. But for natural extensions of this situation to higher levels of special functions this Hilbert space is not good enough. We refer to [12] for the case of a second order difference operator whose eigenfunctions correspond to the big Jacobi functions, and to [15] for the case of a second order difference operator whose eigenfunctions correspond to the Askey–Wilson functions. For more references and examples of spectral analysis of second order difference equations we refer to [14].
There is a huge amount of material on orthogonal polynomials, and there is a great number of good introductions to orthogonal polynomials, the moment problem, and the functional analysis used here. For orthogonal polynomials I have used [3], [5, Ch. 2] and [24]. For the moment problem there are the classics by Shohat and Tamarkin [19] and Stone [22], see also Akhiezer [1], Simon [20] and, of course, Stieltjes’s original paper [21] that triggered the whole subject. The spectral theorem can be found in many places, e.g. Dunford and Schwartz [7] and Rudin [18].
Naturally, there are many more instances of the use of functional analytic results that can be applied to special functions. As an example of a more qualitative question you can wonder how perturbation of the coefficients in the threeterm recurrence relation for orthogonal polynomials affects the orthogonality measure, i.e. the spectral measure of the associated Jacobi operator. Results of this kind can be obtained by using perturbation results from functional analysis, see e.g. Dombrowski [6] and further references given there.
Acknowledgement. I thank the organisers of the summer school, especially Renato Álvarez-Nodarse and Francisco Marcellán, for inviting me to give lectures on this subject. Moreover, I thank the participants, as well as Johan Kustermans, Hjalmar Rosengren and especially Wolter Groenevelt, for bringing errors to my attention.
2. The spectral theorem
2.1. Hilbert spaces and bounded operators
(2.1.1) A vector space $V$ over $\mathbb{C}$ is an inner product space if there exists a mapping $\langle\cdot,\cdot\rangle\colon V\times V\to\mathbb{C}$ such that for all $u,v,w\in V$ and for all $a,b\in\mathbb{C}$ we have (i) $\langle v,w\rangle=\overline{\langle w,v\rangle}$, (ii) $\langle au+bv,w\rangle=a\langle u,w\rangle+b\langle v,w\rangle$, and (iii) $\langle v,v\rangle\geq 0$, and $\langle v,v\rangle=0$ if and only if $v=0$. With the inner product we associate the norm $\|v\|=\sqrt{\langle v,v\rangle}$, and the topology from the corresponding metric $d(v,w)=\|v-w\|$. The standard inequality is the Cauchy–Schwarz inequality, $|\langle v,w\rangle|\leq\|v\|\,\|w\|$. A Hilbert space $H$ is a complete inner product space, i.e. for any Cauchy sequence $\{v_n\}_{n=0}^\infty$ in $H$, i.e. such that $\|v_n-v_m\|\to 0$ as $n,m\to\infty$, there exists an element $v\in H$ such that $v_n$ converges to $v$. In these notes all Hilbert spaces are separable, i.e. there exists a denumerable set of basis vectors.
(2.1.2) Example. $\ell^2(\mathbb{Z}_{\geq 0})$, the space of square summable sequences $\{c_k\}_{k=0}^\infty$, and $\ell^2(\mathbb{Z})$, the space of square summable sequences $\{c_k\}_{k=-\infty}^\infty$, are Hilbert spaces. The inner product is given by $\langle c,d\rangle=\sum_k c_k\overline{d_k}$. An orthonormal basis is given by the sequences $\{e_k\}$ defined by $(e_k)_l=\delta_{kl}$, so we identify an element $v=\sum_k v_ke_k$ with the square summable sequence $\{v_k\}$ of its coordinates.
(2.1.3) Example. We consider a positive Borel measure $\mu$ on the real line such that all moments exist, i.e. $\int_{\mathbb{R}}|x|^n\,d\mu(x)<\infty$ for all $n\in\mathbb{Z}_{\geq 0}$. Without loss of generality we assume that $\mu$ is a probability measure, $\int_{\mathbb{R}}d\mu(x)=1$. By $L^2(\mu)$ we denote the space of square integrable functions on $\mathbb{R}$, i.e. functions $f$ with $\int_{\mathbb{R}}|f(x)|^2\,d\mu(x)<\infty$. Then $L^2(\mu)$ is a Hilbert space (after identifying two functions $f$ and $g$ for which $\int_{\mathbb{R}}|f(x)-g(x)|^2\,d\mu(x)=0$) with respect to the inner product $\langle f,g\rangle=\int_{\mathbb{R}}f(x)\overline{g(x)}\,d\mu(x)$. In case $\mu$ is a finite sum of discrete Dirac measures, we find that $L^2(\mu)$ is finite dimensional.
(2.1.4) An operator $T\colon H_1\to H_2$ from a Hilbert space $H_1$ into another Hilbert space $H_2$ is linear if for all $u,v\in H_1$ and for all $a,b\in\mathbb{C}$ we have $T(au+bv)=aTu+bTv$. An operator $T$ is bounded if there exists a constant $M$ such that $\|Tv\|\leq M\|v\|$ for all $v\in H_1$. The smallest $M$ for which this holds is the norm, denoted by $\|T\|$, of $T$. A bounded linear operator is continuous. The adjoint of a bounded linear operator $T\colon H\to H$ is the operator $T^*$ with $\langle Tv,w\rangle=\langle v,T^*w\rangle$ for all $v,w\in H$. We call $T$ self-adjoint if $T^*=T$. It is unitary if $T$ is a bijection and $T^*T=TT^*=I$. A projection $P$ is a linear map such that $P^2=P$.
(2.1.5) We are also interested in unbounded linear operators. In that case we denote the operator by $(T,D(T))$, where $D(T)$, the domain of $T$, is a linear subspace of $H$ and $T\colon D(T)\to H$. Then $T$ is densely defined if the closure of $D(T)$ equals $H$. All unbounded operators that we consider in these notes are densely defined. If the operator $T-z$, $z\in\mathbb{C}$, has an inverse $(T-z)^{-1}$ which is densely defined and is bounded, so that $(T-z)^{-1}$, the resolvent operator, extends to a bounded linear operator on $H$, then we call $z$ a regular value. The set of all regular values is the resolvent set $\rho(T)$. The complement of the resolvent set in $\mathbb{C}$ is the spectrum $\sigma(T)$ of $T$. The point spectrum is the subset of the spectrum for which $T-z$ is not one-to-one. In this case there exists a vector $v\neq 0$ such that $Tv=zv$, and $z$ is an eigenvalue. The continuous spectrum consists of the points $z$ for which $T-z$ is one-to-one and the range of $T-z$ is dense in $H$, but not equal to $H$. The remaining part of the spectrum is the residual spectrum. For self-adjoint operators, both bounded and unbounded, see 2.3, the spectrum only consists of the point spectrum and the continuous spectrum.
(2.1.6) For a bounded operator $T$ the spectrum is a compact subset of the disk of radius $\|T\|$ in $\mathbb{C}$. Moreover, if $T$ is self-adjoint, then $\sigma(T)\subset\mathbb{R}$, so that $\sigma(T)\subseteq[-\|T\|,\|T\|]$, and the spectrum consists of the point spectrum and the continuous spectrum.
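In finite dimensions these statements can be checked directly: a real symmetric matrix is a bounded self-adjoint operator, its spectrum is its set of eigenvalues, and the spectral radius equals the operator norm. The following small numerical sketch is an illustration added here (all names are mine, not from the original notes):

```python
import numpy as np

# A self-adjoint operator on a finite-dimensional Hilbert space is a
# Hermitian matrix; here a real symmetric one, with real eigenvalues.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
T = (A + A.T) / 2                      # symmetric, hence self-adjoint

eigenvalues = np.linalg.eigvalsh(T)    # real spectrum
operator_norm = np.linalg.norm(T, 2)   # spectral norm ||T||

# The spectrum lies in [-||T||, ||T||], and for a self-adjoint
# operator the spectral radius actually equals the norm.
assert np.all(np.abs(eigenvalues) <= operator_norm + 1e-12)
assert np.isclose(np.max(np.abs(eigenvalues)), operator_norm)
```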
2.2. The spectral theorem for bounded self-adjoint operators
(2.2.1) A resolution of the identity, say $E$, of a Hilbert space $H$ is a projection valued Borel measure on $\mathbb{R}$ such that for all Borel sets $A,B\subseteq\mathbb{R}$ we have (i) $E(A)$ is a self-adjoint projection, (ii) $E(A\cap B)=E(A)E(B)$, (iii) $E(\emptyset)=0$, $E(\mathbb{R})=I$, (iv) $A\cap B=\emptyset$ implies $E(A\cup B)=E(A)+E(B)$, and (v) for all $v,w\in H$ the map $A\mapsto E_{v,w}(A)=\langle E(A)v,w\rangle$ is a complex Borel measure.
(2.2.2) A generalisation of the spectral theorem for matrices is the following theorem for bounded self-adjoint operators, see [7, §X.2], [18, §12.22].
Theorem .
(Spectral theorem) Let $T$ be a bounded self-adjoint linear map on the Hilbert space $H$, then there exists a unique resolution of the identity $E$ such that $T=\int_{\mathbb{R}}t\,dE(t)$, i.e. $\langle Tv,w\rangle=\int_{\mathbb{R}}t\,dE_{v,w}(t)$ for all $v,w\in H$. Moreover, $E$ is supported on the spectrum $\sigma(T)$, which is contained in the interval $[-\|T\|,\|T\|]$. Moreover, any of the spectral projections $E(A)$, $A$ a Borel set, commutes with $T$.
A more general theorem of this kind holds for normal operators, i.e. for those operators satisfying $TT^*=T^*T$.
(2.2.3) Using the spectral theorem we define for any continuous function $f$ on the spectrum of $T$ the operator $f(T)$ by $f(T)=\int_{\mathbb{R}}f(t)\,dE(t)$, i.e. $\langle f(T)v,w\rangle=\int_{\mathbb{R}}f(t)\,dE_{v,w}(t)$. Then $f(T)$ is a bounded operator with norm equal to the supremum norm of $f$ on the spectrum of $T$, i.e. $\|f(T)\|=\sup_{t\in\sigma(T)}|f(t)|$. This is known as the functional calculus for self-adjoint operators. In particular, for $z\in\mathbb{C}\setminus\sigma(T)$ we see that $t\mapsto(t-z)^{-1}$ is continuous on the spectrum, and the corresponding operator is just the resolvent operator as in 2.1. The functional calculus can be extended to bounded measurable functions, but then only $\|f(T)\|\leq\sup_{t\in\sigma(T)}|f(t)|$ holds.
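In finite dimensions the functional calculus is simply evaluation of $f$ on the eigenvalues in an eigenbasis. The sketch below is an added illustration (the helper names are mine); it checks that $f(t)=(t-z)^{-1}$ reproduces the resolvent and that $\|f(T)\|$ is the supremum of $|f|$ over the spectrum:

```python
import numpy as np

# Finite-dimensional functional calculus: if T = U diag(t) U^* with
# real eigenvalues t, then f(T) = U diag(f(t)) U^*.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
T = (A + A.T) / 2                      # self-adjoint

t, U = np.linalg.eigh(T)               # T = U diag(t) U^T

def f_of_T(f):
    return U @ np.diag(f(t)) @ U.T

# f(t) = 1/(t - z) gives the resolvent (T - z)^{-1} for z off the spectrum.
z = 2j
R = f_of_T(lambda s: 1.0 / (s - z))
assert np.allclose(R, np.linalg.inv(T - z * np.eye(4)))

# ||f(T)|| equals the supremum of |f| over the spectrum.
assert np.isclose(np.linalg.norm(f_of_T(np.exp), 2), np.exp(t).max())
```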
(2.2.4) The spectral measure $E$ can be obtained from the resolvent operators by the Stieltjes–Perron inversion formula, see [7, Thm. X.6.1].
Theorem .
The spectral measure of the open interval $(a,b)$ is given by
$$E\big((a,b)\big)=\lim_{\delta\downarrow 0}\lim_{\varepsilon\downarrow 0}\frac{1}{2\pi i}\int_{a+\delta}^{b-\delta}\Big((T-t-i\varepsilon)^{-1}-(T-t+i\varepsilon)^{-1}\Big)\,dt.$$
The limit holds in the strong operator topology, i.e. the limit relation holds after applying the operators to an arbitrary vector $v\in H$.
2.3. Unbounded self-adjoint operators
(2.3.1) Let $(T,D(T))$, with $D(T)$ the domain of $T$, be a densely defined unbounded operator on $H$, see 2.1. We can now define the adjoint operator $(T^*,D(T^*))$ as follows. First define
$$D(T^*)=\{w\in H\mid v\mapsto\langle Tv,w\rangle\ \text{is continuous on}\ D(T)\}.$$
By the density of $D(T)$ the map $v\mapsto\langle Tv,w\rangle$ for $w\in D(T^*)$ extends to a continuous linear functional on $H$, and by the Riesz representation theorem there exists a unique element $T^*w\in H$ such that $\langle Tv,w\rangle=\langle v,T^*w\rangle$ for all $v\in D(T)$. Now the adjoint is defined by $w\mapsto T^*w$ on the domain $D(T^*)$, so that
$$\langle Tv,w\rangle=\langle v,T^*w\rangle,\qquad v\in D(T),\ w\in D(T^*).$$
(2.3.2) If $S$ and $T$ are unbounded operators on $H$, then $T$ extends $S$, notation $S\subseteq T$, if $D(S)\subseteq D(T)$ and $Sv=Tv$ for all $v\in D(S)$. Two unbounded operators $S$ and $T$ are equal, $S=T$, if $S\subseteq T$ and $T\subseteq S$, i.e. $S$ and $T$ have the same domain and act in the same way. In terms of the graph
$$G(T)=\{(v,Tv)\mid v\in D(T)\}\subset H\times H$$
we see that $S\subseteq T$ if and only if $G(S)\subseteq G(T)$. An operator is closed if its graph is closed in the product topology of $H\times H$. The adjoint of a densely defined operator is a closed operator, since the graph of the adjoint is given as
$$G(T^*)=\{(Tv,-v)\mid v\in D(T)\}^{\perp}$$
for the inner product $\langle(v_1,v_2),(w_1,w_2)\rangle=\langle v_1,w_1\rangle+\langle v_2,w_2\rangle$ on $H\times H$, see [18, 13.8].
(2.3.3) A densely defined operator $T$ is symmetric if $T\subseteq T^*$, or, using the definition in 2.3,
$$\langle Tv,w\rangle=\langle v,Tw\rangle,\qquad v,w\in D(T).$$
A densely defined operator $T$ is self-adjoint if $T=T^*$, so that a self-adjoint operator is closed. The spectrum of an unbounded self-adjoint operator is contained in $\mathbb{R}$. Note that for a symmetric operator $D(T)\subseteq D(T^*)$, so that $D(T^*)$ is a dense subspace, and taking the adjoint once more gives $T^{**}$ as the minimal closed extension of $T$, i.e. any densely defined symmetric operator has a closed extension. We have $T\subseteq T^{**}\subseteq T^*$. We say that the densely defined symmetric operator $T$ is essentially self-adjoint if its closure is self-adjoint, i.e. if $T^{**}=T^*$.
(2.3.4) In general, a densely defined symmetric operator might not have self-adjoint extensions. This can be measured by the deficiency indices. Define for $z\in\mathbb{C}\setminus\mathbb{R}$ the eigenspace
$$N_z=\{v\in D(T^*)\mid T^*v=zv\}.$$
Then $\dim N_z$ is constant for $\Im z>0$ and for $\Im z<0$, [7, Thm. XII.4.19], and we put $n_+=\dim N_i$ and $n_-=\dim N_{-i}$. The pair $(n_+,n_-)$ are the deficiency indices for the densely defined symmetric operator $T$. Note that if $T$ commutes with complex conjugation of the Hilbert space, then we automatically have $n_+=n_-$. Note furthermore that if $T$ is self-adjoint then $n_+=n_-=0$, since a self-adjoint operator cannot have non-real eigenvalues. Now the following holds, see [7, §XII.4].
Proposition .
Let $T$ be a densely defined symmetric operator.
(i) $D(T^*)=D(T^{**})\oplus N_i\oplus N_{-i}$, as an orthogonal direct sum with respect to the graph norm for $T^*$ from 2.3. As a direct sum, $D(T^*)=D(T^{**})\oplus N_z\oplus N_{\bar z}$ for general $z\in\mathbb{C}\setminus\mathbb{R}$.
(ii) Let $U\colon N_i\to N_{-i}$ be an isometric bijection and define $(T_U,D(T_U))$ by
$$D(T_U)=\{v+u+Uu\mid v\in D(T^{**}),\ u\in N_i\},\qquad T_U(v+u+Uu)=T^{**}v+iu-iUu,$$
then $T_U$ is a self-adjoint extension of $T$, and every self-adjoint extension of $T$ arises in this way.
In particular, $T$ has self-adjoint extensions if and only if the deficiency indices are equal; $n_+=n_-$. $T^{**}$ is a closed symmetric extension of $T$. We can also characterise the domains of the self-adjoint extensions of $T$ using the sesquilinear form
$$B(v,w)=\langle T^*v,w\rangle-\langle v,T^*w\rangle,\qquad v,w\in D(T^*),$$
then the domain of any self-adjoint extension of $T$ is a maximal subspace of $D(T^*)$ on which $B$ vanishes.
2.4. The spectral theorem for unbounded self-adjoint operators
(2.4.1) With all the preparations of the previous subsection the Spectral Theorem 2.2 goes through in the unbounded setting, see [7, §XII.4], [18, Ch. 13].
Theorem .
(Spectral theorem) Let $(T,D(T))$ be an unbounded self-adjoint linear operator, then there exists a unique resolution of the identity $E$ such that $T=\int_{\mathbb{R}}t\,dE(t)$, i.e. for $v\in D(T)$, $w\in H$, $\langle Tv,w\rangle=\int_{\mathbb{R}}t\,dE_{v,w}(t)$. Moreover, $E$ is supported on the spectrum $\sigma(T)$, which is contained in $\mathbb{R}$. For any bounded operator $S$ that satisfies $ST\subseteq TS$ we have $SE(A)=E(A)S$, $A$ a Borel set. Moreover, the Stieltjes–Perron inversion formula 2.2 remains valid;
$$E\big((a,b)\big)=\lim_{\delta\downarrow 0}\lim_{\varepsilon\downarrow 0}\frac{1}{2\pi i}\int_{a+\delta}^{b-\delta}\Big((T-t-i\varepsilon)^{-1}-(T-t+i\varepsilon)^{-1}\Big)\,dt.$$
(2.4.2) As in 2.2 we can now define $f(T)$ for any measurable function $f$ by $\langle f(T)v,w\rangle=\int_{\mathbb{R}}f(t)\,dE_{v,w}(t)$, where
$$D(f(T))=\Big\{v\in H\ \Big|\ \int_{\mathbb{R}}|f(t)|^2\,dE_{v,v}(t)<\infty\Big\}$$
is the domain of $f(T)$. This makes $f(T)$ into a densely defined closed operator. In particular, if $f$ is bounded on $\sigma(T)$, then $D(f(T))=H$, so that $f(T)$ is a continuous operator by the closed graph theorem. This in particular applies to $f(t)=(t-z)^{-1}$, $z\in\mathbb{C}\setminus\mathbb{R}$, which gives the resolvent operator.
3. Orthogonal polynomials and Jacobi operators
3.1. Orthogonal polynomials
(3.1.1) Consider the Hilbert space $L^2(\mu)$ as in Example 2.1. Assume that all moments of $\mu$ exist, so that all polynomials are square integrable. In applying the Gram–Schmidt orthogonalisation process to the sequence $\{x^n\}_{n=0}^\infty$ we may end up in one of the following situations: (a) the polynomials are linearly dependent in $L^2(\mu)$, or (b) the polynomials are linearly independent in $L^2(\mu)$. In case (a) it follows that there is a non-zero polynomial $p$ such that $\|p\|=0$. This implies that $\mu$ is a finite sum of Dirac measures supported at the zeros of $p$. From now on we exclude this case, but the reader may consider this case him/herself. In case (b) we end up with a set of orthonormal polynomials as in the following definition.
Definition .
A sequence of polynomials $\{p_n\}_{n=0}^\infty$ with $\deg p_n=n$ is a set of orthonormal polynomials with respect to $\mu$ if $\int_{\mathbb{R}}p_n(x)\overline{p_m(x)}\,d\mu(x)=\delta_{nm}$.
Note that the polynomials $p_n$ are real-valued for $x\in\mathbb{R}$, so that their coefficients are real. Moreover, from the Gram–Schmidt process it follows that the leading coefficient of $p_n$ is positive.
(3.1.2) Note that only the moments $m_n=\int_{\mathbb{R}}x^n\,d\mu(x)$ of $\mu$ play a role in the orthogonalisation process. The Stieltjes transform of the measure $\mu$ defined by
$$S(z)=\int_{\mathbb{R}}\frac{d\mu(x)}{x-z},\qquad z\in\mathbb{C}\setminus\mathbb{R},$$
can be considered as a generating function for the moments of $\mu$. Indeed, formally,
(3.1) $$S(z)=-\sum_{n=0}^{\infty}m_n\,z^{-n-1}.$$
In case $\operatorname{supp}(\mu)\subseteq[-R,R]$ we see that $|m_n|\leq R^n$, implying that the series in (3.1) is absolutely convergent for $|z|>R$. In this case we see that the Stieltjes transform of $\mu$ is completely determined by the moments of $\mu$. In general, this expansion has to be interpreted as an asymptotic expansion of the Stieltjes transform as $|z|\to\infty$. We now give a proof of the Stieltjes inversion formula, cf. 2.2.
Proposition .
Let $\mu$ be a probability measure with finite moments, and let $S$ be its Stieltjes transform, then
$$\lim_{\varepsilon\downarrow 0}\frac{1}{2\pi i}\int_a^b\big(S(x+i\varepsilon)-S(x-i\varepsilon)\big)\,dx=\mu\big((a,b)\big)+\tfrac12\mu(\{a\})+\tfrac12\mu(\{b\}).$$
Proof.
Observe that
$$S(x+i\varepsilon)-S(x-i\varepsilon)=\int_{\mathbb{R}}\Big(\frac{1}{y-x-i\varepsilon}-\frac{1}{y-x+i\varepsilon}\Big)\,d\mu(y)=\int_{\mathbb{R}}\frac{2i\varepsilon}{(y-x)^2+\varepsilon^2}\,d\mu(y),$$
so that
$$\frac{1}{2\pi i}\big(S(x+i\varepsilon)-S(x-i\varepsilon)\big)=\frac{1}{\pi}\int_{\mathbb{R}}\frac{\varepsilon}{(y-x)^2+\varepsilon^2}\,d\mu(y).$$
Integrating this expression and interchanging the order of integration, which is allowed since the integrand is positive, gives
(3.2) $$\frac{1}{2\pi i}\int_a^b\big(S(x+i\varepsilon)-S(x-i\varepsilon)\big)\,dx=\int_{\mathbb{R}}\frac{1}{\pi}\int_a^b\frac{\varepsilon}{(y-x)^2+\varepsilon^2}\,dx\,d\mu(y).$$
The inner integration can be carried out easily;
$$\frac{1}{\pi}\int_a^b\frac{\varepsilon}{(y-x)^2+\varepsilon^2}\,dx=\frac{1}{\pi}\Big(\arctan\frac{b-y}{\varepsilon}-\arctan\frac{a-y}{\varepsilon}\Big)$$
by the substitution $x=y+\varepsilon\tan\theta$. It follows that the inner integral is bounded by $1$, and
$$\lim_{\varepsilon\downarrow 0}\frac{1}{\pi}\Big(\arctan\frac{b-y}{\varepsilon}-\arctan\frac{a-y}{\varepsilon}\Big)=\begin{cases}1,& y\in(a,b),\\ \tfrac12,& y\in\{a,b\},\\ 0,& y\notin[a,b].\end{cases}$$
It suffices to show that we can interchange integration and the limit in (3.2). This follows from Lebesgue's dominated convergence theorem, since $\mu$ is a probability measure and the inner integral is bounded by $1$. ∎
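For a discrete measure the inversion formula can be tested numerically. The sketch below is an illustration added here (not from the original text): it takes $\mu=\tfrac12(\delta_0+\delta_1)$, whose Stieltjes transform is a finite sum, and approximately recovers $\mu\big((-\tfrac12,\tfrac12)\big)=\tfrac12$ for a small fixed $\varepsilon$:

```python
import numpy as np

# mu = (delta_0 + delta_1)/2, with Stieltjes transform
# S(z) = sum_k w_k / (x_k - z).
points = np.array([0.0, 1.0])
weights = np.array([0.5, 0.5])

def S(z):
    return np.sum(weights / (points - z))

# Approximate (1/2 pi i) int_a^b (S(x + i eps) - S(x - i eps)) dx
# over (a, b) = (-1/2, 1/2), which contains only the mass at 0.
eps = 1e-2
xs = np.linspace(-0.5, 0.5, 20001)
vals = np.array([(S(x + 1j * eps) - S(x - 1j * eps)) / (2j * np.pi) for x in xs])
approx = float(np.sum(vals.real) * (xs[1] - xs[0]))   # Riemann sum

# mu((-1/2, 1/2)) = 1/2; the error is O(eps) for fixed (a, b).
assert abs(approx - 0.5) < 0.02
```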
We need the following extension of this inversion formula in 3.3. For a polynomial $p$ with real coefficients we have
(3.3) $$\lim_{\varepsilon\downarrow 0}\frac{1}{2\pi i}\int_a^b\big(p(x+i\varepsilon)S(x+i\varepsilon)-p(x-i\varepsilon)S(x-i\varepsilon)\big)\,dx=\int_{(a,b)}p(x)\,d\mu(x)+\tfrac12 p(a)\mu(\{a\})+\tfrac12 p(b)\mu(\{b\}).$$
We indicate how the proof of the proposition can be extended to obtain (3.3). Start with
$$p(x\pm i\varepsilon)S(x\pm i\varepsilon)=\int_{\mathbb{R}}\frac{p(x\pm i\varepsilon)-p(y)}{y-x\mp i\varepsilon}\,d\mu(y)+\int_{\mathbb{R}}\frac{p(y)}{y-x\mp i\varepsilon}\,d\mu(y).$$
Integrate this expression with respect to $x$ over $[a,b]$ and interchange the order of integration. This time we have to evaluate two integrals. The first integral involves the difference quotient $q(w,y)=\big(p(y)-p(w)\big)/(y-w)$, which is a polynomial in $w$ and $y$, so that the contribution
$$\frac{1}{2\pi i}\int_a^b\int_{\mathbb{R}}\Big(\frac{p(x+i\varepsilon)-p(y)}{y-x-i\varepsilon}-\frac{p(x-i\varepsilon)-p(y)}{y-x+i\varepsilon}\Big)\,d\mu(y)\,dx$$
can be estimated using $q(x+i\varepsilon,y)-q(x-i\varepsilon,y)=O(\varepsilon)$, uniformly for $x\in[a,b]$. This term tends to zero as $\varepsilon\downarrow 0$, since $\mu$ has finite moments. The second integral is handled as in the proof of the proposition, with $d\mu(y)$ replaced by $p(y)\,d\mu(y)$, and gives the right hand side of (3.3).
(3.1.3) The following theorem describes the fundamental property of orthogonal polynomials in these notes.
Theorem .
(Three-term recurrence relation) Let $\{p_n\}_{n=0}^\infty$ be a set of orthonormal polynomials in $L^2(\mu)$, then there exist sequences $\{a_n\}_{n=1}^\infty$, $\{b_n\}_{n=0}^\infty$, with $a_n>0$ and $b_n\in\mathbb{R}$, such that
(3.4) $$x\,p_n(x)=a_{n+1}p_{n+1}(x)+b_np_n(x)+a_np_{n-1}(x),\qquad n\geq 1,$$
(3.5) $$x\,p_0(x)=a_1p_1(x)+b_0p_0(x).$$
Moreover, if $\mu$ is compactly supported, then the coefficients $a_n$ and $b_n$ are bounded.
Note that (3.4), (3.5) together with the initial condition $p_0\equiv 1$ completely determine the polynomials $p_n$ for all $n$.
Proof.
The degree of $x\,p_n(x)$ is $n+1$, so there exist constants $c_{n,k}$ such that $x\,p_n(x)=\sum_{k=0}^{n+1}c_{n,k}\,p_k(x)$. By the orthonormality properties of the polynomials $p_n$ it follows that
$$c_{n,k}=\int_{\mathbb{R}}x\,p_n(x)p_k(x)\,d\mu(x).$$
Since the degree of $x\,p_k(x)$ is $k+1$, we see that $c_{n,k}=0$ for $k<n-1$. Then $b_n=c_{n,n}=\int_{\mathbb{R}}x\,p_n(x)^2\,d\mu(x)\in\mathbb{R}$. Moreover, $c_{n,n-1}=c_{n-1,n}$, and putting $a_n=c_{n,n-1}$ displays the required structure for the other coefficients. The positivity of $a_n$ follows by considering the leading coefficients on both sides of (3.4).
For the last statement we observe that
$$|a_n|=\Big|\int_{\mathbb{R}}x\,p_n(x)p_{n-1}(x)\,d\mu(x)\Big|\leq\sup_{x\in\operatorname{supp}(\mu)}|x|\;\|p_n\|\,\|p_{n-1}\|=\sup_{x\in\operatorname{supp}(\mu)}|x|<\infty,$$
since $\|p_n\|=1$ and $\operatorname{supp}(\mu)$ is compact. In the second inequality we have used the Cauchy–Schwarz inequality 2.1. Similarly,
$$|b_n|=\Big|\int_{\mathbb{R}}x\,p_n(x)^2\,d\mu(x)\Big|\leq\sup_{x\in\operatorname{supp}(\mu)}|x|$$
gives the estimate on the coefficients $b_n$. ∎
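The recurrence can also be run numerically. The sketch below is an added illustration (the helper names are mine): it uses the classical special case $a_n=\sqrt n$, $b_n=0$, which generates the orthonormal Hermite polynomials for the standard Gaussian probability measure, and checks orthonormality with Gauss quadrature:

```python
import numpy as np

# Orthonormal polynomials from the three-term recurrence
# x p_n = a_{n+1} p_{n+1} + b_n p_n + a_n p_{n-1}
# in the special case a_n = sqrt(n), b_n = 0 (standard Gaussian measure).
def orthonormal_polys(x, N):
    a = lambda n: np.sqrt(n)
    p = [np.ones_like(x), x]            # p_0 = 1, p_1 = (x - b_0)/a_1 = x
    for n in range(1, N):
        p.append((x * p[n] - a(n) * p[n - 1]) / a(n + 1))
    return p                            # p_0, ..., p_N

# Gauss quadrature for the weight exp(-x^2/2); normalise the weights
# so that they represent a probability measure.
nodes, w = np.polynomial.hermite_e.hermegauss(40)
w = w / w.sum()

p = orthonormal_polys(nodes, 6)
gram = np.array([[np.sum(w * p[m] * p[n]) for n in range(7)] for m in range(7)])
assert np.allclose(gram, np.eye(7), atol=1e-8)   # orthonormality
```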
(3.1.4) We observed that (3.4) and (3.5) together with an initial condition for the degree zero component completely determine a solution to the recurrence (3.4), (3.5). We can also generate solutions of (3.4) by specifying the initial values for $n=0$ and $n=1$. From now on we let $\{r_n\}_{n=0}^\infty$ be the sequence of polynomials generated by (3.4) subject to the initial conditions $r_0(x)=0$ and $r_1(x)=1/a_1$. Then $r_n$ is a polynomial of degree $n-1$, and (3.5) is not valid. The polynomials $r_n$ are the associated polynomials.
Lemma .
The associated polynomial $r_n$ is given by
$$r_n(x)=\int_{\mathbb{R}}\frac{p_n(x)-p_n(y)}{x-y}\,d\mu(y).$$
Proof.
It suffices to show that the right hand side, denoted temporarily by $s_n(x)$, satisfies the recurrence (3.4) together with the initial conditions. Using Theorem 3.1 for $p_n$ and the definition of $s_n$ we obtain
$$a_{n+1}s_{n+1}(x)+b_ns_n(x)+a_ns_{n-1}(x)=\int_{\mathbb{R}}\frac{x\,p_n(x)-y\,p_n(y)}{x-y}\,d\mu(y)=x\,s_n(x)+\int_{\mathbb{R}}p_n(y)\,d\mu(y).$$
Using Theorem 3.1 again shows that the remaining integral equals $\int_{\mathbb{R}}p_n(y)p_0(y)\,d\mu(y)$, which is zero for $n\geq 1$ and one for $n=0$ by the orthogonality properties. Hence, (3.4) is satisfied. Using $p_0(x)=1$ we find $s_0(x)=0$, and using $p_1(x)=(x-b_0)/a_1$ gives $s_1(x)=1/a_1$. ∎
Considering
$$\int_{\mathbb{R}}\frac{p_n(y)}{x-y}\,d\mu(y)=\int_{\mathbb{R}}\frac{p_n(y)-p_n(x)}{x-y}\,d\mu(y)+p_n(x)\int_{\mathbb{R}}\frac{d\mu(y)}{x-y}=-\big(r_n(x)+S(x)p_n(x)\big)$$
immediately proves the following corollary.
Corollary .
Let $x\in\mathbb{C}\setminus\mathbb{R}$ be fixed. The $n$th coefficient with respect to the orthonormal set $\{p_n\}_{n=0}^\infty$ in $L^2(\mu)$ of $y\mapsto(x-y)^{-1}$ is given by $-\big(r_n(x)+S(x)p_n(x)\big)$. Hence,
$$\sum_{n=0}^{\infty}\big|r_n(x)+S(x)p_n(x)\big|^2\leq\int_{\mathbb{R}}\frac{d\mu(y)}{|x-y|^2}=\frac{\Im S(x)}{\Im x}.$$
The inequality follows from the Bessel inequality. If the set $\{p_n\}_{n=0}^\infty$ is an orthonormal basis of $L^2(\mu)$ then we have equality by Parseval's identity.
(3.1.5) Since $\{r_n\}$ is another solution to (3.4), multiplying (3.4) by $r_n(x)$ and (3.4) for $r_n$ by $p_n(x)$, subtracting leads to
(3.6) $$a_{n+1}\big(p_{n+1}(x)r_n(x)-p_n(x)r_{n+1}(x)\big)=a_n\big(p_n(x)r_{n-1}(x)-p_{n-1}(x)r_n(x)\big)$$
for $n\geq 1$. Iterating (3.6), we see that the Wronskian, or Casorati determinant,
(3.7) $$[p,r](x)=a_{n+1}\big(p_{n+1}(x)r_n(x)-p_n(x)r_{n+1}(x)\big)$$
is independent of $n$, and taking $n=0$ gives $[p,r](x)=a_1\big(p_1(x)r_0(x)-p_0(x)r_1(x)\big)=-1$. This also shows that $\{p_n\}$ and $\{r_n\}$ are linearly independent solutions to (3.4).
On the other hand, replacing $r_n(x)$ by $p_n(y)$ and summing, we get the Christoffel–Darboux formula
(3.8) $$\sum_{k=0}^{n}p_k(x)p_k(y)=a_{n+1}\,\frac{p_{n+1}(x)p_n(y)-p_n(x)p_{n+1}(y)}{x-y}.$$
The case $x=y$ is obtained by letting $y\to x$ in the right hand side. This gives
$$\sum_{k=0}^{n}p_k(x)^2=a_{n+1}\big(p_{n+1}'(x)p_n(x)-p_n'(x)p_{n+1}(x)\big).$$
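The Christoffel–Darboux formula can be verified numerically in the same classical special case $a_n=\sqrt n$, $b_n=0$ (orthonormal Hermite polynomials); the following sketch is an added illustration, not part of the original notes:

```python
import numpy as np

# Evaluate p_0, ..., p_N at a point x via the three-term recurrence
# with a_n = sqrt(n), b_n = 0 (orthonormal Hermite polynomials).
def polys_at(x, N):
    p = [1.0, x]
    for n in range(1, N):
        p.append((x * p[n] - np.sqrt(n) * p[n - 1]) / np.sqrt(n + 1))
    return p

x, y, n = 0.3, -0.7, 5
px, py = polys_at(x, n + 1), polys_at(y, n + 1)

# Christoffel-Darboux: sum_{k=0}^n p_k(x) p_k(y)
#   = a_{n+1} (p_{n+1}(x) p_n(y) - p_n(x) p_{n+1}(y)) / (x - y).
lhs = sum(px[k] * py[k] for k in range(n + 1))
rhs = np.sqrt(n + 1) * (px[n + 1] * py[n] - px[n] * py[n + 1]) / (x - y)
assert np.isclose(lhs, rhs)
```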
3.2. Moment problems
(3.2.1) The moment problem consists of the following two questions:
1. Given a sequence $\{m_n\}_{n=0}^\infty$ of real numbers, does there exist a positive Borel measure $\mu$ on $\mathbb{R}$ such that $m_n=\int_{\mathbb{R}}x^n\,d\mu(x)$?
2. If the answer to question 1 is yes, is the measure $\mu$ obtained unique?
Note that we can assume without loss of generality that $m_0=1$. This is always assumed.
(3.2.2) In case $\operatorname{supp}(\mu)$ is required to be contained in a finite interval we speak of the Hausdorff moment problem (1920). In case $\operatorname{supp}(\mu)$ is to be contained in $[0,\infty)$ we speak of the Stieltjes moment problem (1894). Finally, without a condition on the support, we speak of the Hamburger moment problem (1922). Here, a moment problem is always a Hamburger moment problem. The answer to question 1 can be given completely in terms of positivity requirements on matrices composed of the $m_n$'s, see Akhiezer [1], Shohat and Tamarkin [19], Simon [20], Stieltjes [21], Stone [22]. In these notes we only discuss an answer to question 2, see §3.4.
In case the answer to question 2 is affirmative, we speak of a determinate moment problem and otherwise of an indeterminate moment problem. So for an indeterminate moment problem we have a convex set of probability measures on $\mathbb{R}$ solving the same moment problem. The fact that this may happen was first observed by Stieltjes [21]. The Hausdorff moment problem is always determinate, as follows from 3.1 and the Stieltjes–Perron inversion formula, see Proposition 3.1.
For a nice overview of the early history of the moment problem, see Kjeldsen [11].
3.3. Jacobi operators
(3.3.1) A tridiagonal matrix of the form
$$J=\begin{pmatrix} b_0 & a_1 & 0 & \\ a_1 & b_1 & a_2 & \ddots \\ 0 & a_2 & b_2 & \ddots \\ & \ddots & \ddots & \ddots \end{pmatrix}$$
is a Jacobi operator, or an infinite Jacobi matrix, if $a_n>0$ and $b_n\in\mathbb{R}$. If $a_N=0$ for some $N$, the Jacobi matrix splits as the direct sum of two Jacobi matrices, of which the first is an $N\times N$ matrix.
We consider $J$ as an operator defined on the Hilbert space $\ell^2(\mathbb{Z}_{\geq 0})$, see Example 2.1. So with respect to the standard orthonormal basis $\{e_n\}_{n=0}^\infty$ of $\ell^2(\mathbb{Z}_{\geq 0})$ the Jacobi operator is defined as
(3.9) $$J\,e_n=a_{n+1}e_{n+1}+b_ne_n+a_ne_{n-1},\qquad n\geq 1,\qquad J\,e_0=a_1e_1+b_0e_0.$$
Note the similarity with Theorem 3.1. So to each probability measure $\mu$ on $\mathbb{R}$ with finite moments we associate a Jacobi operator on the Hilbert space $\ell^2(\mathbb{Z}_{\geq 0})$ from the three-term recurrence relation for the corresponding orthonormal polynomials. However, some care is necessary, since (3.9) might not define a bounded operator on $\ell^2(\mathbb{Z}_{\geq 0})$.
From (3.9) we extend $J$ to an operator defined on $\mathcal{D}$, the set of finite linear combinations of the elements of the orthonormal basis $\{e_n\}_{n=0}^\infty$ of $\ell^2(\mathbb{Z}_{\geq 0})$. The linear subspace $\mathcal{D}$ is dense in $\ell^2(\mathbb{Z}_{\geq 0})$. From (3.9) it follows that
(3.10) $$\langle Jv,w\rangle=\langle v,Jw\rangle,\qquad v,w\in\mathcal{D},$$
so that $(J,\mathcal{D})$ is a densely defined symmetric operator, see 2.3. In particular, if $J$ is bounded on $\mathcal{D}$, then $J$ extends to a bounded self-adjoint operator on $\ell^2(\mathbb{Z}_{\geq 0})$ by continuity.
Lemma .
(3.3.2) $e_n=P_n(J)e_0$ for some polynomial $P_n$ of degree $n$ with real coefficients. In particular, $e_0$ is a cyclic vector for the action of $J$, i.e. the linear subspace spanned by $\{J^ne_0\}_{n=0}^\infty$ is dense in $\ell^2(\mathbb{Z}_{\geq 0})$.
Proof.
It suffices to show that $e_n=P_n(J)e_0$ for some polynomial $P_n$ of degree $n$, which follows easily from (3.9), in the form $a_{n+1}e_{n+1}=(J-b_n)e_n-a_ne_{n-1}$, using induction on $n$. ∎
Lemma .
(3.3.3) If the sequences $\{a_n\}_{n=1}^\infty$ and $\{b_n\}_{n=0}^\infty$ are bounded, say $|a_n|,|b_n|\leq M$, then $J$ extends to a bounded self-adjoint operator with $\|J\|\leq 3M$. On the other hand, if $J$ is bounded, then the sequences $\{a_n\}_{n=1}^\infty$ and $\{b_n\}_{n=0}^\infty$ are bounded.
Proof.
If $\{a_n\}$, $\{b_n\}$ are bounded, then, with $v=\sum_nv_ne_n$, $w=\sum_nw_ne_n\in\mathcal{D}$,
$$|\langle Jv,w\rangle|\leq\sum_n\Big(|a_{n+1}|\,|v_n|\,|w_{n+1}|+|b_n|\,|v_n|\,|w_n|+|a_n|\,|v_n|\,|w_{n-1}|\Big)\leq 3M\,\|v\|\,\|w\|$$
by the Cauchy–Schwarz inequality, so that $\|J\|\leq 3M$.
Let , with , then each summand can be written as