Jack function

In mathematics, the Jack function is a generalization of the Jack polynomial, introduced by Henry Jack. The Jack polynomial is a homogeneous, symmetric polynomial which generalizes the Schur and zonal polynomials, and is in turn generalized by the Heckman–Opdam polynomials and Macdonald polynomials.

Definition

The Jack function [math]\displaystyle{ J_\kappa^{(\alpha )}(x_1,x_2,\ldots,x_m) }[/math] of an integer partition [math]\displaystyle{ \kappa }[/math], parameter [math]\displaystyle{ \alpha }[/math], and arguments [math]\displaystyle{ x_1,x_2,\ldots,x_m }[/math] can be recursively defined as follows:

For m=1
[math]\displaystyle{ J_{k}^{(\alpha )}(x_1)=x_1^k(1+\alpha)\cdots (1+(k-1)\alpha) }[/math]
For m>1
[math]\displaystyle{ J_\kappa^{(\alpha )}(x_1,x_2,\ldots,x_m)=\sum_\mu J_\mu^{(\alpha )}(x_1,x_2,\ldots,x_{m-1}) x_m^{|\kappa /\mu|}\beta_{\kappa \mu}, }[/math]

where the summation is over all partitions [math]\displaystyle{ \mu }[/math] such that the skew partition [math]\displaystyle{ \kappa/\mu }[/math] is a horizontal strip, namely

[math]\displaystyle{ \kappa_1\ge\mu_1\ge\kappa_2\ge\mu_2\ge\cdots\ge\kappa_{m-1}\ge\mu_{m-1}\ge\kappa_m }[/math] ([math]\displaystyle{ \mu_m }[/math] must be zero or otherwise [math]\displaystyle{ J_\mu(x_1,\ldots,x_{m-1})=0 }[/math]) and
[math]\displaystyle{ \beta_{\kappa\mu}=\frac{ \prod_{(i,j)\in \kappa} B_{\kappa\mu}^\kappa(i,j) }{ \prod_{(i,j)\in \mu} B_{\kappa\mu}^\mu(i,j) }, }[/math]

where [math]\displaystyle{ B_{\kappa\mu}^\nu(i,j) }[/math] equals [math]\displaystyle{ \nu_j'-i+\alpha(\nu_i-j+1) }[/math] if [math]\displaystyle{ \kappa_j'=\mu_j' }[/math] and [math]\displaystyle{ \nu_j'-i+1+\alpha(\nu_i-j) }[/math] otherwise; here [math]\displaystyle{ \nu }[/math] stands for [math]\displaystyle{ \kappa }[/math] in the numerator and for [math]\displaystyle{ \mu }[/math] in the denominator. The expressions [math]\displaystyle{ \kappa' }[/math] and [math]\displaystyle{ \mu' }[/math] refer to the conjugate partitions of [math]\displaystyle{ \kappa }[/math] and [math]\displaystyle{ \mu }[/math], respectively. The notation [math]\displaystyle{ (i,j)\in\kappa }[/math] means that the product is taken over all coordinates [math]\displaystyle{ (i,j) }[/math] of boxes in the Young diagram of the partition [math]\displaystyle{ \kappa }[/math].
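
For example, take [math]\displaystyle{ \kappa=(2) }[/math] and m = 2. The admissible partitions are [math]\displaystyle{ \mu=(2),(1),\emptyset }[/math], with coefficients [math]\displaystyle{ \beta_{(2)(2)}=1 }[/math], [math]\displaystyle{ \beta_{(2)(1)}=2 }[/math] and [math]\displaystyle{ \beta_{(2)\emptyset}=1+\alpha }[/math]. Combined with the base cases [math]\displaystyle{ J_{(2)}^{(\alpha)}(x_1)=(1+\alpha)x_1^2 }[/math], [math]\displaystyle{ J_{(1)}^{(\alpha)}(x_1)=x_1 }[/math] and [math]\displaystyle{ J_\emptyset^{(\alpha)}=1 }[/math], the recursion yields

[math]\displaystyle{ J_{(2)}^{(\alpha)}(x_1,x_2)=(1+\alpha)x_1^2+2x_1x_2+(1+\alpha)x_2^2. }[/math]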

Combinatorial formula

In 1997, F. Knop and S. Sahi [1] gave a purely combinatorial formula for the Jack polynomials [math]\displaystyle{ J_\lambda^{(\alpha )} }[/math] in n variables:

[math]\displaystyle{ J_\lambda^{(\alpha )} = \sum_{T} d_T(\alpha) \prod_{s \in T} x_{T(s)}. }[/math]

The sum is taken over all admissible tableaux of shape [math]\displaystyle{ \lambda, }[/math] and

[math]\displaystyle{ d_T(\alpha) = \prod_{s \in T \text{ critical}} d_\lambda(\alpha)(s) }[/math]

with

[math]\displaystyle{ d_\lambda(\alpha)(s) = \alpha(a_\lambda(s) +1) + (l_\lambda(s) + 1), }[/math]

where [math]\displaystyle{ a_\lambda(s) }[/math] and [math]\displaystyle{ l_\lambda(s) }[/math] denote the arm and leg length of the box s in [math]\displaystyle{ \lambda }[/math].

An admissible tableau of shape [math]\displaystyle{ \lambda }[/math] is a filling of the Young diagram [math]\displaystyle{ \lambda }[/math] with numbers 1,2,…,n such that for any box (i,j) in the tableau,

  • [math]\displaystyle{ T(i,j) \neq T(i',j) }[/math] whenever [math]\displaystyle{ i'\gt i. }[/math]
  • [math]\displaystyle{ T(i,j) \neq T(i',j-1) }[/math] whenever [math]\displaystyle{ j\gt 1 }[/math] and [math]\displaystyle{ i'\lt i. }[/math]

A box [math]\displaystyle{ s = (i,j) \in \lambda }[/math] is critical for the tableau T if [math]\displaystyle{ j \gt 1 }[/math] and [math]\displaystyle{ T(i,j)=T(i,j-1). }[/math]
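
For example, for [math]\displaystyle{ \lambda=(2) }[/math] and n = 2 all four fillings are admissible, and the box (1,2) is critical precisely when [math]\displaystyle{ T(1,2)=T(1,1) }[/math], contributing [math]\displaystyle{ d_{(2)}(\alpha)((1,2))=\alpha+1 }[/math]. The fillings 11 and 22 thus carry weight [math]\displaystyle{ 1+\alpha }[/math] while 12 and 21 carry weight 1, giving

[math]\displaystyle{ J_{(2)}^{(\alpha)}(x_1,x_2)=(1+\alpha)x_1^2+2x_1x_2+(1+\alpha)x_2^2, }[/math]

in agreement with the recursive definition.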

This result can be seen as a special case of the more general combinatorial formula for Macdonald polynomials.

C normalization

The Jack functions form an orthogonal basis in a space of symmetric polynomials, with inner product:

[math]\displaystyle{ \langle f,g\rangle = \int_{[0,2\pi]^n} f \left (e^{i\theta_1},\ldots,e^{i\theta_n} \right ) \overline{g \left (e^{i\theta_1},\ldots,e^{i\theta_n} \right )} \prod_{1\le j\lt k\le n} \left |e^{i\theta_j}-e^{i\theta_k} \right |^{\frac{2}{\alpha}} d\theta_1\cdots d\theta_n }[/math]

This orthogonality property is unaffected by normalization. The normalization defined above is typically referred to as the J normalization. The C normalization is defined as

[math]\displaystyle{ C_\kappa^{(\alpha)}(x_1,\ldots,x_n) = \frac{\alpha^{|\kappa|}(|\kappa|)!}{j_\kappa} J_\kappa^{(\alpha)}(x_1,\ldots,x_n), }[/math]

where

[math]\displaystyle{ j_\kappa=\prod_{(i,j)\in \kappa} \left (\kappa_j'-i+\alpha \left (\kappa_i-j+1 \right ) \right ) \left (\kappa_j'-i+1+\alpha \left (\kappa_i-j \right ) \right ). }[/math]

For [math]\displaystyle{ \alpha=2, C_\kappa^{(2)}(x_1,\ldots,x_n) }[/math] is often denoted by [math]\displaystyle{ C_\kappa(x_1,\ldots,x_n) }[/math] and called the zonal polynomial.
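
For example, for [math]\displaystyle{ \kappa=(2) }[/math] one computes [math]\displaystyle{ j_{(2)}=2\alpha^2(1+\alpha) }[/math] and [math]\displaystyle{ \alpha^{|\kappa|}(|\kappa|)!=2\alpha^2 }[/math], so that

[math]\displaystyle{ C_{(2)}^{(\alpha)}(x_1,x_2)=\frac{J_{(2)}^{(\alpha)}(x_1,x_2)}{1+\alpha}=x_1^2+x_2^2+\frac{2}{1+\alpha}x_1x_2. }[/math]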

P normalization

The P normalization is given by the identity [math]\displaystyle{ J_\lambda = H'_\lambda P_\lambda }[/math], where

[math]\displaystyle{ H'_\lambda = \prod_{s\in \lambda} (\alpha a_\lambda(s) + l_\lambda(s) + 1) }[/math]

where [math]\displaystyle{ a_\lambda }[/math] and [math]\displaystyle{ l_\lambda }[/math] denote the arm and leg length, respectively. Therefore, for [math]\displaystyle{ \alpha=1, P_\lambda }[/math] is the usual Schur function, since [math]\displaystyle{ H'_\lambda }[/math] then reduces to the product of the hook lengths of [math]\displaystyle{ \lambda }[/math].
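
For example, for [math]\displaystyle{ \lambda=(2) }[/math] the two boxes have arm lengths 1 and 0 and leg lengths 0 and 0, so [math]\displaystyle{ H'_{(2)}=(\alpha+1)\cdot 1 }[/math] and

[math]\displaystyle{ P_{(2)}^{(\alpha)}(x_1,x_2)=\frac{J_{(2)}^{(\alpha)}(x_1,x_2)}{1+\alpha}=x_1^2+x_2^2+\frac{2}{1+\alpha}x_1x_2 }[/math]

(for this particular partition the C and P normalizations happen to coincide).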

Similarly to Schur polynomials, [math]\displaystyle{ P_\lambda }[/math] can be expressed as a sum over Young tableaux. However, one needs to add an extra weight to each tableau that depends on the parameter [math]\displaystyle{ \alpha }[/math].

Thus, a formula [2] for the Jack function [math]\displaystyle{ P_\lambda }[/math] is given by

[math]\displaystyle{ P_\lambda = \sum_{T} \psi_T(\alpha) \prod_{s \in \lambda} x_{T(s)} }[/math]

where the sum is taken over all semistandard Young tableaux of shape [math]\displaystyle{ \lambda }[/math], and [math]\displaystyle{ T(s) }[/math] denotes the entry in box s of T.

The weight [math]\displaystyle{ \psi_T(\alpha) }[/math] can be defined in the following fashion: Each tableau T of shape [math]\displaystyle{ \lambda }[/math] can be interpreted as a sequence of partitions

[math]\displaystyle{ \emptyset = \nu_0 \to \nu_1 \to \cdots \to \nu_n = \lambda, }[/math]

where [math]\displaystyle{ \nu_i/\nu_{i-1} }[/math] is the skew shape formed by the boxes of T containing the entry i. Then

[math]\displaystyle{ \psi_T(\alpha) = \prod_{i=1}^{n} \psi_{\nu_i/\nu_{i-1}}(\alpha) }[/math]

where

[math]\displaystyle{ \psi_{\lambda/\mu}(\alpha) = \prod_{s \in R_{\lambda/\mu}-C_{\lambda/\mu} } \frac{(\alpha a_\mu(s) + l_\mu(s) +1)}{(\alpha a_\mu(s) + l_\mu(s) + \alpha)} \frac{(\alpha a_\lambda(s) + l_\lambda(s) + \alpha)}{(\alpha a_\lambda(s) + l_\lambda(s) +1)} }[/math]

and the product is taken only over all boxes s in [math]\displaystyle{ \lambda }[/math] such that s has a box from [math]\displaystyle{ \lambda/\mu }[/math] in the same row, but not in the same column.
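
For example, for [math]\displaystyle{ \lambda=(2) }[/math] in two variables the semistandard fillings are 11, 12 and 22. The fillings 11 and 22 have weight 1, while for 12 the box (1,1) lies in [math]\displaystyle{ R_{(2)/(1)}-C_{(2)/(1)} }[/math] and contributes [math]\displaystyle{ \psi_{(2)/(1)}(\alpha)=\frac{2}{1+\alpha} }[/math], recovering the expansion of [math]\displaystyle{ P_{(2)}^{(\alpha)} }[/math] given above.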

Connection with the Schur polynomial

When [math]\displaystyle{ \alpha=1 }[/math], the Jack function is a scalar multiple of the Schur polynomial:

[math]\displaystyle{ J^{(1)}_\kappa(x_1,x_2,\ldots,x_n) = H_\kappa s_\kappa(x_1,x_2,\ldots,x_n), }[/math]

where

[math]\displaystyle{ H_\kappa=\prod_{(i,j)\in\kappa} h_\kappa(i,j)= \prod_{(i,j)\in\kappa} (\kappa_i+\kappa_j'-i-j+1) }[/math]

is the product of all hook lengths of [math]\displaystyle{ \kappa }[/math].
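
For example, the hook lengths of [math]\displaystyle{ \kappa=(2) }[/math] are 2 and 1, so [math]\displaystyle{ H_{(2)}=2 }[/math] and

[math]\displaystyle{ J^{(1)}_{(2)}(x_1,x_2)=2s_{(2)}(x_1,x_2)=2x_1^2+2x_1x_2+2x_2^2, }[/math]

which matches the recursive computation above at [math]\displaystyle{ \alpha=1 }[/math].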

Properties

If the partition has more parts than the number of variables, then the Jack function is 0:

[math]\displaystyle{ J_\kappa^{(\alpha )}(x_1,x_2,\ldots,x_m)=0, \mbox{ if }\kappa_{m+1}\gt 0. }[/math]

Matrix argument

In some texts, especially in random matrix theory, authors have found it more convenient to use a matrix argument in the Jack function. The connection is simple. If [math]\displaystyle{ X }[/math] is a matrix with eigenvalues [math]\displaystyle{ x_1,x_2,\ldots,x_m }[/math], then

[math]\displaystyle{ J_\kappa^{(\alpha )}(X)=J_\kappa^{(\alpha )}(x_1,x_2,\ldots,x_m). }[/math]
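
As a minimal illustration (a Python sketch, not part of the original text), one can evaluate a Jack function with matrix argument through the eigenvalues. The sketch hard-codes the expansion of [math]\displaystyle{ J_{(2)}^{(\alpha)} }[/math] in two variables derived above; the function names are illustrative, not a standard library API.

 import numpy as np

 # J_{(2)}^{(alpha)}(x1, x2) = (1 + alpha)(x1^2 + x2^2) + 2 x1 x2,
 # the expansion obtained from the recursive definition above.
 def jack_2(alpha, x1, x2):
     return (1 + alpha) * (x1**2 + x2**2) + 2 * x1 * x2

 # Matrix argument: evaluate at the eigenvalues of a symmetric matrix X.
 def jack_2_matrix(alpha, X):
     x1, x2 = np.linalg.eigvalsh(X)
     return jack_2(alpha, x1, x2)

 X = np.array([[2.0, 1.0], [1.0, 3.0]])
 print(jack_2_matrix(1.0, X))  # approx. 40.0, i.e. 2*s_(2) at the eigenvalues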

References

1. Knop, Friedrich; Sahi, Siddhartha (1997). "A recursion and a combinatorial formula for Jack polynomials". Inventiones Mathematicae 128 (1): 9–22.
2. Macdonald, I. G. (1995). Symmetric Functions and Hall Polynomials (2nd ed.). Oxford University Press.