An SOS counterexample to an inequality of symmetric functions

This document verifies that a certain symmetric polynomial can be written as a sum of squares. This provides a counterexample to a conjecture relating nonnegativity to the dominance (or majorization) partial order on partitions; see https://arxiv.org/abs/1909.00081 for details. Our symmetry-adapted SDP returned a numerical matrix with floating-point entries. Because we had reduced the problem size using the representation theory of the symmetric group, together with knowledge of the real zeros of our polynomial, we were able to round the floating-point entries to exact rational numbers.
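The rounding step can be illustrated with Python's `fractions` module. This is a generic sketch, not the actual rounding code we used (which also exploited the structure of the problem); the helper name and tolerance are ours:

```python
from fractions import Fraction

def round_entry(x, max_denominator=1000):
    # Nearest rational to the float x with denominator at most max_denominator.
    return Fraction(x).limit_denominator(max_denominator)

# A floating-point SDP entry close to 1/3 rounds to the exact rational.
approx = round_entry(0.3333333341)
```

When the true answer is known to be rational with small denominator, `limit_denominator` recovers it exactly as long as the numerical error is smaller than the gap to the next candidate fraction.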

We load this matrix \(A\), whose entries are rational numbers, together with a permutation matrix \(P\) that reorders its rows and columns. Such a reordering is sometimes necessary to avoid zero pivots in the naive \(LDL^T\) decomposition applied below.

[13]:
P = load("counterexample-P.sobj")
A = load("counterexample-A.sobj")
B = P.T * A * P
C = B.change_ring(QQ)

Below we apply \(LDL^T\) decomposition to extract pairs \((d,\ell)\), where \(d\) is a pivot entry and \(\ell\) is a vector. These pairs express

\[C = \sum d \cdot \ell \ell^T.\]
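For readers without Sage, the same elimination can be sketched in plain Python with exact rational arithmetic. This toy version uses a hypothetical \(2\times 2\) matrix, not the actual \(C\), and also checks the quadratic-form identity \(x^T C x = \sum d\,(\ell \cdot x)^2\) that underlies the sum-of-squares construction later in the notebook:

```python
from fractions import Fraction

def naive_ldlt(A):
    # Peel rank-one terms pivot * l * l^T off A until the residual vanishes.
    n = len(A)
    A = [[Fraction(v) for v in row] for row in A]
    terms = []
    for i in range(n):
        pivot = A[i][i]
        if pivot != 0:
            l = [A[j][i] / pivot for j in range(n)]
            terms.append((pivot, l))
            A = [[A[j][k] - pivot * l[j] * l[k] for k in range(n)]
                 for j in range(n)]
    return terms

C_toy = [[4, 2], [2, 3]]
terms_toy = naive_ldlt(C_toy)

# Reconstruct C_toy = sum d * l * l^T from the extracted pairs.
recon = [[sum(d * l[j] * l[k] for d, l in terms_toy) for k in range(2)]
         for j in range(2)]

# Quadratic-form identity: x^T C x = sum d * (l . x)^2 for any x.
x = [Fraction(1), Fraction(-2)]
quad = sum(C_toy[j][k] * x[j] * x[k] for j in range(2) for k in range(2))
sos = sum(d * sum(li * xi for li, xi in zip(l, x))**2 for d, l in terms_toy)
```

Because the arithmetic is exact, the reconstruction is an identity rather than a numerical approximation, which is the point of rounding the SDP output to rationals in the first place.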

[14]:
def naiveLDLt(A):
    # Naive LDL^T: repeatedly peel the rank-one term pivot * l * l^T off A.
    # Assumes the (permuted) matrix produces no zero pivot before the
    # residual vanishes.
    N = A.nrows()
    terms = []  # pairs (d, l): pivot entry and column vector
    for i in range(N):
        pivot = A[i, i]
        if pivot != 0:
            l = A.column(i) / pivot
            terms.append((pivot, l))
            A = A - pivot * l.outer_product(l)
    return terms

terms = naiveLDLt(C)

Below we create the vector of monomials of total degree \(8\) in three variables. The permutation matrix \(P\) loaded above reorders this basis to match the reordering of the rows and columns of \(A\).

[15]:
def create_monomials(n, d):
    # All monomials of total degree d in the variables x1, ..., xn.
    varz = [var('x%s' % i) for i in range(1, n + 1)]
    mons = []
    for exps in IntegerVectors(d, n):
        mon = SR(1)  # start from the symbolic constant 1
        for v, e in zip(varz, exps):
            mon *= v^e
        mons.append(mon)
    return mons

m = create_monomials(3,8)
m_shuffled = P.T * vector(m)
show(m_shuffled)
[15]:
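As a sanity check on the basis size: the exponent vectors of total degree \(d\) in \(n\) variables are the compositions of \(d\) into \(n\) nonnegative parts, so there are \(\binom{d+n-1}{n-1}\) of them; for \(n=3\), \(d=8\) that is \(\binom{10}{2}=45\). A plain-Python enumeration of the same exponent vectors (an illustrative helper, not part of the notebook):

```python
from math import comb

def compositions(d, n):
    # All n-tuples of nonnegative integers summing to d,
    # i.e. the exponent vectors of the degree-d monomials in n variables.
    if n == 1:
        yield (d,)
        return
    for first in range(d, -1, -1):
        for rest in compositions(d - first, n - 1):
            yield (first,) + rest

exps = list(compositions(8, 3))
```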

Below we assemble the sum of squares. We first check that every pivot \(d\) is positive. Each vector \(\ell\) then yields the polynomial \(\ell \cdot m(x)\), which we square and add with coefficient \(d\); the resulting polynomial displayed below is therefore a sum of squares with positive coefficients.

[16]:
okay = all(d > 0 for d, l in terms)
show(okay)
[16]:
[17]:
ans = [(d * (l * m_shuffled)^2).expand() for d, l in terms]
res = sum(ans)
show(res.simplify())
[17]:

Below we create the symmetric polynomial \(H_{44} - H_{521}\), evaluated at \(x_1^2, x_2^2, x_3^2\) so that nonnegativity everywhere certifies nonnegativity of the original polynomial on the nonnegative orthant, as explained in https://arxiv.org/abs/1909.00081. Here each term is the complete homogeneous symmetric polynomial \(h_\lambda\) normalized to take the value \(1\) at the all-ones point. The nonnegativity of this difference provides the counterexample to the conjecture.

[18]:
def input_squares(p):
    # Substitute each variable of p by its square, renaming the variables
    # x0, x1, ... to x1, x2, ... to match our monomial basis.
    current_vars = p.args()  # tuple of variables x0, x1, x2, ...
    numvars = len(current_vars)
    new_vars = [var('x%i' % i) for i in range(1, numvars + 1)]
    subz = {vr: new_vars[i]^2 for i, vr in enumerate(current_vars)}
    return p.subs(subz)

def plugin_ones(p):
    # Evaluate p at the all-ones point.
    return p.subs({vr: 1 for vr in p.args()})

def term_normalize(p):
    norm = plugin_ones(p)
    ans = p/norm
    return ans

def create_difference(la,mu,n=3):
    h = SymmetricFunctions(QQ).h()
    # return h_la - h_mu, with squared variables and each term
    # normalized to equal 1 at the all-ones point
    hla = h(la).expand(n)
    hmu = h(mu).expand(n)
    hla = input_squares(hla)
    hmu = input_squares(hmu)
    hla = term_normalize(hla)
    hmu = term_normalize(hmu)
    ans = hla - hmu
    return ans

show(create_difference((4,4),(5,2,1)))
[18]:
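The same difference can be evaluated outside Sage with exact rationals. The sketch below follows the same recipe (complete homogeneous polynomials, squared variables, normalization at the all-ones point); the helper names `h_k`, `normalized_H`, and `difference_at` are ours, not from the notebook. Since the difference is a sum of squares with positive coefficients, it is nonnegative at every real sample point:

```python
from fractions import Fraction
from itertools import combinations_with_replacement
from math import comb, prod

def h_k(k, xs):
    # Complete homogeneous symmetric polynomial h_k evaluated at xs:
    # the sum of all degree-k monomials in the entries of xs.
    n = len(xs)
    return sum(prod(xs[i] for i in combo)
               for combo in combinations_with_replacement(range(n), k))

def normalized_H(la, xs):
    # h_la(xs) / h_la(1, ..., 1); note h_k(1, ..., 1) = C(k+n-1, n-1).
    n = len(xs)
    value = prod(h_k(k, xs) for k in la)
    norm = prod(comb(k + n - 1, n - 1) for k in la)
    return Fraction(value) / norm

def difference_at(point):
    # (H_44 - H_521) evaluated at (x1^2, x2^2, x3^2) for the given point.
    squares = [x * x for x in point]
    return normalized_H((4, 4), squares) - normalized_H((5, 2, 1), squares)

ones = (Fraction(1), Fraction(1), Fraction(1))
sample = (Fraction(1, 2), Fraction(1, 3), Fraction(2))
```

At the all-ones point both normalized terms equal \(1\), so the difference vanishes there; this matches the real zeros of the polynomial exploited in the rounding step.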

Below we verify that our previous sum of squares reproduces the desired polynomial.

[19]:
res - create_difference((4,4),(5,2,1))
[19]:
0