Squared Linear Models
On this page, we provide julia and Maple code to recreate computations mentioned in examples throughout the text.
Ideals of generic squared linear models (Proposition 3.4)
To compute the ideals of generic squared linear models \(X_{d,n}\), we use the Gröbner basis implementation in Oscar.jl. To install Oscar, enter the julia package manager and type add Oscar.
We start with an implementation that works in most of our cases. Notably, we do NOT use the standard function eliminate in Oscar. Instead, the core of our function is the groebner_basis_f4 command, which uses Faugère's F4 algorithm and runs much faster in our examples.
using Oscar
using DataStructures

function squared_model(d, n)
    K = GF(32003)
    S, x, y = polynomial_ring(K, :x => 1:d, :y => 1:n)
    # n generic linear forms in the d variables x
    rand_linear_forms = matrix(S, [x]) * matrix(S, [[S(rand(K)) for j in 1:n] for i in 1:d])
    # eliminate the x-variables from y_i = (i-th linear form)^2 using F4
    elim_linear = ideal([y[i] - rand_linear_forms[i]^2 for i in 1:n])
    linear_GB = groebner_basis_f4(elim_linear, la_option = 44, eliminate = d, info_level = 2)
    linJ = ideal(linear_GB)
    # pass to a graded ring in the remaining variables and tally the minimal generators by degree
    R, p = graded_polynomial_ring(K, :p => 1:n)
    inject = hom(S, R, vcat([R(0) for i in 1:d], p))
    linI = inject(linJ)
    lin_min_gens = minimal_generating_set(linI)
    return counter(Oscar.total_degree.(lin_min_gens))
end
squared_model(4, 6) # compute the model X_{4,6}
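As a plain-julia sanity check (no Oscar needed), one can verify the smallest interesting case \(X_{2,3}\) numerically: three generic linear forms in two variables satisfy \(l_3 = a\,l_1 + b\,l_2\), and the squares \(y_i = l_i(x)^2\) then satisfy the quadric \((y_3 - a^2 y_1 - b^2 y_2)^2 = 4a^2b^2\,y_1 y_2\). The coefficients below are arbitrary choices for illustration.

```julia
# Numeric check of the quadric relation on X_{2,3}; coefficients are illustrative.
l1(x) = 2x[1] + 3x[2]
l2(x) = 5x[1] - x[2]
a, b = 1.5, -0.5
l3(x) = a * l1(x) + b * l2(x)        # l3 is a generic combination of l1, l2

function residual(x)
    y1, y2, y3 = l1(x)^2, l2(x)^2, l3(x)^2
    return (y3 - a^2 * y1 - b^2 * y2)^2 - 4 * a^2 * b^2 * y1 * y2
end

# the residual vanishes (up to rounding) at every point of the model
maximum(abs(residual(randn(2))) for _ in 1:100)
```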
A similar function is used to compare the results for squared linear models with models defined by generic quadrics.
function quadratic_model(d, n)
    K = GF(32003)
    S, x, y = polynomial_ring(K, :x => 1:d, :y => 1:n)
    # n generic quadratic forms in the d variables x
    rand_quad_forms = matrix(S, [collect(Oscar.monomials(sum(x)^2))]) * matrix(S, [[S(rand(K)) for j in 1:n] for i in 1:binomial(d + 1, 2)])
    # eliminate the x-variables from y_i = (i-th quadric) using F4
    elim_quad = ideal([y[i] - rand_quad_forms[i] for i in 1:n])
    quad_GB = groebner_basis_f4(elim_quad, la_option = 44, eliminate = d, info_level = 2)
    quadJ = ideal(quad_GB)
    # pass to a graded ring in the remaining variables and tally the minimal generators by degree
    R, p = graded_polynomial_ring(K, :p => 1:n)
    inject = hom(S, R, vcat([R(0) for i in 1:d], p))
    quadI = inject(quadJ)
    quad_min_gens = minimal_generating_set(quadI)
    return counter(Oscar.total_degree.(quad_min_gens))
end
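For orientation, the row count binomial(d + 1, 2) in rand_quad_forms is the number of degree-two monomials in d variables, i.e. the number of coefficients of a generic quadric. A quick check in base julia (the helper name is ours, for illustration):

```julia
# Number of coefficients of a generic quadric in d variables,
# matching the row count of the coefficient matrix in rand_quad_forms.
num_quadric_coeffs(d) = binomial(d + 1, 2)
num_quadric_coeffs(4)  # -> 10
num_quadric_coeffs(6)  # -> 21
```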
The functions above failed to compute the cases \((6,8),(6,9),(6,10)\). The reason is that, a priori, groebner_basis_f4 fails to recognize the homogeneity of the input. This issue can be resolved by performing the elimination step by step as follows. The implementation was proposed to us by Benjamin Hollering.
using Oscar
using DataStructures
K = GF(32003)
d = 6
n = 12
S, t, x = graded_polynomial_ring(K, :t => (1:d), :x => (1:n), weights = vcat([1 for i in 1:d], [2 for i in 1:n]))
# make the elimination ideal for the linear forms and use F4 to eliminate
# observe that when we run this, F4 reports "homogeneous input? 0", which means it does not recognize the input as homogeneous
rand_linear_forms = t*matrix(S, [[S(rand(K)) for j in 1:n] for i in 1:d])
elim_linear = ideal([x[i]-(rand_linear_forms[i])^2 for i in 1:n]);
linear_GB = groebner_basis_f4(elim_linear, nr_thrds = 32, la_option = 44, eliminate = d, info_level = 2);
linJ = ideal(linear_GB);
counter(total_degree.(linear_GB))
# make a smaller ring with only the remaining variables
# we will now sequentially eliminate one variable at a time
R, x = graded_polynomial_ring(K, :x => (1:n))
inject = hom(S, R, vcat([0 for i in 1:d], gens(R)))
linI = inject(linJ);
# note that now it says homogeneous input? 1
# this is because linI is homogeneous in the standard grading which F4 can recognize
linear_GB11 = groebner_basis_f4(linI, nr_thrds = 32, la_option = 44, eliminate = 1, info_level = 2);
linI11 = ideal(linear_GB11);
counter(total_degree.(linear_GB11))
linear_GB10 = groebner_basis_f4(linI11, nr_thrds = 32, la_option = 44, eliminate = 2, info_level = 2);
linI10 = ideal(linear_GB10);
counter(total_degree.(linear_GB10))
linear_GB9 = groebner_basis_f4(linI10, max_nr_pairs = 10000, nr_thrds = 32, la_option = 44, eliminate = 3, info_level = 2);
linI9 = ideal(linear_GB9);
counter(total_degree.(linear_GB9))
linear_GB8 = groebner_basis_f4(linI9, max_nr_pairs = 10000, nr_thrds = 32, la_option = 44, eliminate = 4, info_level = 2);
linI8 = ideal(linear_GB8);
counter(total_degree.(linear_GB8))
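The grading used above can be illustrated in plain julia (no Oscar needed): each generator \(x_j - l_j(t)^2\) of elim_linear is inhomogeneous in the standard grading, but under the scaling \(t \mapsto s\,t\), \(x \mapsto s^2 x\) it is multiplied by \(s^2\), i.e. it is homogeneous for the weights \((1,\dots,1,2,\dots,2)\). The coefficients below are illustrative.

```julia
# Weighted homogeneity of a generator x_j - l_j(t)^2:
# scaling t -> s*t and x -> s^2*x multiplies the generator by s^2.
l(t) = 2t[1] - t[2]        # a sample linear form (illustrative coefficients)
g(t, x) = x - l(t)^2       # one generator of the elimination ideal
s, t, x = 3.0, [1.0, -1.0], 0.5
g(s .* t, s^2 * x) == s^2 * g(t, x)  # -> true
```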
We ran this on an MPI MiS compute server and obtained the results shown in Table 1 of the paper.
Likelihood Degenerations (Example 5.4)
Example 5.4 was computed using the Maple code in the following file: likelihood_degenerations.mw.
Project page created: 24/05/2025.
Project contributors: Hannah Friedman, Bernd Sturmfels, Maximilian Wiesmann.
Corresponding author of this page: Maximilian Wiesmann, wiesmann@mis.mpg.de.
Code written by: Hannah Friedman, Benjamin Hollering, Bernd Sturmfels, Maximilian Wiesmann.
Software used: Julia (Version 1.11.3), Oscar (Version 1.3.1), HomotopyContinuation.jl (Version 2.14.0), Maple 2023.
System setup used: MacBook Pro with macOS Monterey 12.5, Processor Apple M1 Pro, Memory 16 GB LPDDR5. MPI-MiS compute server with 4 x 6-Core Intel Xeon Gold 6128 at 3.4 GHz, 3072 GB RAM and NVidia RTX 4000 SFF Ada 20GB.
License for code of this project page: MIT License (https://spdx.org/licenses/MIT.html).
License for all other content of this project page (text, images, …): CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/).
Last updated 24/05/2025.