Fermat User's Guide

for
Linux and Mac OSX
32 bit and 64 bit versions



Robert H. Lewis
© 1992, 1995 - 2011 by Robert H. Lewis. All rights reserved.


http://home.bway.net/lewis/


July 25, 2011      v3.9.99, 4.19 (64 bit)


Appendix 4. Summary of Matrix and Polynomial Features

Matrix and polynomial computations are the heart of Fermat. Here is a concise annotated list of relevant features and commands. Most are explained further earlier in the manual.

Polynomials
Recall that most built-in functions allow call-by-reference for parameters. For example, time and compare Terms(x) and Terms(^x) for a large x.
Adjoin polynomial variable: use the command &J (Fermat interrogates user for name), or &(J=t) (imperative to adjoin t).
Cancel (delete) polynomial variable: similar to above, &(J=−t).
Quotient rings and fields: One may choose to mod out by some of the polynomial variables to create quotient rings (or fields). The chapter on "Polymods" describes how. In principle, any monic polynomial may be modded out, say t^n + c1 t^(n−1) + ….
However, Fermat is best at the case where the quotient ring becomes a field (note well, Q[t, u, …]/<p, q, …> is a field, Z[t, u, …]/<p, q, …> is not). Specifically, suppose the polynomial variables have been attached in the order t, u, v, …. Begin by modding out a monic irreducible polynomial p(t) such that F1 = Z[t]/<p> is an integral domain, and its field of fractions is Q[t]/<p>. Then, if desired, mod out by a monic polynomial q(u,t) such that F2 = F1[u]/<q> is an integral domain, and continue in this manner, always creating an integral domain and, by the same stroke, its field of fractions.
You must tell Fermat that a field will result, and it is your responsibility to check this. Do this at each step by adding a comma and 1, as &(P = t^n + c1 t^(n−1) + …, 1). You may append a list of primes q such that the modder p(t) remains irreducible mod q. For example, &(P = t^3 + t^2 + t + 2, 1:151, 167, 191, 839, 859, 863, 907, 911, 991). If you omit the list Fermat will compute it for you. (This takes a while, so if you save the session to a file, Fermat will include the list in the saved file and just reload it next time.) Fermat then computes a second list of auxiliary primes: modulo these primes the modding polynomial has a root. Both types of primes are used to speed up g.c.d. computations. If Fermat cannot find enough of either type, it will tell you and instruct you how to get more, using the commands &A and &B.
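For instance, here is a minimal sketch of the whole setup (the modder t^2+t+1 and the arithmetic that follows are purely illustrative; no prime list is supplied, so Fermat will compute one):
   &(J=t)
   &(P = t^2+t+1, 1)
   a := (t+1)^3;
After the &P command, arithmetic such as the last line is reduced modulo t^2+t+1.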
Laurent Polynomials: a polynomial with negative exponents. To allow this, activate the toggle switch &l. All of the variables you have created up to that point will be converted; monomial factors are removed from the denominator of a quolynomial and moved to the numerator with negative exponents. For example, 1/(t^2+2t) will become t^(−1)/(t+2).
Basic arithmetic: +, −, *, /, |, \, ^. p/q creates a rational function ("quolynomial") unless q divides evenly into p. In any event, GCD(p,q) is computed and divided into p and q. p|q means p mod q and p\q means p div q, i.e. divide and truncate.
Mod and div may be "pseudo" results: if c is the leading coefficient of q and is not invertible, there exist polynomials r and y such that c^k p = y q + r, where 0 ≤ k ≤ deg(p) + deg(q). Fermat chooses the smallest possible k. p|q is r and p\q is y.
Remquot = remainder and quotient. Syntax is Remquot(x,y,q). q gets the quotient of dividing x by y and the function returns the remainder. Almost twice as fast as calling mod and div separately. Pseudo-remainder and quotient will be returned if necessary.
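As an illustration of these three operations (the polynomials are arbitrary), assuming the variable t has been adjoined:
   p := t^3 + 2*t + 1;
   q := t - 1;
   r := p | q;
   y := p \ q;
   s := Remquot(p, q, w);
r and s should both be the remainder of p upon division by q, and y and w the quotient; Remquot simply produces both in one call.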
Deg. degree of a polynomial (or quolynomial). There are three variants: (1) Deg(x) computes the highest exponent in x (any expression) of the highest precedence polynomial variable. (2) Deg(x, i) computes the highest exponent in x of the ith polynomial variable, where the highest level variable has the ordinal 1. (3) Deg(x, t) computes the highest exponent in x of the polynomial variable t. In modular mode, Deg returns an actual integer, not reduced modulo the modulus. For a quolynomial, it returns the degree of the numerator.
Codeg, "codegree," just like Deg except it computes the lowest exponent.
# = polynomial evaluation. x#y replaces the highest precedence variable everywhere in x with y. x#(u = y) replaces the variable u with y. x and y could be quolynomials.
There is a fast shortcut form of evaluation called total evaluation. To evaluate x at every variable, use the syntax x#(v1, v2, …), where all the vi are numbers. There must be a number corresponding to each polynomial variable, in the precedence order - highest precedence (last attached) first.
Fermat also allows evaluation of polynomials at a square matrix. The syntax is x#[y]. The highest precedence polynomial variable in x is replaced with the matrix [y] and the resulting expression simplified. [y] can contain entries that are quolynomials.
The following more general syntax is allowed. Suppose there are five poly vars, e, d, c, b, a in that order (e last and highest). Then q#(d = w,x,y) will replace each d in q with w, each c with x, each b with y. e and a are untouched. Further, w, x, and y can be arbitrary quolynomials.
Similarly if [t] is an array, q # (d = [t]) replaces the variables from d on down with the entries of [t] in column major order until [t] is used up. It is an error if [t] has too many entries.
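A few hedged examples of these evaluation forms, assuming variables t and u have been adjoined (u last attached, hence higher) and [m] is an existing square matrix:
   p := u^2 + 3*t*u + 1;
   a := p#(u = t+2);
   b := p#(5, 7);
   c := p#[m];
The first substitutes t+2 for u, the second is a total evaluation at u=5, t=7 (highest precedence first), and the third evaluates p at the matrix [m].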
Numb = is the argument a number (as opposed to a polynomial or quolynomial)? If so the result is 1, else it's 0.
Numer = numerator of a quolynomial. In rational mode, also gives the numerator of a rational number.
Denom = denominator of a quolynomial. In rational mode, also gives the denominator of a rational number.
Coef in a polynomial (or quolynomial).
(1) Suppose first that only one polynomial variable t has been adjoined. Then the syntax of use is either Coef(x) or Coef(x, n). x can be any expression. n, the desired exponent, must be a number. In the first form, without n, the leading coefficient is computed. Coef(x, n) returns the coefficient of t^n in the polynomial x. If x is a quolynomial, the denominator is ignored.
To replace a coefficient, use Rcoef(x, n) := y; the coefficient of t^n in x will be set to the expression y. y must be a number.
If there are several polynomial variables, the coefficient desired is specified by listing the exponents of the variables in precedence order, such as Coef(x, 1, 2).
In Rcoef(x, …) := y, x must be a polynomial and … must be suitable for y.
(2) If t is any polynomial variable, Coef(x, t, n) computes the coefficient in x of t^n, as if t were the highest level variable. In other words, if x were written out as a sum of monomial terms, find all the terms containing exactly t^n and factor out the t^n. x must be a variable name, either a scalar name or an array reference x[i]. This form cannot be used on the left of an assignment. Especially useful for n=0.
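For example (syntax illustration only), with two variables u (higher) and t, and p := 3*u^2*t + 5*u + 7:
   c1 := Coef(p, 2, 1);
   c2 := Coef(p, t, 0);
   Rcoef(p, 1, 0) := 9;
c1 picks out the coefficient of u^2 t (namely 3), c2 the part of p containing no t at all (5u + 7), and the last line replaces the coefficient of u (i.e. u^1 t^0) with 9.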
Killden(x) sets the denominator of x to 1 - it actually changes x.
Lterm(x) = leading term of polynomial x.
Lcoef(x) = leading numerical coefficient of polynomial x.
Flcoef(x) = leading field-element coefficient of polynomial x, when a quotient field has been created.
Lmon(z) = leading monomial of z. Lmon has an optional second argument. First, Lmon(z) returns the leading monomial of z. This is always an authentic monomial. If you think of a multivariate polynomial in nested (recursive) form, Lmon recursively finds the first term in each level and throws away all the other terms. Lmon(z, x), where x is a poly var, stops the recursion at the level of x. For example, suppose u is the higher variable and t the lower variable. Let z = (t^2 + 3t + 5)u^2 + 5t*u + 7t − 2. Then Lmon(z) is t^2*u^2 but Lmon(z, t) is (t^2 + 3t + 5)u^2.
Mcoef(x, m1, m2) = monomially-oriented coefficient. m1 (and m2 if present) must evaluate to a monomial; their numerical coefficient is irrelevant. Factor out m1 from all the terms in x that contain it exactly, and return the factor. m2 (optional) specifies variables that must occur with exponent 0 (as they cannot be included in m1!).
Mfact(x, m) = monomially-oriented factor. m must evaluate to a monomial; its numerical coefficient is irrelevant. Factor out m from all the terms in x that it divides into and return the factor. m may not contain any negative exponents.
Mono(a,b) to compare monomials; returns true iff a ≤ b. Numerical coefficients are irrelevant. Individual variables are ranked by order of their creation, latest is largest. Monomials are ranked first by length = #variables, i.e. all univariates < all bivariates < all trivariates, ... etc. Within equal #vars, ranking is by the subset of vars. Within the same subset, ranking is by the exponent arrays. See "Sort" below. This is the order that is used internally by Fermat for certain algorithms.
Mons(x, [a]) dumps the monomials of x into a linear array [a]. If you want each monomial stripped of its numerical coefficient, use Mons(x, [a], 1).
PRoot(x) returns the pth root of x, when x ∈ the ground ring, a field of characteristic p.
Splice: Splice(s,c,m) multiplies c by x^m and "tacks it in front" of s; it adds them if there is overlap. c becomes 1, s gets the answer. See Split. Split is basically the dual of Splice. Invoke with Split(w,n,v) or Split(w,n). w and v are existing scalar variables (including entries in an array). n is an integer. w becomes w mod x^n, where x is the highest variable. v (if present, optional) becomes w \ x^n.
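A small sketch of the pair (the polynomial is arbitrary), assuming t is the highest variable:
   w := t^5 + 4*t^3 + 2*t + 7;
   v := 0;
   Split(w, 3, v);
   Splice(w, v, 3);
After the Split, w holds 2t + 7 (the part below t^3) and v holds t^2 + 4; the Splice then reassembles the original polynomial into w, and v becomes 1.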
Terms(x) = if x were written out as a sum of monomial terms, the number of such terms. x must be a variable name, either a scalar name or an array reference x[i]. Does not count field variables (see Termsf below).
Vars(x) = number of variables that actually occur in x.
Termsf(x) = counts the total number of terms when polymodding to create a field, counting the field variables. If s = (t+1)x + t − 1 where t is a field variable, Termsf(s) = 4 while Terms(s) = 2.
&_l: Toggle switch to display each polynomial as a list of monomials.
&c: Enable full Hensel checking. This one is quite technical. If this flag is on (the default) Fermat will double-check the results of certain Hensel Lemma GCD computations. Leaving it off will slightly speed up GCD but introduce an extremely minute probability of GCD giving the wrong answer. See &O next below.
&O: Toggle switch to disable the Hensel and Chinese Remainder Theorem (CRT) methods for polynomial gcd. This is a good idea only when you are working over Zp for small primes, say p < 30, and the degrees in each variable are fairly small. For such small primes, the Hensel and CRT methods often fail, for "lack of room."
" = derivative of a polynomial (or quolynomial) with respect to the highest precedence variable (the last attached), as in x" (see also Deriv below).
GCD = greatest common divisor, as in GCD(x,y), x and y can be numbers or polynomials, but not quolynomials. If numbers, the result is always positive, except that GCD(0,0) = 0. GCD(0,x) = |x| if x is a number not 0, and is 1 if x is a polynomial. If they are both polynomials, the result always has positive leading coefficient. If in rational mode, the result has all coefficients integral and content 1. In cases where the ground ring is a field, the result has leading coefficient 1.
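For instance, GCD(12, 18) returns 6, and with t adjoined over Z, GCD(t^2 − 1, t^2 + 2t + 1) should return t + 1 (positive leading coefficient, content 1, as described above).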
Content = content of a polynomial; i.e., the GCD of all its coefficients. Content(x,i) is content w.r.t. ith variable.
Numcon = numerical content, the GCD of all its numerical coefficients.
Var. Followed by an expression that evaluates to a positive integer, as in Var(i) returns the ith polynomial variable, counting the highest (last created) as 1.
Height = the difference between the levels (ordinals) of the polynomial variables in an expression.
Level = the ordinal position of the highest precedence polynomial variable in an expression.
Raise = Two forms: Raise(x) and Raise(x, i). In the first, replace each polynomial variable with the variable one level higher. The second form allows the user to provide an expression i that evaluates to a positive integer, and raises x that many levels, if possible.
Lower = The inverse of Raise. See above.
Divides(n,m) = does n divide evenly into m?
PDivides(n,m) = does n divide evenly into m? When n and m are multivariable polynomials, this procedure attempts to answer quickly by substituting each polynomial variable except the highest with a constant. PDivides says true iff these reduced polynomials divide evenly. The constants are chosen with care. Nonetheless, this is a probabilistic algorithm. An answer of False is always correct, but an answer of True has an infinitesimal probability of being wrong.
SDivide(n,m) = does n divide evenly into m? "S" stands for "space-saving". To save space m is cannibalized. If n does divide m, m becomes the quotient; if not, m becomes 0. m must be a variable name, not an expression. Not probabilistic. Use when you are virtually certain that n divides m and you want the quotient in the fastest way.
Powermod(x,n,m) computes x^n mod m. x must be a polynomial or integer, n must be a positive integer, and m must be a monic polynomial or positive integer. You may omit the third argument if you are in modular mode or polymodding. Note that n often needs to be very large. In modular mode, this is a problem. The solution is that n must be either a constant or must involve only variables that have been created in rational mode while under "Selective Mode Conversion."
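For instance (values arbitrary), with t adjoined over the integers:
   a := Powermod(t+1, 100, t^2+1);
   b := Powermod(3, 1000, 101);
The first reduces (t+1)^100 modulo the monic polynomial t^2+1; the second is ordinary modular exponentiation of integers.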
Deriv(x, t, n) returns the nth derivative of x with respect to t, where t is one of the polynomial variables.
Poly[a]: With x = the highest polynomial variable, Poly[a] takes an array OF NUMBERS and yields Sigma<i=1,n>[ a[i] x^(n+1−i) ], for a standard array. For a sparse array, only the first entry in each row is used.
Totdeg(x, [a], n) returns the subpolynomial of x of total degree n in the variables listed in [a]. [a] is an existing array. Each entry should be a single polynomial variable, in no particular order. n is an integer. x is a polynomial; Laurent is OK.
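A hedged illustration, with poly vars t and u already adjoined:
   Array a[2];
   a[1] := t;
   a[2] := u;
   p := t^3 + t*u + u^2 + t + 5;
   q := Totdeg(p, [a], 2);
Under the description above, q should be the total-degree-2 part of p in t and u, namely t*u + u^2.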
Factoring Polynomials: Fermat allows the factoring of monic one-variable polynomials over any finite field. The finite field is created by simply being in modular mode over a prime modulus, or by additionally modding out by irreducible polynomials to form a more complex finite field, as described in the section "Polymods." Factoring into irreducibles or square-free polynomials is possible, or polynomials can just be checked for irreducibility.
Factor(poly, [x]) or Factor(poly, [x], level). The factors of poly will be deposited into an array [x] having two columns and as many rows as necessary. (The number of factors (rows) is returned as the value of Factor.) In each row, the first entry is an irreducible polynomial p(t) and the second is the largest exponent e such that p(t)^e divides poly. In the second form, level specifies the subfield to factor over. Examples are given earlier in this manual. It is best for factoring to use as many variables t, u, … as possible in creating the field.
Sqfree is similar to Factor except it produces factors that are square-free only.
Sqfree works for any number of variables and over Q. Also, it works recursively by first extracting the content of its argument and factoring it. Over quotient fields, the product of all the factors in the answer may differ from the argument by an invertible factor.
Irred tells if its argument is irreducible, and, if not, describes the factorization. The syntax is Irred(<poly>) or Irred(<poly>, <level>) ("level" is explained above). The value returned is as follows:
   −1 means can't decide (too many variables, for instance).
   0 means the argument is a number or a field element.
   1 means irreducible.
   n > 1 means the argument is the product of n distinct irreducibles of the same degree.
   x, a poly, means x is a factor of the argument (which is therefore not irreducible).
Fermat uses the algorithms of H. Cohen, "A Course in Computational Algebraic Number Theory," Springer-Verlag, 1993, pp. 123-130.


Matrices

Creation: An n × m matrix is created with the command Array x[n,m]. Access elements in such an array via x[i,j] or via x[k], which returns the kth element in column-major order. To refer to an entire matrix, use the syntax [x].

Sparse Matrices: Sparse matrices are implemented in Fermat. This is an alternative mode of storing the data of the array. In an "ordinary" n × m matrix, nm adjacent spots in memory are allocated. If an array consists of mostly 0's, this is wasteful of space. In a Sparse implementation, only the non-zero entries are stored in a list structure.
A Sparse matrix is created by following the creation command with the keyword "Sparse," as in  Array  x[5,5] Sparse. There is no size limitation in Fermat. An array [x] already created can be converted to Sparse format with the command Sparse [x]. There is no requirement that [x] actually have a certain number of zeros.

Indexing: One has a choice of how to index the first element of an array. The default in Fermat is x[1]. This can be changed by entering the command &a, which switches the initial array index to 0. Entering &a again switches back to 1. Note that this is not a property of any particular array, but of how all arrays are indexed.
Dynamic Allocation of Arrays: Arrays that are no longer needed can be freed to provide space for new arrays. This is done with the cancel command, whose syntax is @[x], or, to free several, @([x],[y],[z]).
Arithmetic: Most of the ordinary arithmetic built-in functions can be applied to arrays. See Appendix 2, last column. For example, [x] + [y] is the sum. 2*[a], or [a]*2, multiplies every component of [a] by 2. [a] + 3 adds 3 to every component of [a], and so forth. [a] := [1] sets an already existing square matrix [a] equal to the identity. [a] := 1 sets every entry to 1. [z] := [x] * [y] is the product. [z] := 1 / [y] is the inverse. Matrix exponentiation (including inverse) is just like scalars (but see Altpower below), such as [z] := [x]^n.
Arithmetical Expressions: Like numerical expressions, such as [z] := [a]*([x] + [y] − [1]).
Parameters: Matrices may be parameters in functions.
Subarrays: Fermat allows subarray expressions. That is, part of an array [c] can be assigned part of an array [a]. For example, [c[1~4,2~6]] := [a] sets rows 1 to 4 and columns 2 to 6 of [c] equal to [a]. This assumes that [a] is declared to be 4 × 5 and [c] is at least 4 × 6. (Here ~ is shift-`). In defining the subarray, if one of the coordinate expressions is left out, the obvious default values are used. For example, if [c] has four rows then [c[,2~6]] := [a] is equivalent to the above. Similarly, one can use expressions like [c[3~,2~6]] := [a] or [c[~4,2~6]] := [a], in which case the default lower row coordinate is the array initial index, 0 or 1.
In subarray assignments, a vector declared to be one-dimensional (like a[5]) is treated as a column vector, i.e., a[5,1].
Subarray cannot be used with Sparse matrices, but Minors can; see below.

Matrix Built-in Functions:

Det is used in several ways to compute a scalar from an array argument. If used by itself on a square matrix, Det is the determinant. Det#([x] = a) returns the number of entries in [x] that equal a. Similarly Det#([x] > a) and Det#([x] < a) compute the number of entries of [x] larger or smaller than a. If any entry is a polynomial, an error results. Det [x] returns the index of the largest element of [x] (in column major order if [x] is a matrix). Det_[x] returns the index of the smallest element of [x]. Det_+[x] returns the index of the smallest nonzero element of [x], or −1 if there is no such element.
Determinant: Fermat uses four basic methods to compute determinant: expansion by minors, Gaussian elimination, Lagrangian interpolation, and reducing modulo n for some n's. The last of these is used for matrices of integers or polynomials with integer coefficients. The actual determinant can be reconstructed from its values modulo n (for a "good" set of n's) by the Chinese Remainder Theorem (see Knuth volume 2). Alternatively, it is often possible to work modulo an easily computed "pseudo determinant" known to bound the actual determinant. Gaussian elimination is applicable in all situations - all one needs is the ability to invert any nonzero element in a matrix. If the matrix is small enough, expansion by minors is faster (see &D.) Gaussian elimination can be nontrivial and even problematical in modular arithmetic over a nonprime modulus, in polynomial rings, and in polynomial rings modulo a polynomial. Fermat has heuristics to guide its choice of method.
When the matrix has all polynomial entries, Fermat has two other methods. It may also compute the determinant with Lagrangian interpolation: constants are substituted for the highest polynomial variable everywhere in the matrix, and so on recursively. The algorithm is probabilistic, with very high probability of success. It is very fast, especially for polynomials in two or more variables.
Nonetheless, if there are many polynomial variables and the matrix is sparse or has a regular pattern of zeros, expansion by minors can be by far the fastest method. Setting the determinant cutoff (with &D) at least as large as the number of rows will force Fermat to do this method. This is true of Sparse matrices as well as "ordinary."
Secondly, Fermat uses the well-known Gauss-Bareiss method (for a matrix of all polynomial entries).
Fermat has heuristics to choose among the methods, but the user may override them and force a particular method. Assuming an m × m matrix of all polynomial entries, if m is more than three and the user has left &D = −1, the default method is Lagrangian interpolation, unless the "mass" of the matrix is very small. The "mass" is estimated by a heuristic and compared to a cutoff. The user can change the cutoff with the command &L. The default is 5000. Therefore, to turn off Lagrangian interpolation, give a very large value (up to 2^31−1). To then choose Gauss-Bareiss, set the command &K = 1. This is a bit confusing, so here is a summary:
To force Gauss-Bareiss, set &K = 1 and &D = 2.
To force Gaussian elimination, set &K = 0 and &D = any d > 0. At the d × d stage, it will switch to expansion by minors.
If m ≥ 4, to force Lagrangian interpolation, set &D = −1 and &L = 1.
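For example, a hedged way to force expansion by minors on a 12 × 12 matrix [x] (per the &D discussion above) is to raise the cutoff to at least the number of rows before computing:
   &(D = 12)
   d := Det([x]);
This assumes the imperative form &(D = 12) is accepted, analogous to &(J=t) versus &J; otherwise use &D and answer the prompt.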
LCM = the least common multiple of all the denominators in a matrix. "Denominators" means those of rational numbers or of expressions like (t^2 + 3t + 1)/17 or 3/(2t). Use this to clear a matrix of its integer denominators. The denominator of 2/(3+2t) is ignored, since you can't clear it by multiplying [x] by any number.
Adjoint = adjoint of a square matrix.
Chpoly = the characteristic polynomial of a square matrix. The syntax of the command and the method used depend on whether the matrix is sparse or "ordinary."
With the ordinary matrix storage scheme, LaGrange interpolation is used when the matrix consists of all numbers. It is to your advantage to clear the matrix of all numerical denominators before invoking Chpoly. To do LaGrange interpolation, Fermat computes the necessary determinants using the Chinese Remainder Theorem. To do so, it must make an initial estimate of the absolute value of the determinant. The estimate is often rather liberal. The determinants in question are simply f(ci), where f is the characteristic polynomial and { ci } is a set of "sample points." The user may be able to supply a better bound on |f(ci)|, so there is a second optional argument to Chpoly, a polynomial g such that |f(t)| ≤ |g(t)| for all t. The syntax is Chpoly([x], g).
With sparse matrices, a clever way to compute characteristic polynomial is the Leverrier-Faddeev method. This method often loses to the standard one, det([x] − λI), but it can be faster for matrices that contain quolynomials. The user may choose the method with the second argument: Chpoly([x], n) selects the standard method when n=0 and the Leverrier-Faddeev when n is any other integer.
As of October 2009, the LaGrange modular determinant coefficients can be dumped to a file, rather than stored in RAM. This can be a big space saving when doing a very large computation. The command is &(_L=1). In other words, if the highest precedence variable is x and lower ones are y, z, …, and if a determinant c_n x^n + c_(n−1) x^(n−1) + … is being computed with LaGrange interpolation, the coefficients c_0, c_1, ..., c_n (which are polynomials in y, z, …) will be dumped to the output file to save RAM.
Minpoly: The "modifed Mills method," a fast probabalistic algorithm that computes the minimal polynomial M(t) of a sparse matrix of integers, or, more precisely, a factor of the minimal polynomial. If one of the roots of M(t) is 0, the associated factor t of M(t) will not show up, but other factors may not show up either. This algorithm is built into Fermat via the command Minpoly. Syntax of use is Minpoly([a], level, bound). [a] is the matrix, which must be Sparse. level = 0, 1, 2,3, 4 is a switch to tell Minpoly how much effort to expend in its basic strategy. Larger levels will take longer, but have a better chance of giving the entire minimal polynomial. bound is an integer at least as big as any coefficient in the minimal polynomial. This argument can be omitted, in which case Fermat will supply an estimate based on the well-known Gershgorn's Theorem.
Repeated calls to Minpoly may return different answers. It may be worthwhile to run it several times and compute the l. c. m. of the answers.
Sumup = add up the elements of an array.
Trace = trace of a matrix.
Altmult. Multiply two matrices using the algorithm of Knuth volume II, p. 481. A big time saver when multiplication in the ring is much slower than addition. Especially good for Polymods (see that chapter). Syntax is Altmult([x],[y]).
Altpower. Uses Altmult to take a matrix [x] to the power n. Syntax is Altpower([x],n).
MPowermod([x],n,m) computes [x]^n mod m. [x] contains only polynomials or integers, n must be a positive integer, and m must be a monic polynomial or positive integer. You may omit the third argument if you are in modular mode or polymodding. Note that n often needs to be very large. In modular mode, this is a problem. The solution is that n must be either a constant or must involve only variables that have been created in rational mode while under "Selective Mode Conversion."
To see the effect of this command, try creating a 20 × 20 matrix of random integers, let n=100 and m=10^8. Compute [x]^n mod m both ways.
Trans = transpose matrix, as in [y] := Trans[x].
STrans = transpose a matrix in place, as in STrans[x].
Diag refers to the diagonal of a matrix, as in Diag[y] := [x]. [x] is considered a linear array. The diagonal of [y] becomes the entries of [x]. If the name [y] does not yet exist, a new square matrix will be created with off-diagonal entries 0. If square matrix [y] of the right size (i.e., rows equal to the number of entries of [x]) does exist then the off-diagonal elements are not changed.
Dually, Diag can be used on the right side of an assignment, as in [y] := Diag[x], which sets [y] equal to a linear array consisting of the diagonal elements of [x]. [x] does not have to be square.
To create a diagonal matrix with all entries equal to a constant, say 1, you can use the easier form [x] := [1], if [x] already exists as a square matrix.
Cols[x] = number of columns of array [x].
Deg = number of elements in an array. Deg[x] = total size of array [x] (rows × columns).
[<] = last computed array. Fermat has a hidden system array. If you type the command [x] + [y], arrays [x] and [y] will be added and, since you didn't provide an assignment of the result, the result will go into the system array. You can later access it by typing, for example, [z] := [z] + [<]. Subarrays cannot be used with [<].
_ = concatenate arrays; glue two arrays together to form a larger one, as in [z] := [x]_[y]. Neither array can be Sparse.
Iszero = is the argument (an array) entirely 0? If so, return 1, else return 0. Syntax: Iszero[x].
Switchrow = Interchange two rows in an array. Syntax: Switchrow([x], n, m).
Switchcol = Interchange two columns in an array. Syntax: Switchcol([x], n, m).
Normalize = convert to a diagonal matrix. The matrix must not be Sparse. If requested, Fermat will return the change of basis matrices used in normalizing. Possible invocations include Normalize([x]) and Normalize([x], [a], [b], [c], [d]). In the second case, matrices [a], [b], [c], and [d] will be returned that satisfy [a]*[x′]*[b] = [x], where [x′] is the original [x], and where [c] = [a]^(−1) and [d] = [b]^(−1). The value returned by Normalize is the rank of [x].
You can omit any of the change of basis matrices. For example, Normalize([x], ,[b], , [d]) and Normalize([x], [a], , [c]). Every comma promises that an argument will eventually follow.
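A hedged sketch: for an existing non-Sparse matrix [x],
   r := Normalize([x], [a], [b]);
should diagonalize [x], return its rank in r, and deposit change of basis matrices in [a] and [b] satisfying [a]*[x′]*[b] = [x] in the notation above (the inverses [c] and [d] are simply not requested here).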
Colreduce = Column reduce a matrix. The matrix may NOT be Sparse. By column manipulations, the argument is converted to a lower triangular matrix. If requested, Fermat will also return the change of basis (or conversion) matrices that it used in normalizing. Possible invocations include Colreduce([x]) and Colreduce([x], [a], [b], [c], [d]). In the second case, matrices [a], [b], [c], and [d] will be returned that satisfy [a]*[x′]*[b] = [x], where [x′] is the original [x], and where [c] = [a]^(−1) and [d] = [b]^(−1). The value returned by Colreduce is the rank of [x]. As with Normalize, you can omit any of the change of basis matrices.
Colreduce cannot be used on sparse arrays. In addition a function Pseudet is implemented. Pseudet([x]) computes a "pseudo-determinant," a nonzero determinant of a maximal rank submatrix. It returns the rank of the matrix and leaves the matrix in diagonal form (so [x] is changed). The product of the diagonal entries is (up to sign) the "pseudo-determinant." The optional form Pseudet([x], [rc]) returns a 2 × rank[x] matrix [rc] specifying the rows (first row of [rc]) and columns (second row of [rc]) that constitute the maximal rank submatrix. A second optional form is Pseudet([x], [rc], k) or Pseudet([x], , k). Integer k specifies the last row/col to pivot on. The entries beyond spot [k,k] are left unchanged.
Rowreduce = Row reduce a matrix. The matrix must be Sparse. Exactly like Colreduce but for sparse arrays and row reduction.
Smith = Put a matrix of integers into Smith normal form. The matrix may be Sparse. This function can only be used in rational mode, and assumes that every entry is an integer. (Any denominator encountered will be ignored, with unpredictable results.) By row and column manipulations, the argument is converted to a diagonal matrix of non-negative integers. Furthermore, each integer on the diagonal divides all the following integers. The set of such integers is an invariant of the matrix.
As with Normalize, you can omit any of the change of basis matrices.
If you do not require any conversion matrices then it is possible to greatly speed up Smith in most cases by working modulo a "pseudo-determinant", a multiple of the gcd of the determinants of all the maximal rank minors (see Kannan and Bachem, SIAM Journal on Computing, vol 8, no. 4, Nov 1979). Do this in Fermat with the command MSmith. For relatively small matrices or sparse matrices, it's faster to forgo the modding out. Fermat will compute the pseudo-determinant if the matrix is Sparse. If you already have a pseudo-determinant pd, use the syntax MSmith([x], pd). (If the matrix is not Sparse, you must use the latter method. Pseudet may be helpful.)
Hermite = Column reduce a matrix of integers. The matrix may be Sparse. This function can only be used in rational mode, and assumes that every entry is an integer. By column manipulations and row permutations, the argument is converted to a lower triangular matrix of integers. All diagonal entries are non-negative. This is often referred to as Hermite normal form.
If requested, Fermat will also return the integer change of basis (or conversion) matrices used in normalizing, exactly as in Smith.
Be aware that if the matrix is "large" and "dense" a horrendous explosion is possible in the intermediate entries, and in the entries of the conversion matrices.
Hampath = a pretty fast algorithm for finding a Hamiltonian path in a graph. The algorithm is from an exercise in a text book by Papadimitriou. Given the n × n adjacency matrix of a simple graph, the program constructs an (n^2+1) × (n^2+1) sparse matrix. Each entry is either 0, 1, or a polynomial variable xi, i = 1, …, n. The graph has a Hamiltonian path iff the term x1*x2*…*xn appears in its determinant. We do not actually compute the determinant. If there is a Hamiltonian path, this method often proves it very quickly. Hampath returns 1 or 0 for path or no path.
The basic invocation is Hampath[w] where [w] is the adjacency matrix, symmetrical with each entry either 0 or 1. However, on any graph of more than 15 or so nodes, it is better to use the alternate form Hampath([w], k), where k is a positive integer. k controls the probabilistic search by terminating a search tree once more than k leaves in the tree are visited with no resolution, and starting over. A reasonable heuristic for k is around 2n^2.
Disclaimers: Hampath does not first check elementary properties of a graph that might easily decide the issue, like connectivity. That's up to the user. I make no claims about efficiency relative to other methods.
Redrowech = the reduced row echelon form, for elementary matrix equations of the form AX = B. (Other Fermat commands do column manipulations as well, which could be used to solve AX=B but take an extra step.) Invoke with Redrowech([a]), where all columns but the last in [a] represent the matrix A and the last represents B (i.e., Redrowech never pivots on the last column.) Alternately, Redrowech([a],[u],[v]) will return in [u] the transition matrix used in normalizing [a]. [v] is [u]^(−1). As in other similar Fermat commands, you can also do Redrowech([a], ,[v]).
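For instance, to solve the 2 × 2 system 2x + y = 5, x + 3y = 10 (a toy illustration), put the coefficients and the right-hand side into one array, with B as the last column:
   Array a[2,3];
   a[1,1] := 2; a[1,2] := 1; a[1,3] := 5;
   a[2,1] := 1; a[2,2] := 3; a[2,3] := 10;
   Redrowech([a]);
After the call [a] is in reduced row echelon form and its last column exhibits the solution (here x = 1, y = 3), possibly up to row scaling depending on the ground ring.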
Minors: extract minors from sparse arrays. The syntax is e.g. [y] := Minors([x],[r],[c]). [x] is an existing sparse array. [r] and [c] are existing ordinary arrays specifying the rows and columns to be extracted. The result is stored in [y], which will be a new sparse array of the right dimensions. [x] is untouched.
FFLU and FFLUC are for fraction-free LU factorization of matrices. See the two articles in the September 1997 SIGSAM Bulletin: "Fraction-free Algorithms for Linear and Polynomial Equations," by Nakos, Turner, and Williams; and "The Turing Factorization of a Rectangular Matrix," by Corless and Jeffrey. See especially Theorem 4, p. 26 of the latter. FFLU is invoked as: FFLU([x], [p], [l], [a], [b]). [x] is the n × m matrix to be factored. Presumably [x] contains integers or polynomials, but this is not enforced. [p] is an n × n diagonal matrix consisting of the pivots used, [p] = diag(p_1, p_2, ..., p_(n−1), 1). [l] is the unit lower triangular matrix, the first factor. [a] (optional) is the n × n permutation matrix of row swaps. [b] (optional) is [a]^(−1). At the end, [x] is in upper triangular form. Let [z] be a copy of the original [x]. If [f] and [g] are the matrices called f1 and f2 in the Corless and Jeffrey article, then at the end one has [f]*[a]*[z] = [l]*[g]*[x]. Note that [f] and [g] are not computed by FFLU; however it is obvious how to get them from [p]. Note also that [p] is not necessarily the diagonal of [x]: if columns of 0s are encountered along the way, [x] will be in row-echelon form and may have 0s on its main diagonal.
FFLUC allows column swaps as well as row swaps. In this way, the size of the pivots can be further reduced. FFLUC is invoked as: FFLUC([x], [p], [l], [a], [b], [c],[d]). As above, [x] is the n × m matrix to be factored. [p], [l], [a], and [b] are the same as above. At the end, [x] is in upper triangular form. [c] and [d] (optional) are permutation matrices coming from column swaps ([d] is [c]^(−1)). Let [z] be a copy of the original [x]. If [f] and [g] are the matrices called f1 and f2 in the Corless and Jeffrey article, then at the end one has [f]*[a]*[z]*[c] = [l]*[g]*[x]. [p], [l], [a], etc. need not be existing matrices when the function is invoked. Matrices of those names with the right size will be created at the end. Saying that [a], [c], etc. are optional above means that they may be omitted, as for example FFLUC([x], [p], [l], [a], , [c]). Note the space to indicate no [b].
Reverse[a] will reverse the elements in Array a[n]. If [a] has one column, this is obvious. Otherwise, the exact behavior depends on whether [a] is a standard array or a sparse array. For standard, each a[i] is swapped with a[n+1−i], where you think of [a] as in column-major order. For sparse [a], the rows are swapped.
Sort: sort an (already existing) array [a] of polynomials, actually monomials. [a] can be either sparse or ordinary. The first column holds the keys. For ordinary arrays, every entry in column 1 must have been assigned a value. 0 is a legal value. For sparse arrays, it is more subtle: every row must have an entry, and the first entry in each row is taken as the key. (Recall that sparse arrays never contain 0). To avoid confusion, the best policy is to have the first column contain the key. As swaps are made, the entire row is swapped. The algorithm is quicksort. Quolynomials are allowed; denominators are ignored.
The order is a monomial order, the one built into Fermat via the Mono command. In comparing a and b, only the leading monomials are compared - the ones you see first when they are displayed. Numerical coefficients are irrelevant. All numbers are considered equal. To illustrate, create a multivariate polynomial w, do Mons(w, [a], 1), then Sort[a]. Syntax: Sort[a].
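Spelling out that illustration (variables t and u assumed adjoined; if Mons requires [a] to exist already, declare Array a[4] first):
   w := 3*t^2*u + t*u + 5*u^2 + t^3;
   Mons(w, [a], 1);
   Sort[a];
[a] then holds the four monomials of w, stripped of their numerical coefficients and arranged in the Mono order described above.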
Sparse Access Loops: There is a need for a way to work efficiently with sparse arrays. For example, suppose you have a sparse array of 60000 rows and 50000 columns with only 10 or so entries in each row (this is quite realistic). Suppose you wanted to add up all the entries. Naively, one could write something like:
for i = 1,60000 do for j = 1,50000 do sum := sum + x[i,j] od od
But this will do 3,000,000,000 additions, almost all of which are adding 0! This is a preposterous waste of time. The solution is "sparse column access loops" for sparse arrays. The syntax is, continuing the example above,
for i = 1,60000 do for j = [x]i do sum := sum + x[i,j] od od.
"for j = [x]i do" means find the ith row of [x] and let j run down it - of course encountering only the entries actually there! So j takes on whatever the column indices are in which x[i,j] ≠ 0. [x] must be an existing sparse array, and i must have a value suitable for [x] at the start of the loop. More generally, one may use the syntax: for j = [x]i,k do ... . Here i and k both refer to rows of the sparse matrix [x]. At the start of the loop, all nonzero column coords in both rows are found. Then as the loop proceeds, j runs through those values in order. Any number of row indices is allowed. There is no analogous procedure for "sparse row loops" due to the way Fermat stores sparse matrices. If necessary, transpose the matrix.
Dixon Resultant Technique: To help compute the Dixon resultant, a feature has been added to Det. It may be used when a system of equations exhibiting symmetry leads to a determinant that is a polynomial in one variable, say t, over Z. Set the flags to select LaGrange interpolation. The syntax is Det([m], dr, power1, power2, exp). [m] is the matrix in question (must be Sparse).
Note: the second degree, power2, was added in February 2008.
Type 1: the determinant will be of the form dr * f(t)^exp, and of degree power1. dr is a known divisor of the determinant, a "spurious factor." Knowing dr, exp, and power1 enables Fermat to interpolate for f very efficiently. Type 2: the determinant will be of the form dr * f(t^exp), and of degree power1. Signal this type by setting exp to be negative; i.e. enter −5 instead of 5. exp must be either odd or a power of 2. power2 is the degree of the determinant in the second highest variable. Leave this field blank if there is no second variable, or you don't know the degree. For an introduction to the Dixon method, see: Lewis, R. H. and P. F. Stiller, "Solving the recognition problem for six lines using the Dixon resultant," Mathematics and Computers in Simulation 49 (1999) p. 205-219.
As of March 2007, Det([m], dr, power1, power2, exp) will work recursively if more than one variable is present in array [m]. Note, however, that the exp field is ignored if [m] is not Sparse.
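A purely illustrative call (all of the numbers here are hypothetical): if the determinant of the Sparse matrix [m] is known to be of Type 1 with spurious factor 2, degree 36 in t, degree 10 in the second highest variable, and exp = 3, one would write
   d := Det([m], 2, 36, 10, 3);
For Type 2 (determinant of the form dr * f(t^exp)), the last argument would instead be entered as −3.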


Appendix 5. Finite Fields GF(2^8) and GF(2^16)

As of version 3.4.7 Fermat implements the finite field GF(2^8) in an efficient way. The 256 elements are represented as bit strings, simply the integers 0 - 255 in one byte in the obvious way. These integers are mapped in 1-1 fashion to the AES-Rijndael implementation of this field, Z/2[t]/<t^8+t^4+t^3+t+1> (see The Design of Rijndael: AES - The Advanced Encryption Standard, Information Security and Cryptography, by Joan Daemen and Vincent Rijmen.) The mapping is: f in Z/2[t]/<t^8+t^4+t^3+t+1> becomes f#(t=2); i.e. f evaluated at t=2.
Elements are added by exclusive-or. They are multiplied and inverted by table lookup.
The field GF(2^16) is built on top of GF(2^8) and is likewise just the first 65536 bit strings 0 - 65535. The first (lower) byte is GF(2^8). Elements are added by exclusive-or. They are inverted by table lookup and multiplied by a table of logarithms. This is slightly slower than the table lookup of GF(2^8).
To make these fields the ground ring in Fermat, simply do &p followed by 256 or 65536.
The problem for the user (speaking from my own experience) will be forgetting that there is no useful homomorphism from Z to these fields. One must be scrupulous in using &_m and _(...) to suppress modular arithmetic!
Example:
BAD:  for i = 1, m do
        if n = i+1 then ....   ;; i+1 is NOT 1 more than i (!)
GOOD: for i = 1, m do
        if n = _(i+1) then ...

BAD:  rc := 1 + Sigma<j=1,n> [ Deg(za,j)*exp[j] ]
GOOD: rc := 1 + _Sigma<j=1,n> [ Deg(za,j)*exp[j] ]

Technically, over any modular ground ring, one should use the &_m or _(...) as above, but I at least usually use large primes p in Z/p, maybe 44449, so no harm results - usually. But over GF(2^8) and GF(2^16), one needs constant vigilance. For example, 2 is not 2 (!)
Remember that modular is automatically suppressed in for-loops, exponents, and array indexes. So these should be OK:
for i = 1, 2n do...
x := y^(i+2);
s := c[2*n+1];

It is fine to attach poly vars on top of these ground fields. However, do not do &P on top of that. I have not thoroughly checked that.


Appendix 6. New Features Added Between June 2005 and July 2011

In roughly chronological order. These features are all described earlier in this manual too. The new parts are emphasized here.


&_s: Suppress/don't suppress display of long polynomials.
&_t: Toggle switch to turn on/off a certain fast probabilistic algorithm to test if one multivariate polynomial divides another over ground ring Z. Rarely, this technique can fail, in which case you will see a "Fermat error" about "number in trial_poly_divide". Then turn it off.
&_G: sort the heap garbage. This can be added to the user's functions periodically during memory intensive polynomial calculations. A noticeable speedup occurs when used between repetitions of an intensive calculation.
&v: List all current variables. ... Only about the first 150 lines of a large polynomial are shown, unless &_s has been set.
To help compute the Dixon resultant, a feature has been added to Det. It may be used when a system of equations exhibiting symmetry leads to a determinant that is a polynomial in one variable, say t, over Z. Set the flags to select LaGrange interpolation. The syntax is Det([m], dr, power1, power2, exp). [m] is the matrix in question (must be Sparse). …
The new part is the second degree, power2.
As of March 2007, Det([m], dr, power1, power2, exp) will work recursively if more than one variable is present in array [m]. Note, however, that the exp field is ignored if [m] is not Sparse.
STrans = transpose a matrix in place, as in STrans[x]. Much faster than Trans.
Isprime(n) = is n prime? 1 means n is prime, else it returns the smallest prime factor. n can be up to 2^63−1. The algorithm is elementary, so it's slow beyond 2^50.
Using &J adjoins new variables "above" the previous ones. However, as of January 2009, it is possible to adjoin a polynomial variable "at the bottom." So if, say, x and y exist, y later or "above" x, one can do &(J > z), which will insert z as the lowest variable (below x) rather than the highest. You cannot cancel from the bottom.
Lcoef = the leading coefficient of a polynomial.
Nlcoef = the leading numerical coefficient of a polynomial. Unlike Lcoef, always returns a number.
Ntcoef = the trailing numerical coefficient of a polynomial. That number is always an actual coefficient, so can never be 0.
Zncoef = the "last" numerical coefficient of a polynomial. It will be 0 if there is no constant term.
Pivot Strategies: As of January 2009 there are options for the heuristics that direct the pivot choice in the normalization of matrices. This can have a large effect on time and space, though often it does not. The heuristic is set with the command &u or &(u = val). val is an integer from 0 to 5. The size or mass of a potential pivot can be measured by just its number of terms (called term# below), or by one of two mass heuristics (called mass0 and mass1 below).
Setting val =
0 is the default previous heuristic, which selects the least massive entry by the original mass0 heuristic.
1 selects the 'lightest' entry, where weight = term# + the sum of term# over the entire row the entry is in.
2 selects the 'lightest' entry, where weight = mass0 + the sum of term# over the entire row the entry is in.
3 selects the 'lightest' entry, where weight = mass1 + the sum of term# over the entire row the entry is in.
4 selects the 'lightest' entry where weight = term#.
5 is like 3 but also counts the weight of the column an entry is in.
Termsf(x) = counts the total number of terms when polymodding to create a field, counting the field variables. If s = (t+1)x + t − 1 where t is a field variable, Termsf(s) = 4 while Terms(s) = 2.

As of October 2009, the LaGrange modular determinant coefficients can be dumped to a file, rather than stored in RAM. This can be a big space saving when doing a very large computation. The command is &(_L=1). In other words, if the highest precedence variable is x and lower ones are y, z, …, and if a determinant c_n x^n + c_(n−1) x^(n−1) + … is being computed with LaGrange interpolation, the coefficients c_0, c_1, ..., c_n (which are polynomials in y, z, …) will be dumped to the output file to save RAM.

As a time- and space-saving aid, one can add the &o when saving, as in
!!(&o, 'q := ', q, ';'). Without the &o, q is duplicated in the course of expression evaluation. That might be a big waste of time or space.

The display of elapsed time changed in 2010. When timing is enabled, two numbers are displayed, called "Elapsed CPU time" and "Elapsed real time". CPU time is just the CPU time used by Fermat. This is what has been displayed by Fermat in most previous versions. However, the number is meaningful only up to about an hour. For much longer times, the value shown is meaningless. Elapsed real time is wall clock time, just as it sounds. If the elapsed real time is more than 5 seconds, then it is also displayed.
Sort: sort an (already existing) array [a] of polynomials, actually monomials. [a] can be either sparse or ordinary. The first column holds the keys. For ordinary arrays, every entry in column 1 must have been assigned a value. 0 is a legal value. For sparse arrays, it is more subtle: every row must have an entry, and the first entry in each row is taken as the key. (Recall that sparse arrays never contain 0). To avoid confusion, the best policy is to have the first column contain the key. As swaps are made, the entire row is swapped. The algorithm is quicksort. Quolynomials are allowed; denominators are ignored.
The order is a monomial order, the one built into Fermat via the Mono command. In comparing a and b, only the leading monomials are compared - the ones you see first when they are displayed. Numerical coefficients are irrelevant. All numbers are considered equal. To illustrate, create a multivariate polynomial w, do Mons(w, [a], 1), then Sort[a]. Syntax: Sort[a].
Mono(a,b) to compare monomials; returns true iff a ≤ b. Numerical coefficients are irrelevant. Individual variables are ranked by order of their creation, latest is largest. Monomials are ranked first by length = #variables, i.e. all univariates < all bivariates < all trivariates, ... etc. Within equal #vars, ranking is by the subset of vars. Within the same subset, ranking is by the exponent arrays. See "Sort" above. This is the order that is used internally by Fermat for certain algorithms.
Splice: Splice(s,c,m) multiplies c by x^m and "tacks it in front" of s; it adds them if there is overlap. c becomes 1, s gets the answer. See Split.
Split is basically the dual of Splice. Invoke with Split(w,n,v) or Split(w,n). w and v are existing scalar variables (including entries in an array). n is an integer. w becomes w mod x^n, where x is the highest variable. v (if present, optional) becomes w \ x^n.
Time displays the time and date. Visible on startup.
Vars(x) = number of variables that actually occur in x.
SDet = "Space-saving determinant." Space is saved when computing over ground ring Z using LaGrange interpolation and the Chinese Remainder Theorem. Fermat has aways used the the CRT algorithm from Knuth volume 2 on page 277 (exercise 7). The advantage is one can do everything "on the fly." The disadvantage is that if one will need, say, 50 primes, then every determinant modulo the 50 primes is stored until the end, when the answer is computed over Z by combining them. That might need too much space. Instead, SDet implements formulas 7-9 on page 270 of Knuth.
The disadvantage is it can't be done on the fly: one needs to know in advance how many primes will be needed. The user must (over)estimate this number.
Input: integer and a square matrix of polynomials. Output: determinant. Call: SDet(n, [m]). n = how many primes will be needed. SDet is not guaranteed to work with the more sophisticated options of Det, i.e. Det([m4], r, d1, d2).
Toot: sound the system beep.
As of 2010, a new arithmetic command has been added for when a function is in Integer mode: x :* y to set x := x*y; NB: this only works in Integer.
Reverse[a] will reverse the elements in Array a[n]. If [a] has one column, this is obvious. Otherwise, the exact behavior depends on whether [a] is a standard array or a sparse array. For standard, each a[i] is swapped with a[n+1−i], where you think of [a] as in column-major order. For sparse [a], the rows are swapped.
Poly[a]: With x = the highest polynomial variable, Poly[a] takes an array of numbers and yields Sigma<i=1,n>[ a[i] x^(n+1−i) ]. At least that is what happens on a standard array. For a sparse array, only the first entry in each row is used. So again, if [a] has one column Poly produces what you'd think.
400,000 pointers have been set aside for the array of arrays, up from the previous 2000. It is now possible to assign array of arrays pointers, as %[1] := %[2]. This is good for swapping data. One can also do [c] : = [%[1]].


