Try these on your computer algebra system!
These tests are part of
this test suite for computer algebra systems.
Note well: Everything here is for the Linux, Mac OS X, and new (November 2011) Windows versions. Those versions use basically identical C code.
Older, obsolete Pascal-based versions are much slower.
Also note that almost all of these timings date to 2005, and many go back to 2002. Obviously, today's machines are at least three times as fast, and many are five
times as fast. I hope to revise the tests sometime in 2012.
With OS X (Macintosh) versions before 10.4 [now obsolete, of course], I often notice wide variations in timing. This seems to depend on how many other applications you are running (especially Classic),
the number of processors on your machine, the amount of RAM, and the size of the L2 cache. Time these tests on a clean reboot. (Classic is no longer available in 10.4 and
later.)
The test is to verify that res evaluated at q1=s1, q2=s2, q3=s3, and q4=s4 is 0. Second, and
easier, the test is to verify that res1 evaluated at q1=ss1, q2=ss2, q3=ss3, and q4=ss4 is 0. This is an actual problem that came up in image analysis. (Lewis, Robert H. and Peter F. Stiller, "Solving the recognition problem for six lines using the Dixon resultant," Mathematics and Computers in Simulation 49 (1999) 205-219.)
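For readers trying this in a general-purpose system, here is a minimal sketch of the zero-check itself in Python/SymPy. Only the names res, q1...q4, and s1...s4 come from the description above; the expressions in the sketch are tiny hypothetical stand-ins so that it runs, not the actual problem data, which must be read from the downloadable files.

    # Sketch of the zero-test with tiny stand-ins; in the real test, res and
    # s1..s4 are the large expressions from the downloadable problem files.
    from sympy import symbols, simplify

    q1, q2, q3, q4, t = symbols('q1 q2 q3 q4 t')

    # Hypothetical stand-ins (NOT the actual problem data):
    res = q1*q4 - q2*q3
    s1, s2, s3, s4 = t, t**2, 1, t

    # The test: res must vanish identically after the substitution.
    value = res.subs({q1: s1, q2: s2, q3: s3, q4: s4})
    print(simplify(value) == 0)   # expect True for the actual res and s1..s4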
On my 400 mhz G3 Mac the former took 2 minutes 43 seconds and 200 meg of RAM (but see the updates below). The latter
took 14 seconds and 22 meg of RAM. Both are done over the ground ring of integers.
Update: With version 2.6.4 of Fermat (Jan 20, 2002), the times are reduced to 137 seconds and
12 seconds. [These times are on a Mac and will be faster still if the user has enough RAM to increase …]
Update: Version 3.0 of Fermat (July 15, 2002) does it in 105 seconds and 11.5 secs.
Update: Version 3.1 of Fermat (April 23, 2003) does it in 96 seconds and 10.5 secs.
(These times are on a now obsolete 400mhz G3 computer. On a 1.6 ghz Dell Pentium 4, the time corresponding to 96 seconds is 47.2.)
Update: Version 3.3 of Fermat (April 30, 2004) does it in 75 seconds and 9.5 secs.
(These times are on a now obsolete 400mhz G3 computer. On a 1.6 ghz Dell Pentium 4 running Windows Fermat, the time corresponding to 75 seconds is 44. The same
machine running Linux Fermat takes 31 secs.)
Update: Version 3.5.6 of Fermat (June 1, 2005): on the 1.6 ghz Dell Pentium 4 running Linux Fermat, the first test takes 28.3 secs. I will
no longer bother to run the second, easier test.
Update: Version 6.17 of Fermat (November 11, 2017): On a three-year-old Mac mini, the first test takes 5.7 seconds. On a server made of three-year-old Dell
PCs, it takes 4.26 seconds. I will no longer bother to run the second, easier test.
Let me know how it goes on your system!
Update: I am now aware (May 2005) of three other systems that succeeded in doing these tests.
The first is GiNaC. Richard B. Kreckel writes (24 February 2001) that on a Pentium III, 800 mhz, with 768 meg of RAM running Linux,
the first test takes about 270 minutes and the second test takes 16 minutes. Both times are about
100 times longer than Fermat's, and GiNaC uses more than three times the RAM.
The second is Reduce. Thanks to Tony Hearn, I ran this test on an 867 mhz G4 Mac in May 2003. Reduce verifies that res evaluated at (q1=s1, q2=s2, q3=s3, q4=s4) is 0 in very good time. The time depends
on the amount of RAM that the user gives to Reduce at startup with the -k option. Specifying 210 meg (which is what Fermat uses) yields a time of 77 seconds. Specifying 600 meg yields a time of 67 seconds.
Fermat 3.1 takes 60 seconds. Fermat 3.6.4 would take 35.9 seconds.
The third is Magma. According to an experienced user of Magma, the way to set up the test in Magma is:
Update September 2015: The Fermat time was 6.0 seconds. This is on a 2013 Mac mini. The Magma time is now 12.2 seconds. (Magma V2.21-4)
Each matrix is given by a series of lines in the following format. If the i-th line is
4,228,1,830,-1,857,1,859,2
then there are 4 entries in row i: 1 in spot [i,228], -1 in spot [i,830], 1 in spot [i,857], and 2 in spot [i,859]. The indexing is 0-based, so the first entry in the matrix is [0,0].
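As a concrete illustration of this format (not part of the original test files), here is a short Python sketch that parses such lines into a dictionary of nonzero entries.

    # Sketch of a parser for the row format described above:
    # "count, col1, val1, col2, val2, ...", with 0-based indexing.
    def parse_rows(lines):
        """Return {(row, col): value} for the nonzero entries."""
        entries = {}
        for i, line in enumerate(lines):
            fields = [int(x) for x in line.strip().split(',')]
            count, rest = fields[0], fields[1:]
            assert len(rest) == 2 * count, "malformed line"
            for col, val in zip(rest[0::2], rest[1::2]):
                entries[(i, col)] = val
        return entries

    # The example line above, taken as row 0:
    print(parse_rows(["4,228,1,830,-1,857,1,859,2"]))
    # {(0, 228): 1, (0, 830): -1, (0, 857): 1, (0, 859): 2}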
The first matrix is 945 X 1260. It has rank 875. The main diagonal has eight 3s. On my 400 mhz G3 Mac, Fermat takes 11 seconds and a few meg of RAM.
The second is 51975 X 13860. It has rank 12440. The main diagonal is all 1s. On my 400 mhz G3 Mac, Fermat takes 7 minutes 35 seconds and 47 meg of RAM. Download
a gzip file.
Let me know how it goes on your system!
Using Fermat, I computed GCD(Det[d1], Det[d2]) in all 18 parameters. On my 400 mhz G3 Mac, the determinants take 12 to 15 seconds each, and the GCD of two of them takes 78 seconds. The answer is homogeneous of total degree 12, has 21894 terms, and has very modest numerical coefficients. Altogether the system used about 85 meg of RAM in these computations.
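For comparison in a general-purpose system, the computation has the shape sketched below in Python/SymPy. The tiny matrices here are hypothetical stand-ins so that the sketch runs; the actual d1 and d2 are the parametric matrices in the 18 variables a1, ..., c6 from the downloadable problem files, whose determinant gcd is the degree-12, 21894-term polynomial described above.

    # Sketch of GCD(Det[d1], Det[d2]).  The matrices below are tiny stand-ins,
    # NOT the matrices from the test; they only illustrate the pattern.
    from sympy import Matrix, gcd, symbols

    a1, a2, b1, b2 = symbols('a1 a2 b1 b2')

    d1 = Matrix([[a1*b1, a1*b2],
                 [a2*b1, a2*b2 + a1*b1]])
    d2 = Matrix([[a1*b1, 0],
                 [a2*b2, a1*b1]])

    det1, det2 = d1.det(), d2.det()
    g = gcd(det1, det2)          # polynomial gcd in the symbolic parameters
    print(det1, det2, g, sep='\n')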
Update: with version 2.6.3 of Fermat (Oct 15, 2001), the GCD of the two determinants takes 58
seconds.
Update: with version 2.6.4 of Fermat (Jan 20, 2002), the GCD of the two determinants
takes 40 seconds.
Update: With version 3.0 (July 15, 2002) the GCD of the two determinants
takes 28 seconds.
Update: With version 3.1 (April 23, 2003) the GCD of the two determinants takes 10 seconds (400 mhz G3 Mac Fermat).
Update: With version 3.4 (November 2004) the GCD of the two determinants takes 1.86 seconds on the 1.6 ghz Dell Pentium 4 running Linux Fermat.
Update: With version 3.5.6 (June 1, 2005) the GCD of the two determinants takes 1.05 seconds on the 1.6 ghz Dell Pentium 4 running Linux Fermat.
Update: With version 3.6.0 (June 27, 2005) the GCD of the two determinants takes 0.95 seconds on the 1.6 ghz Dell Pentium 4 running Linux Fermat.
Note: the gcd time is affected a bit by the order in which the variables a1, a2, ..., c6 are declared. The time quoted here is for the alphabetical ordering and is a
middle value among those I tried.
Let me know how it goes on your system!
Thanks to Tony Hearn, I ran this test on Reduce on an 867 mhz G4 Mac in May 2003. Reduce was unable to compute any determinant in one hour. When I gave Reduce
the determinants computed by Fermat, Reduce found their gcd in the very good time of 6.1 secs. Fermat takes 3.6 secs (version 3.3). As of June 27, 2005 on Fermat
3.6.0, this is reduced to about 1.95 secs.
I ran this on 2 August 2005 on a 1.8 ghz Mac G5 running OSX with 2 gig of RAM (the only machine I have access to with both Fermat and Magma).
I used Magma 2.12-5. The first two determinants took 4.18 and 5.68 seconds. The gcd of the two took 5.52 seconds. All of this used 117 meg of RAM. With Fermat 3.6.4, the first two determinants took a total of 5.04 seconds. Their gcd took 1.10 seconds. All of this used 87.4 meg of RAM. These computations were done on a fresh reboot, with no other applications running.
Update September 2015: The determinants of the two matrices take a total of 1.05 seconds. The gcd takes 0.265 seconds. This is on a 2013 Mac mini.
With version 3.3 (April 30, 2004), the total time to read and simplify all three on a 1.6 ghz Dell running Linux is 3.89 secs. The numbers of terms in the three answers (numerator + denominator) are 967, 1026, and 1884.
On June 1, 2005 I changed the file slightly. The part before the first &x (which signals a break in Fermat's reading of the file) does most of the reading but none of the
significant computations (i.e., no divisions, no gcds). This takes about 0.43 seconds on the 1.6 ghz Dell running Linux. Typing &r again carries out the significant computations, which took 3.51 seconds on versions of Fermat up to 3.5. On version 3.5.6 (June 1, 2005) the time is reduced to 2.57 seconds.
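The &x / &r mechanics are Fermat-specific, but the underlying task — reduce a large rational expression to lowest terms and count the terms of its numerator and denominator — can be sketched in Python/SymPy roughly as follows; the small expression here is only a placeholder for the three large ones read from the file.

    # Sketch of the "significant computation": put a rational expression over a
    # common denominator in lowest terms, then count numerator + denominator terms.
    from sympy import Add, cancel, fraction, sympify

    def reduced_term_count(expr_text):
        reduced = cancel(sympify(expr_text))      # the divisions/gcds happen here
        num, den = fraction(reduced)
        terms = lambda e: len(Add.make_args(e.expand()))
        return terms(num) + terms(den)

    # Placeholder expression, NOT one of the three from the test file:
    print(reduced_term_count('(x**2 - y**2)/(x**2 + 2*x*y + y**2) + 1/(x + y)'))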
Update September 2015: The significant computations (which took 3.51 seconds in 2005) now take 0.204 seconds on a 2013 Mac mini.
Let me know how it goes on your system!
Update September 2015: The Fermat times are now between 0.89 and 1.3 seconds on a 2013 Mac mini.
Let me know how it goes on your system!