Hi,
I played a bit with different gcc versions on Alpha, and I noticed that when executing the FIB test from http://home.iae.nl/users/mhx/fib.html in paflof, results are _a lot_ slower with gcc 3.1.1 than with 2.96 (the SuSE 7.1 Alpha release). Not only is the 3.1.1 binary about 20% bigger, it is also 22% slower.
The test:

  HEX
  : FIB ( x -- y )
    RECURSIVE DUP 2 > IF DUP 1- RECURSE SWAP 2- RECURSE + EXIT THEN DROP 1 ;
  28 FIB BYE
The results:

  gcc 2.96 binary:   real 1m55.642s  user 1m55.479s  sys 0m0.014s
  gcc 3.1.1 binary:  real 2m21.763s  user 2m21.533s  sys 0m0.027s
Where does this come from? Is the code generated by gcc 3.x really that much slower than 2.96's? Do we make some assumptions that might cause this? (How badly do gotos break the compiler's flow analysis, for example?)
What can we do about this? Having more primitive words would speed the whole thing up, but it probably wouldn't make it any easier for the compiler to optimize.
Comments?
Stefan..