On Monday, April 29, 2002, at 02:03, Bob Showalter wrote:

> splice() will be slower as the size of the array grows. If I take
> your benchmark and change the array from 1..100 to 1..10000, I get
> the following results for 100 iterations (on an old Pentium 266):
>
> Benchmark: timing 100 iterations of consume, decLoop...
>    consume: 51 wallclock secs (50.72 usr + 0.03 sys = 50.75 CPU) @ 1.97/s (n=100)
>    decLoop: 19 wallclock secs (19.45 usr + 0.01 sys = 19.46 CPU) @ 5.14/s (n=100)

yeah - I started to think about that, since I am also looking at felix's fisher_yates_shuffle( \@array ) approach - which starts making more sense when your array starts growing into strings of stuff. Details when I update the benchmark test here in a bit, since clearly we want to start looking at larger bodies of data.

ciao
drieux
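For anyone following the thread: a minimal sketch of what the consume/decLoop pair being timed presumably looks like (the sub names are taken from the Benchmark output above; the actual test bodies aren't shown in this message). The point is that splice() off the front has to shift every remaining element, while a plain index walk doesn't:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(timethese);

# 1..10000, as in Bob's larger run
my @template = (1 .. 10_000);

timethese(100, {
    consume => sub {
        my @a = @template;
        while (@a) {
            # splice removes the front element and shifts
            # the rest down: O(n) work per removal
            my $item = splice(@a, 0, 1);
        }
    },
    decLoop => sub {
        my @a = @template;
        for (my $i = $#a; $i >= 0; $i--) {
            # plain index walk: O(1) work per element
            my $item = $a[$i];
        }
    },
});
```

That's why the gap widens as the array grows: consume is roughly O(n^2) overall, decLoop is O(n).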
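And for reference, the fisher_yates_shuffle( \@array ) being discussed is presumably the classic in-place shuffle along the lines of the perlfaq4 version - it also just walks an index down instead of splicing elements out:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# classic Fisher-Yates in-place shuffle, as in perlfaq4:
# swap each element with a randomly chosen earlier (or same) slot
sub fisher_yates_shuffle {
    my $array = shift;
    for (my $i = @$array; --$i; ) {
        my $j = int rand($i + 1);
        @$array[$i, $j] = @$array[$j, $i];
    }
}

my @deck = (1 .. 52);
fisher_yates_shuffle(\@deck);
print "@deck\n";
```

Each element gets touched once, so it should stay linear even when the array grows into big strings of stuff.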