Jarkko Hietaniemi wrote:

> The problem with all artificial benchmarks is that they are artificial.
> You can sell PCs with Hz and supercomputers with LINPACKs, but what
> matters in the real world is how fast real applications run. The
> problems with real applications are many: they are big, they depend on
> external software and real world input, which might again be big, or
> proprietary, or both.

Yup - which is why I have been using *my* real-world application as the basis of my testing. It is an XS wrapper around a new Solaris accounting mechanism; the XS code lets you read and process the accounting data files. It creates a slew of objects - about 25 for each record, and a file may contain 750,000 records - which is why the speed of object creation is so important to me.

It is very easy to write benchmarks that lead you into believing that edge cases are important. It is very easy to be misled by what you think you know. It is very easy to optimise for one edge case to the detriment of overall performance.

From looking at the profiling data, the only big hit left (apart from changing perl itself) was the chunk of time used by sigsetjmp, which is why I tried configuring it out. Syscalls are always expensive, not only in terms of the direct CPU time they consume, but also because they imply a rescheduling of the calling process. In terms of their impact on performance, they punch above their apparent weight if you just look at the CPU time they use.

Alan Burlison
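[As an illustration of the sigsetjmp cost discussed above: on most Unix systems sigsetjmp(env, 1) has to save the caller's signal mask, which typically means a sigprocmask() syscall per invocation, while sigsetjmp(env, 0) stays entirely in user space. The following is a minimal microbenchmark sketch, not taken from the thread; the iteration count and timing approach are arbitrary choices.]

    #define _POSIX_C_SOURCE 199309L
    #include <setjmp.h>
    #include <stdio.h>
    #include <time.h>

    #define ITERS 1000000

    /* Time ITERS calls of sigsetjmp() with the given savesigs flag.
       With savesigs != 0 the signal mask must be saved, which on most
       systems costs a sigprocmask() syscall on every call - the hit
       that shows up in profiles. We never siglongjmp(), so sigsetjmp()
       always returns 0 here. */
    static double time_sigsetjmp(int savesigs)
    {
        sigjmp_buf env;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ITERS; i++) {
            if (sigsetjmp(env, savesigs) != 0) {
                /* unreachable: no siglongjmp() is ever taken */
            }
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void)
    {
        printf("savesigs=0: %.3fs for %d calls\n", time_sigsetjmp(0), ITERS);
        printf("savesigs=1: %.3fs for %d calls\n", time_sigsetjmp(1), ITERS);
        return 0;
    }

The gap between the two figures is the per-call price of the signal-mask save, which is the overhead that configuring sigsetjmp out of the hot path avoids.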